This project considers the identification of parametric linear dynamic models using so-called prediction error methods. In prediction error identification, the parametric models that are identified on the basis of measurement data are usually accompanied by an indication of their reliability. Probabilistic confidence regions for the estimated parameters are generally used as an indication of this reliability (or precision). A 100(1-a)% confidence region is a region in the parameter space that attempts to "cover" the true parameter with probability (1-a). These regions are commonly constructed on the basis of prior information on the data-generating system and the noise disturbances acting on the measurement data.
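The coverage notion can be made concrete with a minimal Monte Carlo sketch for the simplest case, the mean of Gaussian data with known noise variance; all numerical values below are illustrative assumptions, not taken from the project description.

```python
import numpy as np

# Illustrative sketch: empirical coverage of the standard 95% interval
# theta_hat +/- z * sigma / sqrt(N) for a Gaussian mean.
# All numbers (theta_true, sigma, N, trial count) are assumptions.
rng = np.random.default_rng(1)
theta_true, sigma, N = 2.0, 1.0, 50
z = 1.96  # Gaussian quantile z_{0.975} for a = 0.05
trials, hits = 5000, 0
for _ in range(trials):
    x = theta_true + sigma * rng.standard_normal(N)
    theta_hat = x.mean()
    half_width = z * sigma / np.sqrt(N)
    hits += abs(theta_hat - theta_true) <= half_width
coverage = hits / trials
print(coverage)  # close to the nominal 0.95
```

Recording, over many data realizations, how often the region contains the true parameter is exactly the empirical check of the nominal 100(1-a)% coverage claim.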
Apart from its intrinsic importance in classical statistical parameter estimation, the need to quantify model uncertainty has lately become apparent in many other areas of model application. When identified models are used as a basis for model-based control, monitoring, simulation or any other model-based decision-making, robustness requirements impose additional constraints on the model uncertainty, which can be taken into account to guarantee robustness properties of the designed algorithms.
Different methods exist to construct confidence regions for the estimated parameters. Confidence regions are most commonly derived from the (asymptotic) statistical properties of the parameter estimator. Alternatively, the (asymptotic) statistics of the so-called Fisher score or of the likelihood ratio may be used as a basis for constructing confidence regions, and yet other methods may be derived. The goal of this project is to evaluate, validate and compare the reliability of different methods for constructing confidence regions (for finite data lengths) for various model structures used in prediction error identification (e.g., ARX, Output Error and Box-Jenkins). This may be done by means of well-chosen Monte Carlo simulation experiments, recording for each realization of the data whether the true parameter values are contained in the confidence region. An additional research question is whether diagnostic tools can be derived that indicate when the confidence regions considered may result in suboptimal inferences.
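A Monte Carlo experiment of the kind envisaged here could, for the ARX case, be sketched as follows. The sketch uses a first-order ARX system y(t) = a*y(t-1) + b*u(t-1) + e(t), a least-squares estimate, and the standard asymptotic ellipsoidal confidence region based on the estimator covariance; the system parameters, data length and number of realizations are all illustrative assumptions.

```python
import numpy as np

def coverage_experiment(n_mc=1000, N=200, theta0=(0.7, 1.0),
                        sigma=0.5, alpha=0.05, seed=0):
    """Empirical coverage of the asymptotic confidence ellipsoid for a
    least-squares ARX(1,1) estimate (illustrative sketch; the system
    y(t) = a*y(t-1) + b*u(t-1) + e(t) and all defaults are assumptions)."""
    rng = np.random.default_rng(seed)
    a0, b0 = theta0
    # chi-square(2) quantile at level 1-alpha has the closed form -2*ln(alpha)
    chi2_thresh = -2.0 * np.log(alpha)
    hits = 0
    for _ in range(n_mc):
        u = rng.standard_normal(N)            # white-noise input
        e = sigma * rng.standard_normal(N)    # white measurement noise
        y = np.zeros(N)
        for t in range(1, N):
            y[t] = a0 * y[t - 1] + b0 * u[t - 1] + e[t]
        Phi = np.column_stack([y[:-1], u[:-1]])   # regressor matrix
        Y = y[1:]
        theta_hat = np.linalg.lstsq(Phi, Y, rcond=None)[0]
        resid = Y - Phi @ theta_hat
        sigma2_hat = resid @ resid / (len(Y) - 2)  # noise variance estimate
        d = theta_hat - np.array(theta0)
        # ellipsoidal region: d' (Phi'Phi) d / sigma2_hat <= chi2 quantile
        stat = d @ (Phi.T @ Phi) @ d / sigma2_hat
        hits += stat <= chi2_thresh
    return hits / n_mc

print(coverage_experiment())  # empirical coverage, close to the nominal 0.95
```

Comparing such empirical coverage fractions against the nominal level 1-a, across model structures and region-construction methods, is precisely the kind of validation the project asks for; a systematic gap between the two would be one symptom the diagnostic tools should flag.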