On computational methods for nonlinear estimation
Abstract: The Bayesian approach provides a rather powerful framework for handling nonlinear, as well as linear, estimation problems. We can in fact pose a general solution to the nonlinear estimation problem. In the general case, however, no closed-form solution exists and we are forced to use approximate techniques. In this thesis we study one such technique, the sequential Monte Carlo method, commonly referred to as the particle filter. Some work on linear stochastic differential-algebraic equations and on constrained estimation using convex optimization is also presented.

The sequential Monte Carlo method offers a systematic framework for handling the estimation of nonlinear systems subject to non-Gaussian noise. Its main drawback is that it requires a great deal of computational power. We use the particle filter both for the nonlinear state estimation problem and for the nonlinear system identification problem. The details of the marginalized (Rao-Blackwellized) particle filter applied to a general nonlinear state-space model are also given.

General approaches to modeling, for instance using object-oriented software, lead to differential-algebraic equations. One of the topics of this thesis is the extension of standard Kalman filtering theory to the class of linear differential-algebraic equations, by showing how to incorporate white noise in equations of this type.

There is also a discussion of how to use convex optimization for solving the estimation problem. For linear state-space models with Gaussian noise, the Kalman filter computes the maximum a posteriori estimate. We interpret the Kalman filter as the solution to a convex optimization problem, and show that the maximum a posteriori state estimator can be generalized to any noise with a log-concave probability density function and to any combination of linear equality and convex inequality constraints.
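To make the sequential Monte Carlo idea concrete, the following is a minimal sketch of a bootstrap particle filter. The scalar model, noise variances, and particle count are illustrative assumptions chosen for the example (a common benchmark form), not details taken from the thesis itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(y, n_particles=500, q=1.0, r=1.0):
    """Bootstrap particle filter for the (assumed) toy scalar model
        x[t+1] = 0.5*x[t] + 25*x[t]/(1 + x[t]**2) + v[t],  v ~ N(0, q)
        y[t]   = x[t]**2 / 20 + e[t],                      e ~ N(0, r)
    Returns the posterior-mean state estimate at each time step."""
    T = len(y)
    x = rng.normal(0.0, 1.0, n_particles)        # initial particle cloud
    est = np.empty(T)
    for t in range(T):
        # importance weights from the measurement likelihood,
        # computed in log-domain for numerical stability
        logw = -0.5 * (y[t] - x**2 / 20.0) ** 2 / r
        w = np.exp(logw - logw.max())
        w /= w.sum()
        est[t] = np.dot(w, x)                    # weighted posterior mean
        # multinomial resampling, then propagate through the dynamics
        x = rng.choice(x, size=n_particles, p=w)
        x = 0.5 * x + 25 * x / (1 + x**2) \
            + rng.normal(0.0, np.sqrt(q), n_particles)
    return est
```

The resampling step is what keeps the particle cloud from degenerating onto a few heavily weighted particles; marginalized (Rao-Blackwellized) variants instead handle any conditionally linear-Gaussian substructure with a Kalman filter per particle.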
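The convex-optimization view of estimation can be sketched as follows: for a linear-Gaussian model the negative log-posterior of the state trajectory is quadratic, so the MAP estimate reduces to linear least squares. The scalar model and its parameters below are assumptions for illustration; for general log-concave noise, or with added convex constraints, the same objective would be handed to a convex solver instead of `lstsq`.

```python
import numpy as np

def map_trajectory(y, a=0.9, c=1.0, q=0.5, r=1.0, m0=0.0, p0=1.0):
    """MAP estimate of the state trajectory x[0..T-1] for the (assumed)
    scalar linear-Gaussian model
        x[t+1] = a*x[t] + v[t],  v ~ N(0, q)
        y[t]   = c*x[t] + e[t],  e ~ N(0, r),   x[0] ~ N(m0, p0).
    The quadratic negative log-posterior is stacked as ||A X - b||^2
    and solved in one shot by least squares."""
    T = len(y)
    rows, rhs = [], []
    # prior term on the initial state
    e0 = np.zeros(T); e0[0] = 1.0 / np.sqrt(p0)
    rows.append(e0); rhs.append(m0 / np.sqrt(p0))
    # one measurement term per time step
    for t in range(T):
        row = np.zeros(T); row[t] = c / np.sqrt(r)
        rows.append(row); rhs.append(y[t] / np.sqrt(r))
    # one dynamics term per transition
    for t in range(T - 1):
        row = np.zeros(T)
        row[t + 1] = 1.0 / np.sqrt(q)
        row[t] = -a / np.sqrt(q)
        rows.append(row); rhs.append(0.0)
    A, b = np.array(rows), np.array(rhs)
    x_map, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x_map
```

With Gaussian noise this recovers the Kalman smoother trajectory; replacing the quadratic penalties with other convex negative log-densities, or adding linear equality and convex inequality constraints on the states, keeps the problem convex and solvable by standard solvers.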
The whole dissertation is available for download in PDF format.