Selected Topics in Inertial and Visual Sensor Fusion: Calibration, Observability Analysis and Applications

Abstract: Recent improvements in inertial and visual sensors allow building small, lightweight, and cheap motion capture systems, which are becoming a standard feature of smartphones and personal digital assistants. This dissertation describes the development of new motion sensing strategies using inertial and inertial-visual sensors. The thesis contributions are presented in two parts. The first part focuses mainly on the use of inertial measurement units. First, the problem of sensor calibration is addressed, and a low-cost, accurate method to calibrate the accelerometer cluster of such a unit is proposed. The method is based on the maximum likelihood estimation framework, which yields a minimum variance unbiased estimator. Then, using the inertial measurement unit, a probabilistic user-independent method is proposed for pedestrian activity classification and gait analysis. The work targets two groups of applications: human activity classification, and joint human activity and gait-phase classification. The developed methods are based on continuous hidden Markov models. The relative figures of merit achieved on the collected data validate the reliability of the proposed methods for the desired applications. In the second part, the problem of inertial and visual sensor fusion is studied. This part describes the contributions related to sensor calibration, motion estimation, and observability analysis. The proposed visual-inertial schemes can be divided into three systems. For each system, an estimation approach is proposed and its observability properties are analyzed. Moreover, the performance of the proposed methods is illustrated using both simulations and experimental data. Firstly, a novel calibration scheme is proposed to estimate the relative transformation between the inertial and visual sensors, which are rigidly mounted together.
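To illustrate the maximum-likelihood idea behind such an accelerometer calibration, here is a minimal sketch (the per-axis gain-and-bias model, parameter values, and solver below are assumptions for illustration, not the dissertation's actual method): in any static orientation the calibrated specific force must have the magnitude of gravity, and the unknown gains and biases can be fitted to that constraint with Gauss-Newton iterations.

```python
import numpy as np

rng = np.random.default_rng(0)
g = 9.81

# Hypothetical ground truth: per-axis calibration gains and biases.
k_true = np.array([0.98, 1.02, 1.01])
b_true = np.array([0.05, -0.03, 0.02])

# Simulate static measurements in N random orientations: the calibrated
# output k * (a - b) must have norm g, so invert that model for raw data.
N = 30
dirs = rng.normal(size=(N, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
f = g * dirs                              # true specific force
a = f / k_true + b_true                   # raw accelerometer readings
a += 0.005 * rng.normal(size=a.shape)     # measurement noise

# Gauss-Newton on theta = [k, b] with residual r_n = ||k*(a_n - b)|| - g.
theta = np.concatenate([np.ones(3), np.zeros(3)])
for _ in range(50):
    k, b = theta[:3], theta[3:]
    v = k * (a - b)                                # (N, 3)
    norms = np.linalg.norm(v, axis=1)
    r = norms - g
    J = np.zeros((N, 6))
    J[:, :3] = v * (a - b) / norms[:, None]        # d r / d k
    J[:, 3:] = -v * k / norms[:, None]             # d r / d b
    theta -= np.linalg.solve(J.T @ J, J.T @ r)

print("gains:", theta[:3], "biases:", theta[3:])
```

Under Gaussian noise, minimizing this sum of squared residuals coincides with maximum likelihood estimation; multiple well-spread orientations are needed so that both the ellipsoid center (biases) and axes (gains) are determined.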
The main advantage of the developed method is that the calibration is performed using a planar mirror instead of a calibration pattern. The observability analysis for this system proves that the calibration parameters are observable. Moreover, the achieved results show subcentimeter and subdegree accuracy for the calibration parameters. Secondly, an ego-motion estimation approach is introduced that is based on horizontal plane features, where the camera is restricted to be downward looking. The observability properties of this system are then analyzed when only one feature point is used. In particular, it is proved that the system has only three unobservable directions, corresponding to global translations parallel to the horizontal plane and rotations around the gravity vector. Hence, compared to general visual-inertial navigation systems, an advantage of the proposed system is that the vertical translation becomes observable. Finally, a 6-DoF positioning system is developed based on using only planar features on a desired horizontal plane. Compared to the previous approach, the restriction of using a downward looking camera is relaxed, while the observability properties of the system are preserved. The achieved results indicate promising accuracy and reliability of the proposed algorithm and validate the findings of the theoretical analysis for 6-DoF motion estimation. The proposed motion estimation approach is then extended with a new planar feature detection method, yielding a complete positioning approach that simultaneously performs 6-DoF motion estimation and horizontal plane feature detection.
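The claimed unobservable directions can be checked numerically on a toy bearing-only model (a sketch with an assumed pose parameterization and measurement, not the dissertation's estimator): translating the whole scene horizontally or rotating it about the gravity axis leaves the camera's bearing measurement unchanged while keeping the feature on the horizontal plane, whereas a vertical shift moves the feature off the plane and is therefore excluded, which is why vertical translation becomes observable.

```python
import numpy as np

def bearing(R, p, f):
    """Unit bearing to feature f, expressed in the camera frame."""
    v = R.T @ (f - p)
    return v / np.linalg.norm(v)

def Rz(a):
    """Rotation about the gravity (z) axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

p = np.array([0.3, -0.2, 1.5])   # camera position above the plane
R = Rz(0.4)                      # camera orientation (illustrative)
f = np.array([1.0, 0.5, 0.0])    # feature on the horizontal plane z = 0

# Candidate unobservable motion: yaw about gravity plus a horizontal
# shift, applied to camera and feature alike.
G, t = Rz(0.7), np.array([2.0, -1.0, 0.0])
p2, R2, f2 = G @ p + t, G @ R, G @ f + t

print(np.allclose(bearing(R, p, f), bearing(R2, p2, f2)))  # True
print(abs(f2[2]) < 1e-12)  # feature still on the plane: True

# A vertical shift would also preserve the bearing, but it pushes the
# feature off the plane, violating the planar constraint.
tv = np.array([0.0, 0.0, 0.5])
print(abs((f + tv)[2]) < 1e-12)  # False
```

The invariance follows algebraically: R2.T @ (f2 - p2) = R.T @ G.T @ G @ (f - p) = R.T @ (f - p), so exactly the three motions named in the analysis (two horizontal translations and one rotation about gravity) leave every measurement unchanged.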
