Tightly-coupled stereo vision-aided inertial navigation using feature-based motion sensors

E. Asadi, C. L. Bottasso

Research output: Contribution to journal › Article › peer-review


Abstract

A tightly-coupled stereo vision-aided inertial navigation system is proposed in this work, as a synergistic incorporation of vision with other sensors. To avoid the loss of information that can result from visual preprocessing, a set of feature-based motion sensors and an inertial measurement unit are directly fused together to estimate the vehicle state. Two alternative feature-based observation models are considered within the proposed fusion architecture. The first model uses the trifocal tensor to propagate feature points by homography, so as to express geometric constraints among three consecutive scenes. The second one is derived by applying a rigid body motion model to three-dimensional (3D) reconstructed feature points. A kinematic model accounts for the vehicle motion, and a Sigma-Point Kalman filter is used to achieve robust state estimation in the presence of non-linearities. The proposed formulation is derived for a general platform-independent 3D problem, and it is tested and demonstrated on a real dynamic indoor data-set alongside a simulation experiment. Results show improved estimates compared with a classical visual odometry approach and with a loosely-coupled stereo vision-aided inertial navigation system, even in GPS (Global Positioning System)-denied conditions and when magnetometer measurements are not reliable.
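To illustrate the estimation machinery the abstract refers to, the following is a minimal, generic sketch of one predict/update cycle of a sigma-point (unscented) Kalman filter, of the kind used to fuse a non-linear kinematic model with non-linear feature-based observations. This is not the authors' exact formulation: the function names, the scaling parameters, and the simple scalar models in the usage example are illustrative assumptions.

```python
import numpy as np

def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Generate 2n+1 sigma points and their weights for mean x, covariance P."""
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * P)          # matrix square root of scaled P
    pts = np.vstack([x, x + L.T, x - L.T])         # rows are the sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))  # mean weights
    wc = wm.copy()                                  # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    return pts, wm, wc

def ukf_step(x, P, z, f, h, Q, R):
    """One predict/update cycle: kinematic model f, observation model h."""
    # Predict: push sigma points through the (possibly non-linear) motion model.
    pts, wm, wc = sigma_points(x, P)
    Xp = np.array([f(p) for p in pts])
    xp = wm @ Xp
    Pp = Q + sum(w * np.outer(d, d) for w, d in zip(wc, Xp - xp))
    # Update: redraw sigma points and map them through the observation model.
    pts, wm, wc = sigma_points(xp, Pp)
    Z = np.array([h(p) for p in pts])
    zp = wm @ Z
    Pzz = R + sum(w * np.outer(d, d) for w, d in zip(wc, Z - zp))
    Pxz = sum(w * np.outer(dx, dz) for w, dx, dz in zip(wc, pts - xp, Z - zp))
    K = Pxz @ np.linalg.inv(Pzz)                   # Kalman gain
    return xp + K @ (z - zp), Pp - K @ Pzz @ K.T
```

In a tightly-coupled architecture such as the one described above, `h` would map the vehicle state directly to raw feature-level quantities (e.g. trifocal-constraint residuals or reconstructed 3D point positions) rather than to a preprocessed pose estimate, which is what distinguishes it from a loosely-coupled design.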

Original language: English
Pages (from-to): 717-729
Number of pages: 13
Journal: Advanced Robotics
Volume: 28
Issue number: 11
DOIs
State: Published - 2014

Keywords

  • Sensor fusion
  • Tight-coupling
  • Trifocal constraint
  • Vision-aided inertial navigation
