TY - CONF
T1 - Direct visual-inertial odometry with stereo cameras
AU - Usenko, Vladyslav
AU - Engel, Jakob
AU - Stückler, Jörg
AU - Cremers, Daniel
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2016/6/8
Y1 - 2016/6/8
N2 - We propose a novel direct visual-inertial odometry method for stereo cameras. Camera pose, velocity and IMU biases are simultaneously estimated by minimizing a combined photometric and inertial energy functional. This allows us to exploit the complementary nature of vision and inertial data. At the same time, and in contrast to all existing visual-inertial methods, our approach is fully direct: geometry is estimated in the form of semi-dense depth maps instead of manually designed sparse keypoints. Depth information is obtained both from static stereo - relating the fixed-baseline images of the stereo camera - and from temporal stereo - relating images from the same camera taken at different points in time. We show that our method not only outperforms vision-only and loosely coupled approaches, but also achieves more accurate results than state-of-the-art keypoint-based methods on several datasets, including sequences with rapid motion and significant illumination changes. In addition, our method provides high-fidelity semi-dense, metric reconstructions of the environment and runs in real-time on a CPU.
AB - We propose a novel direct visual-inertial odometry method for stereo cameras. Camera pose, velocity and IMU biases are simultaneously estimated by minimizing a combined photometric and inertial energy functional. This allows us to exploit the complementary nature of vision and inertial data. At the same time, and in contrast to all existing visual-inertial methods, our approach is fully direct: geometry is estimated in the form of semi-dense depth maps instead of manually designed sparse keypoints. Depth information is obtained both from static stereo - relating the fixed-baseline images of the stereo camera - and from temporal stereo - relating images from the same camera taken at different points in time. We show that our method not only outperforms vision-only and loosely coupled approaches, but also achieves more accurate results than state-of-the-art keypoint-based methods on several datasets, including sequences with rapid motion and significant illumination changes. In addition, our method provides high-fidelity semi-dense, metric reconstructions of the environment and runs in real-time on a CPU.
UR - http://www.scopus.com/inward/record.url?scp=84977590728&partnerID=8YFLogxK
U2 - 10.1109/ICRA.2016.7487335
DO - 10.1109/ICRA.2016.7487335
M3 - Conference contribution
AN - SCOPUS:84977590728
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 1885
EP - 1892
BT - 2016 IEEE International Conference on Robotics and Automation, ICRA 2016
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2016 IEEE International Conference on Robotics and Automation, ICRA 2016
Y2 - 16 May 2016 through 21 May 2016
ER -