TY - JOUR
T1 - Direct Sparse Odometry
AU - Engel, Jakob
AU - Koltun, Vladlen
AU - Cremers, Daniel
N1 - Publisher Copyright:
© 1979-2012 IEEE.
PY - 2018/3/1
Y1 - 2018/3/1
N2 - Direct Sparse Odometry (DSO) is a visual odometry method based on a novel, highly accurate sparse and direct structure and motion formulation. It combines a fully direct probabilistic model (minimizing a photometric error) with consistent, joint optimization of all model parameters, including geometry (represented as inverse depth in a reference frame) and camera motion. This is achieved in real time by omitting the smoothness prior used in other direct methods and instead sampling pixels evenly throughout the images. Since our method does not depend on keypoint detectors or descriptors, it can naturally sample pixels from across all image regions that have intensity gradient, including edges or smooth intensity variations on essentially featureless walls. The proposed model integrates a full photometric calibration, accounting for exposure time, lens vignetting, and non-linear response functions. We thoroughly evaluate our method on three different datasets comprising several hours of video. The experiments show that the presented approach significantly outperforms state-of-the-art direct and indirect methods in a variety of real-world settings, both in terms of tracking accuracy and robustness.
AB - Direct Sparse Odometry (DSO) is a visual odometry method based on a novel, highly accurate sparse and direct structure and motion formulation. It combines a fully direct probabilistic model (minimizing a photometric error) with consistent, joint optimization of all model parameters, including geometry (represented as inverse depth in a reference frame) and camera motion. This is achieved in real time by omitting the smoothness prior used in other direct methods and instead sampling pixels evenly throughout the images. Since our method does not depend on keypoint detectors or descriptors, it can naturally sample pixels from across all image regions that have intensity gradient, including edges or smooth intensity variations on essentially featureless walls. The proposed model integrates a full photometric calibration, accounting for exposure time, lens vignetting, and non-linear response functions. We thoroughly evaluate our method on three different datasets comprising several hours of video. The experiments show that the presented approach significantly outperforms state-of-the-art direct and indirect methods in a variety of real-world settings, both in terms of tracking accuracy and robustness.
KW - 3D reconstruction
KW - SLAM
KW - Visual odometry
KW - structure from motion
UR - http://www.scopus.com/inward/record.url?scp=85041956460&partnerID=8YFLogxK
U2 - 10.1109/TPAMI.2017.2658577
DO - 10.1109/TPAMI.2017.2658577
M3 - Article
C2 - 28422651
AN - SCOPUS:85041956460
SN - 0162-8828
VL - 40
SP - 611
EP - 625
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
JF - IEEE Transactions on Pattern Analysis and Machine Intelligence
IS - 3
M1 - 7898369
ER -