TY - GEN
T1 - Event-based 3D SLAM with a depth-augmented dynamic vision sensor
AU - Weikersdorfer, David
AU - Adrian, David B.
AU - Cremers, Daniel
AU - Conradt, Jörg
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2014/9/22
Y1 - 2014/9/22
N2 - We present the D-eDVS, a combined event-based 3D sensor, and a novel event-based full-3D simultaneous localization and mapping algorithm that works exclusively with the sparse stream of visual data provided by the D-eDVS. The D-eDVS is a combination of the established PrimeSense RGB-D sensor and a biologically inspired embedded dynamic vision sensor. Dynamic vision sensors react only to dynamic contrast changes and output data in the form of a sparse stream of events, each representing an individual pixel location. We demonstrate how an event-based dynamic vision sensor can be fused with a classic frame-based RGB-D sensor to produce a sparse stream of depth-augmented 3D points. The advantages of a sparse, event-based stream are a much smaller amount of generated data, and thus more efficient resource usage, and a continuous representation of motion that allows lag-free tracking. Our event-based SLAM algorithm is highly efficient: it runs 20 times faster than real time, provides localization updates at several hundred Hertz, and produces excellent results. We compare our method against ground truth from an external tracking system and against two state-of-the-art algorithms on a new dataset, which we release together with this paper.
AB - We present the D-eDVS, a combined event-based 3D sensor, and a novel event-based full-3D simultaneous localization and mapping algorithm that works exclusively with the sparse stream of visual data provided by the D-eDVS. The D-eDVS is a combination of the established PrimeSense RGB-D sensor and a biologically inspired embedded dynamic vision sensor. Dynamic vision sensors react only to dynamic contrast changes and output data in the form of a sparse stream of events, each representing an individual pixel location. We demonstrate how an event-based dynamic vision sensor can be fused with a classic frame-based RGB-D sensor to produce a sparse stream of depth-augmented 3D points. The advantages of a sparse, event-based stream are a much smaller amount of generated data, and thus more efficient resource usage, and a continuous representation of motion that allows lag-free tracking. Our event-based SLAM algorithm is highly efficient: it runs 20 times faster than real time, provides localization updates at several hundred Hertz, and produces excellent results. We compare our method against ground truth from an external tracking system and against two state-of-the-art algorithms on a new dataset, which we release together with this paper.
UR - http://www.scopus.com/inward/record.url?scp=84929224896&partnerID=8YFLogxK
U2 - 10.1109/ICRA.2014.6906882
DO - 10.1109/ICRA.2014.6906882
M3 - Conference contribution
AN - SCOPUS:84929224896
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 359
EP - 364
BT - Proceedings - IEEE International Conference on Robotics and Automation
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2014 IEEE International Conference on Robotics and Automation, ICRA 2014
Y2 - 31 May 2014 through 7 June 2014
ER -