TY - GEN
T1 - StaticFusion: Background Reconstruction for Dense RGB-D SLAM in Dynamic Environments
T2 - 2018 IEEE International Conference on Robotics and Automation, ICRA 2018
AU - Scona, Raluca
AU - Jaimez, Mariano
AU - Petillot, Yvan R.
AU - Fallon, Maurice
AU - Cremers, Daniel
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/9/10
Y1 - 2018/9/10
N2 - Dynamic environments are challenging for visual SLAM, as moving objects can impair camera pose tracking and cause corruptions to be integrated into the map. In this paper, we propose a method for robust dense RGB-D SLAM in dynamic environments that detects moving objects and simultaneously reconstructs the background structure. While most methods employ implicit robust penalisers or outlier filtering techniques to handle moving objects, our approach is to simultaneously estimate the camera motion and a probabilistic static/dynamic segmentation of the current RGB-D image pair. This segmentation is then used for weighted dense RGB-D fusion to estimate a 3D model of only the static parts of the environment. By leveraging the 3D model for frame-to-model alignment, as well as the static/dynamic segmentation, camera motion estimation achieves reduced overall drift and greater robustness to the presence of dynamics in the scene. Demonstrations are presented which compare the proposed method to related state-of-the-art approaches on both static and dynamic sequences. The proposed method achieves similar performance in static environments and improved accuracy and robustness in dynamic scenes.
AB - Dynamic environments are challenging for visual SLAM, as moving objects can impair camera pose tracking and cause corruptions to be integrated into the map. In this paper, we propose a method for robust dense RGB-D SLAM in dynamic environments that detects moving objects and simultaneously reconstructs the background structure. While most methods employ implicit robust penalisers or outlier filtering techniques to handle moving objects, our approach is to simultaneously estimate the camera motion and a probabilistic static/dynamic segmentation of the current RGB-D image pair. This segmentation is then used for weighted dense RGB-D fusion to estimate a 3D model of only the static parts of the environment. By leveraging the 3D model for frame-to-model alignment, as well as the static/dynamic segmentation, camera motion estimation achieves reduced overall drift and greater robustness to the presence of dynamics in the scene. Demonstrations are presented which compare the proposed method to related state-of-the-art approaches on both static and dynamic sequences. The proposed method achieves similar performance in static environments and improved accuracy and robustness in dynamic scenes.
UR - https://www.scopus.com/pages/publications/85062263525
U2 - 10.1109/ICRA.2018.8460681
DO - 10.1109/ICRA.2018.8460681
M3 - Conference contribution
AN - SCOPUS:85062263525
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 3849
EP - 3856
BT - 2018 IEEE International Conference on Robotics and Automation, ICRA 2018
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 21 May 2018 through 25 May 2018
ER -