TY - JOUR
T1 - Volumetric occupancy mapping with probabilistic depth completion for robotic navigation
AU - Popovic, Marija
AU - Thomas, Florian
AU - Papatheodorou, Sotiris
AU - Funk, Nils
AU - Vidal-Calleja, Teresa
AU - Leutenegger, Stefan
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/7
Y1 - 2021/7
N2 - In robotic applications, a key requirement for safe and efficient motion planning is the ability to map obstacle-free space in unknown, cluttered 3D environments. However, commodity-grade RGB-D cameras commonly used for sensing fail to register valid depth values on shiny, glossy, bright, or distant surfaces, leading to missing data in the map. To address this issue, we propose a framework leveraging probabilistic depth completion as an additional input for spatial mapping. We introduce a deep learning architecture providing uncertainty estimates for the depth completion of RGB-D images. Our pipeline exploits the inferred missing depth values and depth uncertainty to complement raw depth images and improve the speed and quality of free space mapping. Evaluations on synthetic data show that our approach maps significantly more correct free space with relatively low error when compared against using raw data alone in different indoor environments, thereby producing more complete maps that can be directly used for robotic navigation tasks. The performance of our framework is validated using real-world data.
AB - In robotic applications, a key requirement for safe and efficient motion planning is the ability to map obstacle-free space in unknown, cluttered 3D environments. However, commodity-grade RGB-D cameras commonly used for sensing fail to register valid depth values on shiny, glossy, bright, or distant surfaces, leading to missing data in the map. To address this issue, we propose a framework leveraging probabilistic depth completion as an additional input for spatial mapping. We introduce a deep learning architecture providing uncertainty estimates for the depth completion of RGB-D images. Our pipeline exploits the inferred missing depth values and depth uncertainty to complement raw depth images and improve the speed and quality of free space mapping. Evaluations on synthetic data show that our approach maps significantly more correct free space with relatively low error when compared against using raw data alone in different indoor environments, thereby producing more complete maps that can be directly used for robotic navigation tasks. The performance of our framework is validated using real-world data.
KW - Computer vision
KW - Machine learning
KW - Mobile robots
KW - Simultaneous localisation and mapping
UR - http://www.scopus.com/inward/record.url?scp=85103784977&partnerID=8YFLogxK
U2 - 10.1109/LRA.2021.3070308
DO - 10.1109/LRA.2021.3070308
M3 - Article
AN - SCOPUS:85103784977
SN - 2377-3766
VL - 6
SP - 5072
EP - 5079
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 3
M1 - 9392300
ER -