TY - GEN
T1 - Lightweight Semantic Mesh Mapping for Autonomous Vehicles
AU - Herb, Markus
AU - Weiherer, Tobias
AU - Navab, Nassir
AU - Tombari, Federico
N1 - Publisher Copyright:
© 2021 IEEE
PY - 2021
Y1 - 2021
N2 - Lightweight and semantically meaningful environment maps are crucial for many applications in robotics and autonomous driving to facilitate higher-level tasks such as navigation and planning. In this paper, we present a novel approach to incrementally build a meaningful and lightweight semantic map directly as a 3D mesh from a monocular or stereo sequence. Our system leverages existing feature-based visual odometry paired with learned depth prediction and semantic image segmentation to identify and reconstruct semantically relevant environment structure. We introduce a probabilistic fusion scheme to incrementally refine and extend a 3D mesh with semantic labels for each face, without intermediate voxel-based fusion. To demonstrate its effectiveness, we evaluate our system in outdoor driving scenarios with both monocular depth prediction and stereo depth, and present quantitative and qualitative reconstruction results with comparison to ground truth. Our results show that the proposed approach achieves reconstruction quality comparable to current state-of-the-art voxel-based methods while being much more lightweight in both storage and computation.
UR - http://www.scopus.com/inward/record.url?scp=85125438762&partnerID=8YFLogxK
U2 - 10.1109/ICRA48506.2021.9560996
DO - 10.1109/ICRA48506.2021.9560996
M3 - Conference contribution
AN - SCOPUS:85125438762
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 6732
EP - 6738
BT - 2021 IEEE International Conference on Robotics and Automation, ICRA 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2021 IEEE International Conference on Robotics and Automation, ICRA 2021
Y2 - 30 May 2021 through 5 June 2021
ER -