TY - CONF
T1 - LiDAR View Synthesis for Robust Vehicle Navigation Without Expert Labels
AU - Schmidt, Jonathan
AU - Khan, Qadeer
AU - Cremers, Daniel
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
AB - Deep learning models for self-driving cars require a diverse training dataset to manage critical driving scenarios on public roads safely. This includes data from divergent trajectories, such as the oncoming traffic lane or sidewalks, which would be too dangerous to collect in the real world. Data augmentation approaches using RGB images have been proposed to tackle this issue, but solutions based on LiDAR sensors are scarce. We therefore propose synthesizing additional LiDAR point clouds from novel viewpoints without physically driving at dangerous positions. The LiDAR view synthesis is done using mesh reconstruction and ray casting. We train a deep learning model that takes a LiDAR scan as input and predicts the future trajectory as output. A waypoint controller is then applied to this predicted trajectory to determine the throttle and steering labels of the ego-vehicle. Our method requires expert driving labels neither for the original nor for the synthesized LiDAR sequences; instead, we infer labels from LiDAR odometry. We demonstrate the effectiveness of our approach in a comprehensive online evaluation and in a comparison to concurrent work. Our results show the importance of synthesizing additional LiDAR point clouds, particularly in terms of model robustness. Code and supplementary visualizations are available at: https://jonathsch.github.io/lidar-synthesis/
UR - http://www.scopus.com/inward/record.url?scp=85186503916&partnerID=8YFLogxK
DO - 10.1109/ITSC57777.2023.10422174
M3 - Conference contribution
AN - SCOPUS:85186503916
T3 - IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC
SP - 2835
EP - 2842
BT - 2023 IEEE 26th International Conference on Intelligent Transportation Systems, ITSC 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 26th IEEE International Conference on Intelligent Transportation Systems, ITSC 2023
Y2 - 24 September 2023 through 28 September 2023
ER -