Abstract
Lidar has become an important component of perception systems in autonomous driving, but the challenges of training data acquisition and annotation have emphasized the role of sensor-to-sensor domain adaptation. In this letter, we address the problem of lidar upsampling. Learning on lidar point clouds is a challenging task due to their irregular and sparse structure. Here we propose a method for lidar point cloud upsampling that can reconstruct fine-grained lidar scan patterns. The key idea is to utilize edge-aware dense convolutions for both feature extraction and feature expansion. Additionally, applying a more accurate Sliced Wasserstein Distance facilitates learning of the fine lidar sweep structures. This, in turn, enables our method to employ a one-stage upsampling paradigm without the need for separate coarse and fine reconstruction. We conduct several experiments to evaluate our method and demonstrate that it provides better upsampling results.
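The abstract's Sliced Wasserstein Distance refers to the standard sliced optimal-transport loss between point sets: both clouds are projected onto random 1-D directions, the projections are sorted, and the matched distances are averaged. The following is a minimal sketch of that general idea in PyTorch, not the authors' implementation; the function name, the number of projections, and the squared-distance cost are illustrative assumptions.

```python
import torch

def sliced_wasserstein_distance(p: torch.Tensor, q: torch.Tensor,
                                num_projections: int = 128) -> torch.Tensor:
    """Sketch of a sliced Wasserstein loss between two point clouds.

    p, q: (N, 3) tensors with the same number of points N.
    """
    # Sample random unit directions on the sphere.
    dirs = torch.randn(num_projections, p.shape[1], device=p.device)
    dirs = dirs / dirs.norm(dim=1, keepdim=True)

    # Project both clouds onto each direction -> (num_projections, N).
    proj_p = dirs @ p.T
    proj_q = dirs @ q.T

    # In 1-D the optimal transport matching is obtained by sorting.
    proj_p, _ = torch.sort(proj_p, dim=1)
    proj_q, _ = torch.sort(proj_q, dim=1)

    # Average squared distance between matched (sorted) projections.
    return ((proj_p - proj_q) ** 2).mean()
```

Because the 1-D matching is exact, a loss of this form can penalize deviations from the fine ring/sweep structure of a lidar scan more sharply than a nearest-neighbor objective such as Chamfer distance, which is the motivation stated in the abstract.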
| Original language | English |
| --- | --- |
| Pages (from-to) | 392-399 |
| Number of pages | 8 |
| Journal | IEEE Robotics and Automation Letters |
| Volume | 8 |
| Issue number | 1 |
| DOIs | |
| State | Published - 1 Jan 2023 |
Keywords
- Autonomous vehicle navigation
- deep learning for visual perception
- lidar
- transfer learning