Abstract
Direct SLAM methods have shown exceptional performance on odometry tasks. However, they are susceptible to dynamic lighting and weather changes, and they suffer from poor initialization on large baselines. To overcome this, we propose GN-Net: a network optimized with the novel Gauss-Newton loss for training weather-invariant deep features, tailored for direct image alignment. Our network can be trained with pixel correspondences between images taken from different sequences. Experiments on both simulated and real-world datasets demonstrate that our approach is more robust against poor initialization, variations in daytime, and weather changes, thereby outperforming state-of-the-art direct and indirect methods. Furthermore, we release an evaluation benchmark for relocalization tracking under different weather conditions. Our benchmark is available at https://vision.in.tum.de/gn-net.
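The abstract builds on direct image alignment via Gauss-Newton optimization. As a minimal, hypothetical sketch of that underlying technique (not the paper's GN-Net loss or feature network), the example below recovers an unknown translation between a 1-D reference "image" and a shifted copy by iterating the classic Gauss-Newton update on photometric residuals; all function names here are illustrative:

```python
import numpy as np

def sample(signal, xs):
    """Linearly interpolate the 1-D signal at (possibly fractional) positions."""
    return np.interp(xs, np.arange(len(signal)), signal)

def gauss_newton_shift(ref, target, shift0=0.0, iters=20):
    """Estimate translation t such that ref(x + t) ~= target(x)."""
    t = shift0
    xs = np.arange(len(ref), dtype=float)
    for _ in range(iters):
        warped = sample(ref, xs + t)   # warp reference by current estimate
        r = warped - target            # photometric residuals
        J = np.gradient(warped)        # d(residual)/dt = warped image gradient
        # Gauss-Newton normal equation (scalar parameter): (J^T J) dt = -J^T r
        dt = -(J @ r) / (J @ J)
        t += dt
        if abs(dt) < 1e-8:
            break
    return t

# Usage: recover a known sub-pixel shift of a smooth signal.
xs = np.linspace(0.0, 4.0 * np.pi, 200)
ref = np.sin(xs)
true_shift = 1.5  # in samples
target = sample(ref, np.arange(len(ref), dtype=float) + true_shift)
est = gauss_newton_shift(ref, target)
```

Direct methods apply exactly this linearize-and-solve loop, but over 2-D images and 6-DoF camera poses; GN-Net's contribution is learning features on which such alignment stays robust across weather and lighting changes.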
Original language | English |
---|---|
Article number | 8954808 |
Pages (from-to) | 890-897 |
Number of pages | 8 |
Journal | IEEE Robotics and Automation Letters |
Volume | 5 |
Issue number | 2 |
DOIs | |
State | Published - Apr 2020 |
Keywords
- Localization
- SLAM
- visual learning