Abstract
This work takes a large step toward modern, fast 3D reconstruction from RGB camera images. Using a Microsoft HoloLens 2 as a multi-sensor platform that combines an RGB camera with an inertial measurement unit for SLAM-based camera-pose determination, we train a Neural Radiance Field (NeRF) as a neural scene representation in real time on the data acquired from the HoloLens. The HoloLens is connected via Wi-Fi to a high-performance PC that performs the training and the 3D reconstruction. When the data stream ends, training stops and the 3D reconstruction is initiated, extracting a point cloud of the scene. With our specialized inference algorithm, five million scene points can be extracted within one second, and the point cloud additionally includes per-point radiometry. Our 3D reconstruction method outperforms grid point sampling with NeRFs by multiple orders of magnitude and can be regarded as a complete real-time 3D reconstruction method in a mobile mapping setup.
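To make the comparison in the abstract concrete, the sketch below shows the naive grid-sampling baseline that the authors' specialized inference algorithm is said to outperform: evaluate the trained radiance field on a dense voxel grid and keep points whose density exceeds a threshold, together with their colors (the per-point radiometry). The field itself is a toy stand-in here; `query_field`, `extract_point_cloud`, and all parameters are hypothetical names for illustration, not the paper's implementation.

```python
import numpy as np

def query_field(xyz):
    """Toy stand-in for a trained NeRF MLP (hypothetical): returns a
    per-point density sigma and an RGB color for query points xyz (N, 3)."""
    r = np.linalg.norm(xyz, axis=1)
    sigma = np.where(r < 0.5, 10.0, 0.0)        # a sphere is "occupied"
    rgb = np.clip(xyz * 0.5 + 0.5, 0.0, 1.0)    # position-derived color
    return sigma, rgb

def extract_point_cloud(resolution=32, sigma_thresh=1.0):
    """Grid-sampling baseline: evaluate the field on a dense voxel grid
    and keep points whose density exceeds a threshold. This is the slow
    reference approach, not the paper's specialized inference algorithm."""
    lin = np.linspace(-1.0, 1.0, resolution)
    grid = np.stack(np.meshgrid(lin, lin, lin, indexing="ij"), axis=-1)
    xyz = grid.reshape(-1, 3)                   # resolution**3 query points
    sigma, rgb = query_field(xyz)
    keep = sigma > sigma_thresh
    return xyz[keep], rgb[keep]                 # radiometric point cloud

pts, cols = extract_point_cloud()
```

The cubic growth of query count with grid resolution is what makes this baseline expensive: the field network must be evaluated at every voxel, occupied or empty, which is where an inference scheme that concentrates samples on the surface gains its orders-of-magnitude speedup.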
| Original language | English |
| --- | --- |
| Pages (from-to) | 167-174 |
| Number of pages | 8 |
| Journal | International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives |
| Volume | 48 |
| Issue number | 1/W1-2023 |
| DOIs | |
| State | Published - 25 May 2023 |
| Externally published | Yes |
| Event | 12th International Symposium on Mobile Mapping Technology, MMT 2023, Padua, Italy, 24 May 2023 – 26 May 2023 |
Keywords
- Fast 3D Reconstruction
- HoloLens
- Machine Vision
- Mobile Mapping
- Neural Radiance Fields
- Real-Time