Abstract
Recently, neural signed distance fields have become increasingly popular for reconstructing 3D indoor environments. While they have brought great improvements, current methods still cannot reconstruct high-quality surfaces, because incident radiance and materials are missing from the surface estimation. To address this issue, we propose Neural Painted Radiosity Fields (NPRF), consisting of Neural Radiosity Fields for volumetric surface representation and Neural Painted Scenes for novel view synthesis. Neural Radiosity Fields combine the radiative transfer equation with neural radiosity to estimate 3D surfaces, leveraging ray tracing to improve the volumetric representation. Neural Painted Scenes employs sparsification and projection of 3D points into 2D images in conjunction with a generative, context-aware inpainting network to produce high-quality novel views. We show that NPRF leads to overall improvements in F-score on the popular ScanNet dataset. Finally, we show that NPRF improves novel view synthesis by a significant margin, with gains of up to 25% in PSNR, 53% in LPIPS, and 3% in SSIM.
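The abstract does not state the training objective itself, so the following is a hedged reconstruction. In neural radiosity (in the sense of Hadadan et al.), a single network L_θ appears on both sides of the rendering equation and is trained to drive the residual to zero; the symbols E (emission), f_s (BSDF), and x'(x, ω_i) (the surface point hit by a ray cast from x in direction ω_i) are assumptions here, not taken from the paper:

```latex
% Residual of the rendering equation; neural radiosity trains L_theta so
% that this residual is close to zero everywhere (assumed form, not quoted
% from the NPRF paper):
r_\theta(x, \omega_o) = L_\theta(x, \omega_o) - E(x, \omega_o)
    - \int_{\mathcal{H}^2} f_s(x, \omega_i, \omega_o)\,
        L_\theta\!\big(x'(x, \omega_i), -\omega_i\big)\,
        \cos\theta_i \,\mathrm{d}\omega_i

% Training loss: squared residual over sampled surface points and outgoing
% directions, with the integral estimated by Monte Carlo sampling:
\mathcal{L}(\theta) = \mathbb{E}_{x,\,\omega_o}\!\left[ r_\theta(x, \omega_o)^2 \right]
```

For the Neural Painted Scenes stage, the abstract only says that sparsified 3D points are projected into 2D images before inpainting. The projection step itself is the standard pinhole camera model; below is a minimal sketch assuming world-to-camera extrinsics (R, t) and intrinsics K (the function name and conventions are illustrative, not the paper's API):

```python
import numpy as np

def project_points(points_world: np.ndarray, K: np.ndarray,
                   R: np.ndarray, t: np.ndarray):
    """Project N x 3 world-space points into pixel coordinates.

    K: 3x3 intrinsics; (R, t): world-to-camera rotation and translation.
    Returns N x 2 pixel coordinates and a mask of points in front of the camera.
    """
    points_cam = points_world @ R.T + t      # world frame -> camera frame
    in_front = points_cam[:, 2] > 0          # keep points with positive depth
    proj = points_cam @ K.T                  # apply pinhole intrinsics
    pixels = proj[:, :2] / proj[:, 2:3]      # perspective divide
    return pixels, in_front
```

The resulting sparse 2D point map would then be densified by the context-aware inpainting network described in the abstract.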
| Original language | English |
|---|---|
| Pages (from-to) | 2865-2869 |
| Number of pages | 5 |
| Journal | ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings |
| DOIs | |
| State | Published - 2024 |
| Event | 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2024 - Seoul, Korea, Republic of. Duration: 14 Apr 2024 → 19 Apr 2024 |
Keywords
- 3D reconstruction
- deep learning
- neural radiance fields
- novel view synthesis
- volumetric representation