TY - JOUR
T1 - E-NeRF: Neural Radiance Fields From a Moving Event Camera
AU - Klenk, Simon
AU - Koestler, Lukas
AU - Scaramuzza, Davide
AU - Cremers, Daniel
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023/3/1
AB - Estimating neural radiance fields (NeRFs) from 'ideal' images has been extensively studied in the computer vision community. Most approaches assume optimal illumination and slow camera motion. These assumptions are often violated in robotic applications, where images may contain motion blur and scenes may lack suitable illumination. This can cause significant problems for downstream tasks such as navigation, inspection, or visualization of the scene. To alleviate these problems, we present E-NeRF, the first method that estimates a volumetric scene representation in the form of a NeRF from a fast-moving event camera. Our method can recover NeRFs during very fast motion and in high-dynamic-range conditions where frame-based approaches fail. We show that rendering high-quality frames is possible with only an event stream as input. Furthermore, by combining events and frames, we can estimate NeRFs of higher quality than state-of-the-art approaches under severe motion blur. We also show that combining events and frames can overcome failure cases of NeRF estimation in scenarios where only a few input views are available, without requiring additional regularization.
KW - Mapping
KW - deep learning methods
KW - event cameras
UR - http://www.scopus.com/inward/record.url?scp=85148432951&partnerID=8YFLogxK
DO - 10.1109/LRA.2023.3240646
M3 - Article
AN - SCOPUS:85148432951
SN - 2377-3766
VL - 8
SP - 1587
EP - 1594
JO - IEEE Robot. Autom. Lett.
JF - IEEE Robotics and Automation Letters
IS - 3
ER -