TY - JOUR
T1 - Ultra-NeRF: Neural Radiance Fields for Ultrasound Imaging
T2 - 6th International Conference on Medical Imaging with Deep Learning, MIDL 2023
AU - Wysocki, Magdalena
AU - Azampour, Mohammad Farid
AU - Eilers, Christine
AU - Busam, Benjamin
AU - Salehi, Mehrdad
AU - Navab, Nassir
N1 - Publisher Copyright:
© 2023 CC-BY 4.0, M. Wysocki, M.F. Azampour, C. Eilers, B. Busam, M. Salehi & N. Navab.
PY - 2023
Y1 - 2023
AB - We present a physics-enhanced implicit neural representation (INR) for ultrasound (US) imaging that learns tissue properties from overlapping US sweeps. Our proposed method leverages ray-tracing-based neural rendering for novel view US synthesis. Recent publications demonstrated that INR models could encode a representation of a three-dimensional scene from a set of two-dimensional US frames. However, these models fail to consider the view-dependent changes in appearance and geometry intrinsic to US imaging. In our work, we discuss direction-dependent changes in the scene and show that physics-inspired rendering improves the fidelity of US image synthesis. In particular, we demonstrate experimentally that our proposed method generates geometrically accurate B-mode images for regions with ambiguous representation owing to view-dependent differences in the US images. We conduct our experiments using simulated B-mode US sweeps of the liver and acquired US sweeps of a spine phantom tracked with a robotic arm. The experiments corroborate that our method generates US frames that enable consistent volume compounding from previously unseen views. To the best of our knowledge, the presented work is the first to address view-dependent US image synthesis using INR.
KW - implicit neural representation
KW - neural radiance fields
KW - ultrasound
UR - http://www.scopus.com/inward/record.url?scp=85187024300&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85187024300
SN - 2640-3498
VL - 227
SP - 382
EP - 401
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
Y2 - 10 July 2023 through 12 July 2023
ER -