TY - GEN
T1 - Complete Fetal Head Compounding from Multi-view 3D Ultrasound
AU - Wright, Robert
AU - Toussaint, Nicolas
AU - Gomez, Alberto
AU - Zimmer, Veronika
AU - Khanal, Bishesh
AU - Matthew, Jacqueline
AU - Skelton, Emily
AU - Kainz, Bernhard
AU - Rueckert, Daniel
AU - Hajnal, Joseph V.
AU - Schnabel, Julia A.
N1 - Publisher Copyright:
© 2019, Springer Nature Switzerland AG.
PY - 2019
Y1 - 2019
N2 - Ultrasound (US) images suffer from artefacts which limit their diagnostic value, notably acoustic shadows. Shadows are dependent on probe orientation, with each view giving a distinct, partial view of the anatomy. In this work, we fuse the partially imaged fetal head anatomy, acquired from numerous views, into a single coherent compounding of the full anatomy. Firstly, a stream of freehand 3D US images is acquired, capturing as many different views as possible. The imaged anatomy at each time-point is then independently aligned to a canonical pose using an iterative spatial transformer network (iSTN), making our approach robust to fast fetal and probe motion. Secondly, images are fused by averaging only the best (most salient) features from all images, producing a more detailed compounding. Finally, the compounding is iteratively refined using a groupwise registration approach. We evaluate our compounding approach quantitatively and qualitatively, comparing it with average compounding and individual US frames. We also evaluate our alignment accuracy using two physically attached probes that capture separate views simultaneously, providing ground truth. Lastly, we demonstrate the potential clinical impact of our method for assessing cranial, facial and external ear abnormalities, with automated atlas-based masking and 3D volume rendering.
AB - Ultrasound (US) images suffer from artefacts which limit their diagnostic value, notably acoustic shadows. Shadows are dependent on probe orientation, with each view giving a distinct, partial view of the anatomy. In this work, we fuse the partially imaged fetal head anatomy, acquired from numerous views, into a single coherent compounding of the full anatomy. Firstly, a stream of freehand 3D US images is acquired, capturing as many different views as possible. The imaged anatomy at each time-point is then independently aligned to a canonical pose using an iterative spatial transformer network (iSTN), making our approach robust to fast fetal and probe motion. Secondly, images are fused by averaging only the best (most salient) features from all images, producing a more detailed compounding. Finally, the compounding is iteratively refined using a groupwise registration approach. We evaluate our compounding approach quantitatively and qualitatively, comparing it with average compounding and individual US frames. We also evaluate our alignment accuracy using two physically attached probes that capture separate views simultaneously, providing ground truth. Lastly, we demonstrate the potential clinical impact of our method for assessing cranial, facial and external ear abnormalities, with automated atlas-based masking and 3D volume rendering.
UR - http://www.scopus.com/inward/record.url?scp=85075657111&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-32248-9_43
DO - 10.1007/978-3-030-32248-9_43
M3 - Conference contribution
AN - SCOPUS:85075657111
SN - 9783030322472
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 384
EP - 392
BT - Medical Image Computing and Computer Assisted Intervention – MICCAI 2019 - 22nd International Conference, Proceedings
A2 - Shen, Dinggang
A2 - Yap, Pew-Thian
A2 - Liu, Tianming
A2 - Peters, Terry M.
A2 - Khan, Ali
A2 - Staib, Lawrence H.
A2 - Essert, Caroline
A2 - Zhou, Sean
PB - Springer Science and Business Media Deutschland GmbH
T2 - 22nd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2019
Y2 - 13 October 2019 through 17 October 2019
ER -