TY - GEN
T1 - Automatic Re-orientation of 3D Echocardiographic Images in Virtual Reality Using Deep Learning
AU - Munroe, Lindsay
AU - Sajith, Gina
AU - Lin, Ei
AU - Bhattacharya, Surjava
AU - Pushparajah, Kuberan
AU - Simpson, John
AU - Schnabel, Julia A.
AU - Wheeler, Gavin
AU - Gomez, Alberto
AU - Deng, Shujie
N1 - Publisher Copyright:
© 2021, Springer Nature Switzerland AG.
PY - 2021
Y1 - 2021
N2 - In 3D echocardiography (3D echo), the image orientation varies depending on the position and direction of the transducer during examination. As a result, when reviewing images the user must initially identify anatomical landmarks to understand image orientation – a potentially challenging and time-consuming task. We automated this initial step by training a deep residual neural network (ResNet) to predict the rotation required to re-orient an image to the standard apical four-chamber view. Three data pre-processing strategies were explored: 2D, 2.5D and 3D. Three different loss function strategies were investigated: classification of discrete integer angles, regression with mean absolute angle error loss, and regression with geodesic loss. We then integrated the model into a virtual reality application and aligned the re-oriented 3D echo images with a standard anatomical heart model. The deep learning strategy with the highest accuracy – 2.5D classification of discrete integer angles – achieved a mean absolute angle error on the test set of 9.0°. This work demonstrates the potential of artificial intelligence to support visualisation and interaction in virtual reality.
AB - In 3D echocardiography (3D echo), the image orientation varies depending on the position and direction of the transducer during examination. As a result, when reviewing images the user must initially identify anatomical landmarks to understand image orientation – a potentially challenging and time-consuming task. We automated this initial step by training a deep residual neural network (ResNet) to predict the rotation required to re-orient an image to the standard apical four-chamber view. Three data pre-processing strategies were explored: 2D, 2.5D and 3D. Three different loss function strategies were investigated: classification of discrete integer angles, regression with mean absolute angle error loss, and regression with geodesic loss. We then integrated the model into a virtual reality application and aligned the re-oriented 3D echo images with a standard anatomical heart model. The deep learning strategy with the highest accuracy – 2.5D classification of discrete integer angles – achieved a mean absolute angle error on the test set of 9.0°. This work demonstrates the potential of artificial intelligence to support visualisation and interaction in virtual reality.
KW - 3D echocardiography
KW - Deep learning
KW - Virtual reality
UR - http://www.scopus.com/inward/record.url?scp=85112241013&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-80432-9_14
DO - 10.1007/978-3-030-80432-9_14
M3 - Conference contribution
AN - SCOPUS:85112241013
SN - 9783030804312
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 177
EP - 188
BT - Medical Image Understanding and Analysis - 25th Annual Conference, MIUA 2021, Proceedings
A2 - Papież, Bartłomiej W.
A2 - Yaqub, Mohammad
A2 - Jiao, Jianbo
A2 - Namburete, Ana I.
A2 - Noble, J. Alison
PB - Springer Science and Business Media Deutschland GmbH
T2 - 25th Annual Conference on Medical Image Understanding and Analysis, MIUA 2021
Y2 - 12 July 2021 through 14 July 2021
ER -