TY - GEN
T1 - Data Efficient Unsupervised Domain Adaptation For Cross-modality Image Segmentation
AU - Ouyang, Cheng
AU - Kamnitsas, Konstantinos
AU - Biffi, Carlo
AU - Duan, Jinming
AU - Rueckert, Daniel
N1 - Publisher Copyright:
© 2019, Springer Nature Switzerland AG.
PY - 2019
Y1 - 2019
N2 - Deep learning models trained on medical images from a source domain (e.g. imaging modality) often fail when deployed on images from a different target domain, despite imaging common anatomical structures. Deep unsupervised domain adaptation (UDA) aims to improve the performance of a deep neural network model on a target domain, using solely unlabelled target domain data and labelled source domain data. However, current state-of-the-art methods exhibit reduced performance when target data is scarce. In this work, we introduce a new data-efficient UDA method for multi-domain medical image segmentation. The proposed method combines a novel VAE-based feature prior matching, which is data-efficient, and domain adversarial training to learn a shared domain-invariant latent space which is exploited during segmentation. Our method is evaluated on a public multi-modality cardiac image segmentation dataset by adapting from the labelled source domain (3D MRI) to the unlabelled target domain (3D CT). We show that by using only a single unlabelled 3D CT scan, the proposed architecture outperforms the state-of-the-art in the same setting. Finally, we perform ablation studies on prior matching and domain adversarial training to shed light on the theoretical grounding of the proposed method.
UR - http://www.scopus.com/inward/record.url?scp=85075691914&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-32245-8_74
DO - 10.1007/978-3-030-32245-8_74
M3 - Conference contribution
AN - SCOPUS:85075691914
SN - 9783030322441
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 669
EP - 677
BT - Medical Image Computing and Computer Assisted Intervention – MICCAI 2019 - 22nd International Conference, Proceedings
A2 - Shen, Dinggang
A2 - Yap, Pew-Thian
A2 - Liu, Tianming
A2 - Peters, Terry M.
A2 - Khan, Ali
A2 - Staib, Lawrence H.
A2 - Essert, Caroline
A2 - Zhou, Sean
PB - Springer Science and Business Media Deutschland GmbH
T2 - 22nd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2019
Y2 - 13 October 2019 through 17 October 2019
ER -