TY - GEN
T1 - Self-Supervised Learning for Cardiac MR Image Segmentation by Anatomical Position Prediction
AU - Bai, Wenjia
AU - Chen, Chen
AU - Tarroni, Giacomo
AU - Duan, Jinming
AU - Guitton, Florian
AU - Petersen, Steffen E.
AU - Guo, Yike
AU - Matthews, Paul M.
AU - Rueckert, Daniel
N1 - Publisher Copyright:
© 2019, Springer Nature Switzerland AG.
PY - 2019
Y1 - 2019
N2 - In recent years, convolutional neural networks have transformed the field of medical image analysis due to their capacity to learn discriminative image features for a variety of classification and regression tasks. However, successfully learning these features requires a large amount of manually annotated data, which is expensive to acquire and limited by the available resources of expert image analysts. Therefore, unsupervised, weakly-supervised and self-supervised feature learning techniques, which aim to utilise the vast amount of available data while avoiding or substantially reducing the effort of manual annotation, have received a lot of attention. In this paper, we propose a novel way of training a cardiac MR image segmentation network, in which features are learnt in a self-supervised manner by predicting anatomical positions. The anatomical positions serve as a supervisory signal and do not require extra manual annotation. We demonstrate that this seemingly simple task provides a strong signal for feature learning, and with self-supervised learning we achieve a high segmentation accuracy that is better than or comparable to a U-net trained from scratch, especially in a small-data setting. When only five annotated subjects are available, the proposed method improves the mean Dice metric from 0.811 to 0.852 for short-axis image segmentation, compared to the baseline U-net.
AB - In recent years, convolutional neural networks have transformed the field of medical image analysis due to their capacity to learn discriminative image features for a variety of classification and regression tasks. However, successfully learning these features requires a large amount of manually annotated data, which is expensive to acquire and limited by the available resources of expert image analysts. Therefore, unsupervised, weakly-supervised and self-supervised feature learning techniques, which aim to utilise the vast amount of available data while avoiding or substantially reducing the effort of manual annotation, have received a lot of attention. In this paper, we propose a novel way of training a cardiac MR image segmentation network, in which features are learnt in a self-supervised manner by predicting anatomical positions. The anatomical positions serve as a supervisory signal and do not require extra manual annotation. We demonstrate that this seemingly simple task provides a strong signal for feature learning, and with self-supervised learning we achieve a high segmentation accuracy that is better than or comparable to a U-net trained from scratch, especially in a small-data setting. When only five annotated subjects are available, the proposed method improves the mean Dice metric from 0.811 to 0.852 for short-axis image segmentation, compared to the baseline U-net.
UR - http://www.scopus.com/inward/record.url?scp=85075687751&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-32245-8_60
DO - 10.1007/978-3-030-32245-8_60
M3 - Conference contribution
AN - SCOPUS:85075687751
SN - 9783030322441
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 541
EP - 549
BT - Medical Image Computing and Computer Assisted Intervention – MICCAI 2019 - 22nd International Conference, Proceedings
A2 - Shen, Dinggang
A2 - Yap, Pew-Thian
A2 - Liu, Tianming
A2 - Peters, Terry M.
A2 - Khan, Ali
A2 - Staib, Lawrence H.
A2 - Essert, Caroline
A2 - Zhou, Sean
PB - Springer Science and Business Media Deutschland GmbH
T2 - 22nd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2019
Y2 - 13 October 2019 through 17 October 2019
ER -