TY - GEN
T1 - Transfer Learning for Brain Segmentation
T2 - 24th Annual Conference on Medical Image Understanding and Analysis, MIUA 2020
AU - Weatheritt, Jack
AU - Rueckert, Daniel
AU - Wolz, Robin
N1 - Publisher Copyright:
© 2020, Springer Nature Switzerland AG.
PY - 2020
Y1 - 2020
N2 - Manual segmentations of anatomical regions in the brain are time-consuming and costly to acquire. In a clinical trial setting, this is prohibitive and automated methods are needed for routine application. We propose a deep-learning architecture that automatically delineates sub-cortical regions in the brain (example biomarkers for monitoring the development of Huntington’s disease). Neural networks, despite typically reaching state-of-the-art performance, are sensitive to differing scanner protocols and pre-processing methods. To address this challenge, one can pre-train a model on an existing data set and then fine-tune it using a small amount of labelled data from the target domain. This work investigates, via a systematic study, the impact of the pre-training task and the amount of data required. We show that using just a few samples from the same task (but a different domain) can achieve state-of-the-art performance. Further, this pre-training task utilises automated labels, meaning the pipeline requires very few manually segmented data points. On the other hand, using a different task for pre-training is shown to be less successful. We conclude by showing that, whilst fine-tuning is very powerful for a specific data distribution, models developed in this fashion are considerably more fragile when used on completely unseen data.
AB - Manual segmentations of anatomical regions in the brain are time-consuming and costly to acquire. In a clinical trial setting, this is prohibitive and automated methods are needed for routine application. We propose a deep-learning architecture that automatically delineates sub-cortical regions in the brain (example biomarkers for monitoring the development of Huntington’s disease). Neural networks, despite typically reaching state-of-the-art performance, are sensitive to differing scanner protocols and pre-processing methods. To address this challenge, one can pre-train a model on an existing data set and then fine-tune it using a small amount of labelled data from the target domain. This work investigates, via a systematic study, the impact of the pre-training task and the amount of data required. We show that using just a few samples from the same task (but a different domain) can achieve state-of-the-art performance. Further, this pre-training task utilises automated labels, meaning the pipeline requires very few manually segmented data points. On the other hand, using a different task for pre-training is shown to be less successful. We conclude by showing that, whilst fine-tuning is very powerful for a specific data distribution, models developed in this fashion are considerably more fragile when used on completely unseen data.
KW - Brain segmentation
KW - Deep learning
KW - Transfer learning
UR - http://www.scopus.com/inward/record.url?scp=85088577871&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-52791-4_10
DO - 10.1007/978-3-030-52791-4_10
M3 - Conference contribution
AN - SCOPUS:85088577871
SN - 9783030527907
T3 - Communications in Computer and Information Science
SP - 118
EP - 130
BT - Medical Image Understanding and Analysis - 24th Annual Conference, MIUA 2020, Proceedings
A2 - Papiez, Bartlomiej W.
A2 - Namburete, Ana I.L.
A2 - Yaqub, Mohammad
A2 - Noble, J. Alison
PB - Springer
Y2 - 15 July 2020 through 17 July 2020
ER -