TY - GEN
T1 - The Effect of the Loss on Generalization
T2 - 4th International Workshop on Interpretability of Machine Intelligence in Medical Image Computing, iMIMIC 2021 and 1st International Workshop on Topological Data Analysis and Its Applications for Medical Data, TDA4MedicalData 2021 held in conjunction with 24th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2021
AU - Baltatzis, Vasileios
AU - Le Folgoc, Loïc
AU - Ellis, Sam
AU - Manzanera, Octavio E. Martinez
AU - Bintsi, Kyriaki-Margarita
AU - Nair, Arjun
AU - Desai, Sujal
AU - Glocker, Ben
AU - Schnabel, Julia A.
N1 - Publisher Copyright:
© 2021, Springer Nature Switzerland AG.
PY - 2021
Y1 - 2021
N2 - Convolutional Neural Networks (CNNs) are widely used for image classification in a variety of fields, including medical imaging. While most studies deploy cross-entropy as the loss function in such tasks, a growing number of approaches have turned to a family of contrastive learning-based losses. Even though performance metrics such as accuracy, sensitivity and specificity are regularly used for the evaluation of CNN classifiers, the features that these classifiers actually learn are rarely identified and their effect on the classification performance on out-of-distribution test samples is insufficiently explored. In this paper, motivated by the real-world task of lung nodule classification, we investigate the features that a CNN learns when trained and tested on different distributions of a synthetic dataset with controlled modes of variation. We show that different loss functions lead to different features being learned and consequently affect the generalization ability of the classifier on unseen data. This study provides some important insights into the design of deep learning solutions for medical imaging tasks.
KW - Contrastive learning
KW - Distribution shift
KW - Interpretability
UR - http://www.scopus.com/inward/record.url?scp=85115859081&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-87444-5_6
DO - 10.1007/978-3-030-87444-5_6
M3 - Conference contribution
AN - SCOPUS:85115859081
SN - 9783030874438
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 56
EP - 64
BT - Interpretability of Machine Intelligence in Medical Image Computing, and Topological Data Analysis and Its Applications for Medical Data - 4th International Workshop, iMIMIC 2021, and 1st International Workshop, TDA4MedicalData 2021, Held in Conjunction with MICCAI 2021, Proceedings
A2 - Reyes, Mauricio
A2 - Henriques Abreu, Pedro
A2 - Cardoso, Jaime
A2 - Hajij, Mustafa
A2 - Zamzmi, Ghada
A2 - Paul, Rahul
A2 - Thakur, Lokendra
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 27 September 2021 through 27 September 2021
ER -