TY - GEN
T1 - Self-Supervised Pretext Tasks in Model Robustness & Generalizability
T2 - 44th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2022
AU - Navarro, Fernando
AU - Watanabe, Christopher
AU - Shit, Suprosanna
AU - Sekuboyina, Anjany
AU - Peeken, Jan C.
AU - Combs, Stephanie E.
AU - Menze, Bjoern H.
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Self-supervised pretext tasks have been introduced as an effective strategy for learning target tasks on small annotated data sets. However, while current research focuses on exploring novel pretext tasks for meaningful and reusable representation learning, the robustness and generalizability of these representations have remained relatively under-explored. Specifically, in medical imaging it is crucial to proactively investigate performance under different perturbations for reliable deployment in clinical applications. In this work, we revisit medical imaging networks pre-trained with self-supervised learning and categorically evaluate their robustness and generalizability compared to vanilla supervised learning. Our experiments on pneumonia detection in X-rays and multi-organ segmentation in CT yield conclusive results exposing the hidden benefits of self-supervised pre-training for learning robust feature representations.
AB - Self-supervised pretext tasks have been introduced as an effective strategy for learning target tasks on small annotated data sets. However, while current research focuses on exploring novel pretext tasks for meaningful and reusable representation learning, the robustness and generalizability of these representations have remained relatively under-explored. Specifically, in medical imaging it is crucial to proactively investigate performance under different perturbations for reliable deployment in clinical applications. In this work, we revisit medical imaging networks pre-trained with self-supervised learning and categorically evaluate their robustness and generalizability compared to vanilla supervised learning. Our experiments on pneumonia detection in X-rays and multi-organ segmentation in CT yield conclusive results exposing the hidden benefits of self-supervised pre-training for learning robust feature representations.
KW - generalizability
KW - multi-organ segmentation
KW - pneumonia classification
KW - robustness
KW - self-supervision
UR - http://www.scopus.com/inward/record.url?scp=85138127933&partnerID=8YFLogxK
U2 - 10.1109/EMBC48229.2022.9870911
DO - 10.1109/EMBC48229.2022.9870911
M3 - Conference contribution
C2 - 36086344
AN - SCOPUS:85138127933
T3 - Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS
SP - 5074
EP - 5079
BT - 44th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2022
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 11 July 2022 through 15 July 2022
ER -