TY - JOUR
T1 - 3DTINC: Time-Equivariant Non-Contrastive Learning for Predicting Disease Progression From Longitudinal OCTs
T2 - IEEE Transactions on Medical Imaging
AU - Emre, Taha
AU - Chakravarty, Arunava
AU - Rivail, Antoine
AU - Lachinov, Dmitrii
AU - Leingang, Oliver
AU - Riedl, Sophie
AU - Mai, Julia
AU - Scholl, Hendrik P.N.
AU - Sivaprasad, Sobha
AU - Rueckert, Daniel
AU - Lotery, Andrew
AU - Schmidt-Erfurth, Ursula
AU - Bogunovic, Hrvoje
N1 - Publisher Copyright:
© 1982-2012 IEEE.
PY - 2024
Y1 - 2024
N2 - Self-supervised learning (SSL) has emerged as a powerful technique for improving the efficiency and effectiveness of deep learning models. Contrastive methods are a prominent family of SSL that extract similar representations of two augmented views of an image while pushing away others in the representation space as negatives. However, the state-of-the-art contrastive methods require large batch sizes and augmentations designed for natural images that are impractical for 3D medical images. To address these limitations, we propose a new longitudinal SSL method, 3DTINC, based on non-contrastive learning. It is designed to learn perturbation-invariant features for 3D optical coherence tomography (OCT) volumes, using augmentations specifically designed for OCT. We introduce a new non-contrastive similarity loss term that learns temporal information implicitly from intra-patient scans acquired at different times. Our experiments show that this temporal information is crucial for predicting progression of retinal diseases, such as age-related macular degeneration (AMD). After pretraining with 3DTINC, we evaluated the learned representations and the prognostic models on two large-scale longitudinal datasets of retinal OCTs where we predict the conversion to wet-AMD within a six-month interval. Our results demonstrate that each component of our contributions is crucial for learning meaningful representations useful in predicting disease progression from longitudinal volumetric scans.
KW - self-supervised learning
KW - contrastive learning
KW - disease progression
KW - longitudinal imaging
KW - optical coherence tomography
KW - retina
UR - http://www.scopus.com/inward/record.url?scp=85191289356&partnerID=8YFLogxK
U2 - 10.1109/TMI.2024.3391215
DO - 10.1109/TMI.2024.3391215
M3 - Article
AN - SCOPUS:85191289356
SN - 0278-0062
VL - 43
SP - 3200
EP - 3210
JO - IEEE Transactions on Medical Imaging
JF - IEEE Transactions on Medical Imaging
IS - 9
ER -