TY - GEN
T1 - CASHformer
T2 - 25th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2022
AU - Sarasua, Ignacio
AU - Pölsterl, Sebastian
AU - Wachinger, Christian
N1 - Publisher Copyright:
© 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2022
Y1 - 2022
N2 - Modeling temporal changes in subcortical structures is crucial for a better understanding of the progression of Alzheimer’s disease (AD). Given their flexibility to adapt to heterogeneous sequence lengths, mesh-based transformer architectures have been proposed in the past for predicting hippocampus deformations across time. However, one of the main limitations of transformers is the large number of trainable parameters, which makes their application to small datasets very challenging. In addition, current methods do not include relevant non-image information that can help to identify AD-related patterns in the progression. To this end, we introduce CASHformer, a transformer-based framework to model longitudinal shape trajectories in AD. CASHformer incorporates the idea of pre-trained transformers as universal compute engines that generalize across a wide range of tasks by freezing most layers during fine-tuning. This reduces the number of parameters by over 90% with respect to the original model and therefore enables the application of large models to small datasets without overfitting. In addition, CASHformer models cognitive decline to reveal AD atrophy patterns in the temporal sequence. Our results show that CASHformer reduces the reconstruction error by 73% compared to previously proposed methods. Moreover, the accuracy of detecting patients progressing to AD increases by 3% when imputing missing longitudinal shape data.
AB - Modeling temporal changes in subcortical structures is crucial for a better understanding of the progression of Alzheimer’s disease (AD). Given their flexibility to adapt to heterogeneous sequence lengths, mesh-based transformer architectures have been proposed in the past for predicting hippocampus deformations across time. However, one of the main limitations of transformers is the large number of trainable parameters, which makes their application to small datasets very challenging. In addition, current methods do not include relevant non-image information that can help to identify AD-related patterns in the progression. To this end, we introduce CASHformer, a transformer-based framework to model longitudinal shape trajectories in AD. CASHformer incorporates the idea of pre-trained transformers as universal compute engines that generalize across a wide range of tasks by freezing most layers during fine-tuning. This reduces the number of parameters by over 90% with respect to the original model and therefore enables the application of large models to small datasets without overfitting. In addition, CASHformer models cognitive decline to reveal AD atrophy patterns in the temporal sequence. Our results show that CASHformer reduces the reconstruction error by 73% compared to previously proposed methods. Moreover, the accuracy of detecting patients progressing to AD increases by 3% when imputing missing longitudinal shape data.
UR - http://www.scopus.com/inward/record.url?scp=85138835795&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-16431-6_5
DO - 10.1007/978-3-031-16431-6_5
M3 - Conference contribution
AN - SCOPUS:85138835795
SN - 9783031164309
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 44
EP - 54
BT - Medical Image Computing and Computer Assisted Intervention – MICCAI 2022 - 25th International Conference, Proceedings
A2 - Wang, Linwei
A2 - Dou, Qi
A2 - Fletcher, P. Thomas
A2 - Speidel, Stefanie
A2 - Li, Shuo
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 18 September 2022 through 22 September 2022
ER -