TY - GEN
T1 - Multi-Modal Unsupervised Brain Image Registration Using Edge Maps
AU - Sideri-Lampretsa, Vasiliki
AU - Kaissis, Georgios
AU - Rueckert, Daniel
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Diffeomorphic deformable multi-modal image registration is a challenging task that aims to bring images acquired by different modalities into the same coordinate space while preserving the topology and invertibility of the transformation. Recent research has focused on leveraging deep learning approaches for this task, as these have been shown to achieve competitive registration accuracy while being computationally more efficient than traditional iterative registration methods. In this work, we propose a simple yet effective unsupervised deep learning-based multi-modal image registration approach that benefits from auxiliary information coming from the gradient magnitude of the image, i.e., the image edges, during training. The intuition behind this is that image locations with a strong gradient are assumed to denote tissue transitions, which are locations of high information value that can act as a geometric constraint. The task is similar to using segmentation maps to drive the training, but edge maps are easier and faster to acquire and do not require annotations. We evaluate our approach in the context of registering multi-modal (T1w to T2w) magnetic resonance (MR) brain images of different subjects using three different loss functions that have been proposed to assist multi-modal registration, showing that in all cases the auxiliary information leads to better results without compromising the runtime.
AB - Diffeomorphic deformable multi-modal image registration is a challenging task that aims to bring images acquired by different modalities into the same coordinate space while preserving the topology and invertibility of the transformation. Recent research has focused on leveraging deep learning approaches for this task, as these have been shown to achieve competitive registration accuracy while being computationally more efficient than traditional iterative registration methods. In this work, we propose a simple yet effective unsupervised deep learning-based multi-modal image registration approach that benefits from auxiliary information coming from the gradient magnitude of the image, i.e., the image edges, during training. The intuition behind this is that image locations with a strong gradient are assumed to denote tissue transitions, which are locations of high information value that can act as a geometric constraint. The task is similar to using segmentation maps to drive the training, but edge maps are easier and faster to acquire and do not require annotations. We evaluate our approach in the context of registering multi-modal (T1w to T2w) magnetic resonance (MR) brain images of different subjects using three different loss functions that have been proposed to assist multi-modal registration, showing that in all cases the auxiliary information leads to better results without compromising the runtime.
KW - deep-learning registration
KW - gradient magnitude
KW - inter-subject
KW - multi-modal registration
KW - unsupervised learning
UR - http://www.scopus.com/inward/record.url?scp=85129617355&partnerID=8YFLogxK
U2 - 10.1109/ISBI52829.2022.9761637
DO - 10.1109/ISBI52829.2022.9761637
M3 - Conference contribution
AN - SCOPUS:85129617355
T3 - Proceedings - International Symposium on Biomedical Imaging
BT - ISBI 2022 - Proceedings
PB - IEEE Computer Society
T2 - 19th IEEE International Symposium on Biomedical Imaging, ISBI 2022
Y2 - 28 March 2022 through 31 March 2022
ER -