TY - GEN
T1 - IR-FRestormer
T2 - 2024 IEEE Winter Conference on Applications of Computer Vision, WACV 2024
AU - Darestani, Mohammad Zalbagi
AU - Nath, Vishwesh
AU - Li, Wenqi
AU - He, Yufan
AU - Roth, Holger R.
AU - Xu, Ziyue
AU - Xu, Daguang
AU - Heckel, Reinhard
AU - Zhao, Can
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Accelerated magnetic resonance imaging (MRI) aims to reconstruct high-quality MR images from a set of under-sampled measurements. State-of-the-art methods for this task use deep learning, which offers high reconstruction accuracy and fast runtimes. In this work, we propose a new state-of-the-art reconstruction model for accelerated MRI. Our model is the first to combine the power of deep neural networks with iterative refinement for this task. For the neural network component of our method, we use a transformer-based architecture, as transformers are state-of-the-art in various image reconstruction tasks. However, a major drawback of transformers, which has limited their adoption among state-of-the-art MRI models, is that they are often memory-inefficient for high-resolution inputs. To address this limitation, we propose a transformer-based model that uses parameter-free Fourier-based attention modules, achieving 2× higher memory efficiency. We evaluate our model on the largest publicly available MRI dataset, the fastMRI dataset [46], and achieve on-par performance with other state-of-the-art methods on the dataset's leaderboard https://fastmri.org/leaderboards/.
KW - Applications
KW - Biomedical / healthcare / medicine
UR - http://www.scopus.com/inward/record.url?scp=85192025299&partnerID=8YFLogxK
U2 - 10.1109/WACV57701.2024.00748
DO - 10.1109/WACV57701.2024.00748
M3 - Conference contribution
AN - SCOPUS:85192025299
T3 - Proceedings - 2024 IEEE Winter Conference on Applications of Computer Vision, WACV 2024
SP - 7640
EP - 7649
BT - Proceedings - 2024 IEEE Winter Conference on Applications of Computer Vision, WACV 2024
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 4 January 2024 through 8 January 2024
ER -