Memory-Efficient Training for Fully Unrolled Deep Learned PET Image Reconstruction With Iteration-Dependent Targets

Guillaume Corda-D'incan, Julia A. Schnabel, Andrew J. Reader

Research output: Contribution to journal › Article › peer-review

8 Scopus citations

Abstract

We propose a new version of the forward-backward splitting expectation-maximization network (FBSEM-Net), along with a new memory-efficient training method that enables the training of fully unrolled implementations of 3-D FBSEM-Net. FBSEM-Net unfolds the maximum a posteriori expectation-maximization algorithm and replaces the regularization step with a residual convolutional neural network. Both the gradient of the prior and the regularization strength are learned from training data. This new implementation includes three modifications of the original framework. First, iteration-dependent networks are used to provide a customized regularization at each iteration. Second, iteration-dependent targets and losses are introduced so that the regularized reconstruction matches the reconstruction of noise-free data at every iteration. Third, sequential training is performed, making the training of large unrolled networks far more memory efficient and feasible. Since sequential training permits unrolling a high number of iterations, there is no need to use the regularization step artificially as a leapfrogging acceleration. The results obtained on 2-D and 3-D simulated data show that FBSEM-Net using iteration-dependent targets and losses improves the consistency of the optimized network parameters across different training runs. We also found that using iteration-dependent targets increases the generalization capabilities of the network. Furthermore, unrolled networks using iteration-dependent regularization allowed a slight reduction in reconstruction error compared to using a fixed regularization network at each iteration. Finally, we demonstrate that sequential training successfully addresses potentially serious memory issues during the training of deep unrolled networks. In particular, it enables the training of 3-D fully unrolled FBSEM-Net, not previously feasible, by reducing memory usage by up to 98% compared to conventional end-to-end training. We also note that the truncation of the backpropagation (due to sequential training) does not notably impact the network's performance compared to conventional training with full backpropagation through the entire network.
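The key ideas of the abstract — unrolled EM updates, iteration-dependent targets derived from noise-free data, and sequential (iteration-by-iteration) training with truncated backpropagation — can be illustrated with a minimal numerical sketch. This is not the paper's implementation: the sizes, the toy smoothness prior, and the scalar per-iteration regularization strength `beta_k` are all illustrative assumptions standing in for the learned residual CNN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D problem (hypothetical sizes): system matrix A, true image, data.
n_pix, n_bins = 8, 24
A = rng.uniform(0.1, 1.0, size=(n_bins, n_pix))
x_true = rng.uniform(0.5, 2.0, size=n_pix)
y_bar = A @ x_true                        # noise-free data
y = rng.poisson(50 * y_bar) / 50.0        # noisy data (scaled Poisson)

def em_update(x, data):
    """One MLEM update step."""
    return x * (A.T @ (data / (A @ x))) / A.sum(axis=0)

def reg_grad(x):
    """Toy prior gradient (stand-in for the learned CNN):
    deviation from the circular neighbourhood mean."""
    return x - (np.roll(x, 1) + np.roll(x, -1)) / 2.0

n_iters = 6

# Iteration-dependent targets: MLEM reconstructions of the noise-free data,
# one target per unrolled iteration.
targets, x = [], np.ones(n_pix)
for _ in range(n_iters):
    x = em_update(x, y_bar)
    targets.append(x.copy())

# Sequential training: fit the regularization strength beta_k of iteration k
# alone, with the input image "detached" from earlier iterations. Only one
# iteration's computation graph is ever alive at a time, which is the source
# of the memory saving that makes fully unrolled 3-D training feasible.
betas, x = [], np.ones(n_pix)
for k in range(n_iters):
    x_em = em_update(x, y)                # EM step on the noisy data
    g = reg_grad(x_em)
    # Closed-form least-squares fit of beta_k to the iteration-k target
    # (the paper instead trains CNN weights with an iteration-k loss).
    beta_k = g @ (x_em - targets[k]) / (g @ g)
    betas.append(beta_k)
    # Regularized image, clipped nonnegative and passed on detached.
    x = np.clip(x_em - beta_k * g, 1e-6, None)
```

Because each `beta_k` is fitted against its own iteration's target with the input held fixed, the backpropagation is truncated at every iteration boundary — the sketch's analogue of the paper's sequential training.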

Original language: English
Pages (from-to): 552-563
Number of pages: 12
Journal: IEEE Transactions on Radiation and Plasma Medical Sciences
Volume: 6
Issue number: 5
DOIs
State: Published - 1 May 2022
Externally published: Yes

Keywords

  • Deep learning
  • model-based image reconstruction (MBIR)
  • positron emission tomography (PET) reconstruction
