TY - JOUR
T1 - A Provably Convergent Scheme for Compressive Sensing Under Random Generative Priors
AU - Huang, Wen
AU - Hand, Paul
AU - Heckel, Reinhard
AU - Voroninski, Vladislav
N1 - Publisher Copyright:
© 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
PY - 2021/4
Y1 - 2021/4
N2 - Deep generative modeling has led to new and state-of-the-art approaches for enforcing structural priors in a variety of inverse problems. In contrast to priors given by sparsity, deep models can provide direct low-dimensional parameterizations of the manifold of images or signals belonging to a particular natural class, allowing recovery algorithms to be posed in a low-dimensional space. This dimensionality may even be lower than the sparsity level of the same signals when viewed in a fixed basis. What has remained unknown is whether these methods admit computationally efficient algorithms whose sample complexity is optimal in the dimensionality of the representation given by the generative model. In this paper, we present such an algorithm and analysis. Under the assumption that the generative model is a neural network that is sufficiently expansive at each layer and has Gaussian weights, we provide a gradient descent scheme and prove that for noisy compressive measurements of a signal in the range of the model, the algorithm converges to that signal, up to the noise level. The scaling of the sample complexity with respect to the input dimensionality of the generative prior is linear, and thus cannot be improved except for constants and factors of other variables. To the best of the authors’ knowledge, this is the first recovery guarantee for compressive sensing under generative priors by a computationally efficient algorithm.
N1 - A minimal illustrative sketch of the gradient descent scheme described in the abstract follows this record.
KW - Compressive sensing
KW - Convergence analysis
KW - Generative models
KW - Gradient descent
UR - http://www.scopus.com/inward/record.url?scp=85102531801&partnerID=8YFLogxK
DO - 10.1007/s00041-021-09830-5
M3 - Article
AN - SCOPUS:85102531801
SN - 1069-5869
VL - 27
JO - Journal of Fourier Analysis and Applications
JF - Journal of Fourier Analysis and Applications
IS - 2
M1 - 19
ER -
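A minimal, illustrative sketch of the kind of scheme the abstract describes: gradient descent on f(x) = ||A G(x) - y||^2 / 2 over the latent code x, where G is a small expansive ReLU network with i.i.d. Gaussian weights and A is a Gaussian measurement matrix. The dimensions, step size, iteration count, and the exact form of the negation step are illustrative assumptions, not the authors' reference implementation; the precise algorithm and constants are in the article (DOI 10.1007/s00041-021-09830-5).

# Illustrative sketch (not the authors' reference implementation): gradient
# descent for compressive sensing under a random generative prior, following
# the setup in the abstract. The generator G is a two-layer expansive ReLU
# network with i.i.d. Gaussian weights; measurements are y = A @ G(x_true) + noise.
import numpy as np

rng = np.random.default_rng(0)

k, n1, n = 10, 200, 500        # latent dim k << layer widths (expansive layers)
m = 60                         # compressive measurements, m << n

# Gaussian weights scaled so each layer roughly preserves norms.
W1 = rng.normal(size=(n1, k)) / np.sqrt(n1)
W2 = rng.normal(size=(n, n1)) / np.sqrt(n)

def G(x):
    # Expansive ReLU generator: G(x) = relu(W2 @ relu(W1 @ x)).
    return np.maximum(W2 @ np.maximum(W1 @ x, 0.0), 0.0)

x_true = rng.normal(size=k)                    # signal in the range of G
A = rng.normal(size=(m, n)) / np.sqrt(m)       # Gaussian measurement matrix
y = A @ G(x_true) + 0.01 * rng.normal(size=m)  # noisy compressive measurements

def f(x):
    # Empirical risk: f(x) = 0.5 * ||A G(x) - y||^2.
    return 0.5 * np.sum((A @ G(x) - y) ** 2)

def grad_f(x):
    # Subgradient of f via the chain rule; relu'(t) taken as 1{t > 0}.
    h1 = W1 @ x
    h2 = W2 @ np.maximum(h1, 0.0)
    g = A.T @ (A @ np.maximum(h2, 0.0) - y)
    g = W2.T @ (g * (h2 > 0))
    return W1.T @ (g * (h1 > 0))

x = rng.normal(size=k)     # random initialization of the latent code
step = 0.05                # illustrative step size
for _ in range(5000):
    # Negation check (paraphrasing the paper's scheme): the landscape has a
    # spurious critical point near a negative multiple of x_true, so move to
    # -x whenever that lowers the objective.
    if f(-x) < f(x):
        x = -x
    x = x - step * grad_f(x)

print("relative error:", np.linalg.norm(G(x) - G(x_true)) / np.linalg.norm(G(x_true)))

With m = 60 measurements of an n = 500 dimensional signal, recovery here rests entirely on the generative prior; the paper's guarantee is that, under sufficient layer expansivity and a number of measurements linear in the latent dimension k, iterates of this kind converge to the true signal up to the noise level.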