Abstract
Convolutional Neural Networks (CNNs) have emerged as highly successful tools for image generation, recovery, and restoration. A major contributing factor to this success is that convolutional networks impose strong prior assumptions about natural images. A surprising experiment that highlights this architectural bias towards natural images is that one can remove noise and corruptions from a natural image without using any training data, by simply fitting (via gradient descent) a randomly initialized, over-parameterized convolutional generator to the corrupted image. While the over-parameterized network can fit the corrupted image perfectly, surprisingly, after a few iterations of gradient descent it generates an almost uncorrupted image. This intriguing phenomenon enables state-of-the-art CNN-based denoising and regularization of other inverse problems. In this paper we attribute this effect to a particular architectural choice of convolutional networks, namely convolutions with fixed interpolating filters. We then formally characterize the dynamics of fitting a two-layer convolutional generator to a noisy signal and prove that early-stopped gradient descent denoises/regularizes. Our proof relies on showing that convolutional generators fit the structured part of an image significantly faster than the corrupted portion.
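The following is a minimal sketch (not the authors' code) of the experiment the abstract describes: fitting a randomly initialized, over-parameterized convolutional generator to a single noisy image and stopping gradient descent early. The architecture details here (channel counts, learning rate, synthetic target image) are illustrative assumptions; the fixed interpolating filters analyzed in the paper are approximated by fixed bilinear upsampling layers followed by learned 1x1 convolutions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Noisy target: a simple structured image corrupted by Gaussian noise.
clean = torch.zeros(1, 1, 64, 64)
clean[:, :, 16:48, 16:48] = 1.0                 # structured part
noisy = clean + 0.3 * torch.randn_like(clean)   # corrupted observation

# Two-layer convolutional generator: fixed bilinear interpolation
# (standing in for the fixed interpolating filters) followed by
# learned 1x1 convolutions.
generator = nn.Sequential(
    nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
    nn.Conv2d(64, 64, kernel_size=1),
    nn.ReLU(),
    nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
    nn.Conv2d(64, 1, kernel_size=1),
)

z = torch.randn(1, 64, 16, 16)  # fixed random input code
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-2)

for step in range(2000):
    optimizer.zero_grad()
    out = generator(z)
    loss = ((out - noisy) ** 2).mean()  # fit the *noisy* image
    loss.backward()
    optimizer.step()
    # Early stopping: the structured part is fitted much faster than the
    # noise, so the error against the (here known) clean image dips
    # before the network starts fitting the corruption.
    if step % 200 == 0:
        mse_clean = ((out - clean) ** 2).mean().item()
        print(f"step {step:4d}  fit loss {loss.item():.4f}  "
              f"error vs. clean {mse_clean:.4f}")
```

Tracking the error against the clean image is only possible here because the example is synthetic; in practice one would stop after a fixed, moderate number of iterations, which is the regime the paper's analysis characterizes.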
Original language | English
---|---
Publication status | Published - 2020
Event | 8th International Conference on Learning Representations, ICLR 2020 - Addis Ababa, Ethiopia. Duration: 30 Apr 2020 → …
Conference

Conference | 8th International Conference on Learning Representations, ICLR 2020
---|---
Country/Territory | Ethiopia
City | Addis Ababa
Period | 30/04/20 → …