ResNet with one-neuron hidden layers is a Universal Approximator

Hongzhou Lin, Stefanie Jegelka

Research output: Contribution to journal › Conference article › peer-review

124 Scopus citations

Abstract

We demonstrate that a very deep ResNet with stacked modules that have one neuron per hidden layer and ReLU activation functions can uniformly approximate any Lebesgue integrable function in d dimensions, i.e. ℓ1(ℝd). Due to the identity mapping inherent to ResNets, our network has alternating layers of dimension one and d. This stands in sharp contrast to fully connected networks, which are not universal approximators if their width is the input dimension d [21, 11]. Hence, our result implies an increase in representational power for narrow deep networks by the ResNet architecture.
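The architecture described in the abstract can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the authors' code: each residual block compresses the d-dimensional input to a single hidden neuron (via a weight vector `u` and bias `b`), applies ReLU, expands back to d dimensions (via `v`), and adds the identity mapping. The parameter names `u`, `b`, `v` and the helper `forward` are hypothetical.

```python
import numpy as np

def relu(z):
    # ReLU activation, applied to the scalar hidden unit
    return np.maximum(z, 0.0)

def resnet_block(x, u, b, v):
    """One residual block with a single hidden neuron (illustrative sketch).

    x : (d,) input vector
    u : (d,) weights mapping d dimensions down to the one hidden neuron
    b : scalar bias for the hidden neuron
    v : (d,) weights mapping the hidden neuron back up to d dimensions

    The identity term `x +` is the ResNet skip connection; the layer widths
    thus alternate between d and 1, as the abstract notes.
    """
    return x + v * relu(u @ x + b)

def forward(x, params):
    """Stack residual blocks; `params` is a list of (u, b, v) triples."""
    for u, b, v in params:
        x = resnet_block(x, u, b, v)
    return x
```

For example, stacking four such blocks on a 3-dimensional input keeps the output 3-dimensional, while every hidden layer in between has width one; with no blocks, `forward` reduces to the identity map.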

Original language: English
Pages (from-to): 6169-6178
Number of pages: 10
Journal: Advances in Neural Information Processing Systems
Volume: 2018-December
State: Published - 2018
Externally published: Yes
Event: 32nd Conference on Neural Information Processing Systems, NeurIPS 2018 - Montreal, Canada
Duration: 2 Dec 2018 - 8 Dec 2018

