Robust and resource efficient identification of shallow neural networks by fewest samples

Massimo Fornasier, Jan Vybíral, Ingrid Daubechies

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

We address the structure identification and the uniform approximation of sums of ridge functions f(x) = ∑_{i=1}^m g_i(⟨a_i, x⟩) on ℝ^d, representing a general form of a shallow feed-forward neural network, from a small number of query samples. Higher-order differentiation, as used in our constructive approximations, of sums of ridge functions or of their compositions, as in deeper neural networks, yields a natural connection between neural network weight identification and tensor product decomposition identification. In the case of the shallowest feed-forward neural network, second-order differentiation and tensors of order two (i.e., matrices) suffice, as we prove in this paper. We use two sampling schemes to perform approximate differentiation: active sampling, where the sampling points are universal, actively, and randomly designed, and passive sampling, where the sampling points are preselected at random from a distribution with known density. Based on multiple gathered approximated first- and second-order differentials, our general approximation strategy is developed as a sequence of algorithms that perform individual sub-tasks. We first perform an active subspace search by approximating the span of the weight vectors a_1, …, a_m. Then we use a straightforward substitution, which reduces the dimensionality of the problem from d to m. The core of the construction is the stable and efficient approximation of the weights, expressed in terms of the rank-1 matrices a_i ⊗ a_i, realized by formulating their individual identification as a suitable nonlinear program. We prove that this program successfully identifies weight vectors that are close to orthonormal, and we also show how to constructively reduce to this case by a whitening procedure, without loss of generality. We finally discuss the implementation and the performance of the proposed algorithmic pipeline with extensive numerical experiments, which illustrate and confirm the theoretical results.
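
For intuition, the connection between second-order differentiation and the rank-1 matrices a_i ⊗ a_i can be made explicit: for f(x) = ∑_{i=1}^m g_i(⟨a_i, x⟩), the Hessian reads ∇²f(x) = ∑_{i=1}^m g_i''(⟨a_i, x⟩) a_i ⊗ a_i, so every Hessian of f lies in the matrix space spanned by a_1 ⊗ a_1, …, a_m ⊗ a_m, and its column space is contained in span{a_1, …, a_m}. The following minimal Python sketch, which is not the authors' code, illustrates only the first sub-task (the active subspace search) under the simplifying assumption that f can be queried anywhere: it approximates Hessians by symmetric finite differences at random points and extracts their dominant joint column space by a singular value decomposition. All function names and parameter choices (eps, n_points) are hypothetical.

```python
import numpy as np

def finite_difference_hessian(f, x, eps=1e-4):
    """Symmetric second-order finite differences; uses O(d^2) queries of f."""
    d = x.size
    H = np.zeros((d, d))
    I = np.eye(d)
    for i in range(d):
        for j in range(i, d):
            H[i, j] = (f(x + eps*I[i] + eps*I[j]) - f(x + eps*I[i] - eps*I[j])
                       - f(x - eps*I[i] + eps*I[j]) + f(x - eps*I[i] - eps*I[j])) / (4*eps**2)
            H[j, i] = H[i, j]
    return H

def active_subspace(f, d, m, n_points=20, rng=np.random.default_rng(0)):
    """Stack approximate Hessians at random points and return an orthonormal
    basis of their dominant m-dimensional joint column space, which should
    approximate span{a_1, ..., a_m}."""
    stacked = np.hstack([finite_difference_hessian(f, rng.standard_normal(d))
                         for _ in range(n_points)])
    U, _, _ = np.linalg.svd(stacked, full_matrices=False)
    return U[:, :m]

# Toy usage (hypothetical example): m = 2 ridge functions in d = 10 dimensions.
d, m = 10, 2
A = np.linalg.qr(np.random.default_rng(1).standard_normal((d, m)))[0]
f = lambda x: np.tanh(A[:, 0] @ x) + np.sin(A[:, 1] @ x)
U = active_subspace(f, d, m)
print(np.linalg.norm(U @ U.T @ A - A))  # near zero when the span is recovered
```

After this step, a substitution along the recovered basis reduces the problem from dimension d to m, and the individual weights are then sought among the rank-1 matrices a_i ⊗ a_i, as described in the abstract.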

Original language: English
Pages (from-to): 625-695
Number of pages: 71
Journal: Information and Inference
Volume: 10
Issue number: 2
DOIs
State: Published - 1 Jun 2021

Keywords

  • breaking the curse of dimensionality
  • nonlinear programming for optimizations in matrix subspaces
  • randomized algorithms
  • training shallow neural networks
  • whitening
