TY - JOUR
T1 - Efficient identification of wide shallow neural networks with biases
AU - Fornasier, Massimo
AU - Klock, Timo
AU - Mondelli, Marco
AU - Rauchensteiner, Michael
N1 - Publisher Copyright:
© 2025 The Author(s)
PY - 2025/6
Y1 - 2025/6
N2 - The identification of the parameters of a neural network from finite samples of input-output pairs is often referred to as the teacher-student model, which has become a popular framework for understanding training and generalization. Even though the problem is NP-complete in the worst case, a rapidly growing literature has established, under suitable distributional assumptions, finite sample identification of two-layer networks with a number of neurons m=O(D), D being the input dimension. For the range D < m < D² the problem becomes harder, and very little is known for networks that are also parametrized by biases. This paper fills the gap by providing efficient algorithms and rigorous theoretical guarantees of finite sample identification for such wider shallow networks with biases. Our approach is based on a two-step pipeline: first, we recover the directions of the weights by exploiting second-order information; next, we identify the signs by suitable algebraic evaluations and recover the biases by empirical risk minimization via gradient descent. Numerical results demonstrate the effectiveness of our approach.
AB - The identification of the parameters of a neural network from finite samples of input-output pairs is often referred to as the teacher-student model, which has become a popular framework for understanding training and generalization. Even though the problem is NP-complete in the worst case, a rapidly growing literature has established, under suitable distributional assumptions, finite sample identification of two-layer networks with a number of neurons m=O(D), D being the input dimension. For the range D < m < D² the problem becomes harder, and very little is known for networks that are also parametrized by biases. This paper fills the gap by providing efficient algorithms and rigorous theoretical guarantees of finite sample identification for such wider shallow networks with biases. Our approach is based on a two-step pipeline: first, we recover the directions of the weights by exploiting second-order information; next, we identify the signs by suitable algebraic evaluations and recover the biases by empirical risk minimization via gradient descent. Numerical results demonstrate the effectiveness of our approach.
UR - http://www.scopus.com/inward/record.url?scp=85217922576&partnerID=8YFLogxK
U2 - 10.1016/j.acha.2025.101749
DO - 10.1016/j.acha.2025.101749
M3 - Article
AN - SCOPUS:85217922576
SN - 1063-5203
VL - 77
JO - Applied and Computational Harmonic Analysis
JF - Applied and Computational Harmonic Analysis
M1 - 101749
ER -