Learning Expressive Priors for Generalization and Uncertainty Estimation in Neural Networks

Dominik Schnaus, Jongseok Lee, Daniel Cremers, Rudolph Triebel

Publication: Contribution to journal › Conference article › Peer-reviewed

Abstract

In this work, we propose a novel prior learning method for advancing generalization and uncertainty estimation in deep neural networks. The key idea is to exploit scalable and structured posteriors of neural networks as informative priors with generalization guarantees. Our learned priors provide expressive probabilistic representations at large scale, like Bayesian counterparts of pretrained models on ImageNet, and further produce non-vacuous generalization bounds. We also extend this idea to a continual learning framework, where the favorable properties of our priors are desirable. Major enablers are our technical contributions: (1) the sums-of-Kronecker-product computations, and (2) the derivations and optimizations of tractable objectives that lead to improved generalization bounds. Empirically, we exhaustively show the effectiveness of this method for uncertainty estimation and generalization.
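One of the technical contributions named above is computing with sums of Kronecker products. As an illustrative sketch only (not the authors' implementation), the standard identity (A ⊗ B) vec(X) = vec(B X Aᵀ) lets a sum of Kronecker-factored matrices act on a vector without ever materializing the full matrix, which is what makes such structured representations scalable:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 4, 3, 2  # A_i: n x n, B_i: m x m, k terms in the sum
As = [rng.standard_normal((n, n)) for _ in range(k)]
Bs = [rng.standard_normal((m, m)) for _ in range(k)]
v = rng.standard_normal(n * m)

def sum_kron_matvec(As, Bs, v):
    """Apply (sum_i A_i kron B_i) to v without forming the big matrix.

    Uses the column-major vec identity (A kron B) vec(X) = vec(B X A^T),
    costing O(k (n^2 m + n m^2)) instead of O(k n^2 m^2).
    """
    n, m = As[0].shape[0], Bs[0].shape[0]
    X = v.reshape(n, m).T  # recover X (m x n) with v = vec(X), column-major
    out = sum(B @ X @ A.T for A, B in zip(As, Bs))
    return out.T.reshape(-1)  # vec(out), column-major

# Reference check against the explicitly assembled dense sum.
S = sum(np.kron(A, B) for A, B in zip(As, Bs))
assert np.allclose(S @ v, sum_kron_matvec(As, Bs, v))
```

The dense reference `S` here is only for verification; in practice the point of the factored form is that `S` (size nm × nm) never needs to be built.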

Original language: English
Pages (from - to): 30252-30284
Number of pages: 33
Journal: Proceedings of Machine Learning Research
Volume: 202
Publication status: Published - 2023
Event: 40th International Conference on Machine Learning, ICML 2023 - Honolulu, United States
Duration: 23 July 2023 - 29 July 2023
