Estimating model uncertainty of neural networks in sparse information form

Jongseok Lee, Matthias Humt, Jianxiang Feng, Rudolph Triebel

Publication: Contribution to book/report › Conference paper › Peer-reviewed

19 citations (Scopus)

Abstract

We present a sparse representation of model uncertainty for Deep Neural Networks (DNNs) where the parameter posterior is approximated with an inverse formulation of the Multivariate Normal Distribution (MND), also known as the information form. The key insight of our work is that the information matrix, i.e., the inverse of the covariance matrix, tends to be sparse in its spectrum. Therefore, dimensionality reduction techniques such as low rank approximations (LRA) can be effectively exploited. To achieve this, we develop a novel sparsification algorithm and derive a cost-effective analytical sampler. As a result, we show that the information form can be scalably applied to represent model uncertainty in DNNs. Our exhaustive theoretical analysis and empirical evaluations on various benchmarks show the competitiveness of our approach over the current methods.
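The abstract's key observation is that a spectrum-sparse information matrix is well captured by its leading eigenpairs. The following minimal NumPy sketch (an illustration of the general LRA idea, not the paper's specific sparsification algorithm) builds a symmetric matrix with a rapidly decaying spectrum and shows that a rank-k truncation retains almost all of it:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 10

# A symmetric PSD "information matrix" with a fast-decaying spectrum,
# mimicking the sparsity in the spectrum referred to in the abstract.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigvals = np.exp(-np.arange(n))          # rapidly decaying eigenvalues
Lam = (Q * eigvals) @ Q.T

# Rank-k approximation from the top-k eigenpairs.
w, V = np.linalg.eigh(Lam)               # eigenvalues in ascending order
w_k, V_k = w[-k:], V[:, -k:]
Lam_k = (V_k * w_k) @ V_k.T

rel_err = np.linalg.norm(Lam - Lam_k) / np.linalg.norm(Lam)
print(rel_err)  # tiny: the top-10 eigenpairs capture nearly all the spectrum
```

Because the relative Frobenius error of the truncation equals the norm of the discarded eigenvalues, storing only k eigenpairs (O(nk) memory instead of O(n^2)) is nearly lossless when the spectrum decays quickly.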

Original language: English
Title: 37th International Conference on Machine Learning, ICML 2020
Editors: Hal Daume, Aarti Singh
Publisher: International Machine Learning Society (IMLS)
Pages: 5658-5669
Number of pages: 12
ISBN (electronic): 9781713821120
Publication status: Published - 2020
Event: 37th International Conference on Machine Learning, ICML 2020 - Virtual, Online
Duration: 13 July 2020 – 18 July 2020

Publication series

Name: 37th International Conference on Machine Learning, ICML 2020
Volume: PartF168147-8

Conference

Conference: 37th International Conference on Machine Learning, ICML 2020
Location: Virtual, Online
Period: 13/07/20 – 18/07/20
