Evaluating Robustness of Predictive Uncertainty Estimation: Are Dirichlet-based Models Reliable?

Anna Kathrin Kopetzki, Bertrand Charpentier, Daniel Zügner, Sandhya Giri, Stephan Günnemann

Publication: Contribution to book/report › Conference paper › Peer-reviewed

26 citations (Scopus)

Abstract

Dirichlet-based uncertainty (DBU) models are a recent and promising class of uncertainty-aware models. DBU models predict the parameters of a Dirichlet distribution to provide fast, high-quality uncertainty estimates alongside class predictions. In this work, we present the first large-scale, in-depth study of the robustness of DBU models under adversarial attacks. Our results suggest that uncertainty estimates of DBU models are not robust w.r.t. three important tasks: (1) indicating correctly and wrongly classified samples; (2) detecting adversarial examples; and (3) distinguishing between in-distribution (ID) and out-of-distribution (OOD) data. Additionally, we explore the first approaches to make DBU models more robust. While adversarial training has a minor effect, our median smoothing based approach significantly increases the robustness of DBU models.
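To illustrate the two ideas the abstract names, here is a minimal sketch (not the authors' implementation): a DBU model outputs Dirichlet concentration parameters, from which class probabilities and uncertainty scores can be derived, and a median-smoothing wrapper takes the elementwise median of those parameters under Gaussian input noise. The `model` callable, the `K / alpha0` pseudo-count score, and all parameter names are illustrative assumptions.

```python
import numpy as np

def dirichlet_uncertainty(alpha):
    """Given predicted Dirichlet concentration parameters alpha (length-K array),
    return mean class probabilities and two common uncertainty scores."""
    alpha = np.asarray(alpha, dtype=float)
    alpha0 = alpha.sum()                       # total evidence (precision)
    probs = alpha / alpha0                     # expected class probabilities
    # Predictive (Shannon) entropy of the mean prediction.
    entropy = -np.sum(probs * np.log(probs + 1e-12))
    # Low total evidence => high distributional uncertainty
    # (one common convention; a pseudo-count based score, K / alpha0).
    dist_uncertainty = len(alpha) / alpha0
    return probs, entropy, dist_uncertainty

def median_smoothed_alpha(model, x, sigma=0.1, n=50, seed=None):
    """Median-smoothing sketch: elementwise median of the Dirichlet parameters
    predicted under Gaussian input noise (assumes `model` maps an input array
    to concentration parameters)."""
    rng = np.random.default_rng(seed)
    samples = [model(x + sigma * rng.standard_normal(x.shape)) for _ in range(n)]
    return np.median(np.array(samples), axis=0)

# A confident, ID-like prediction vs. a flat, OOD-like prediction:
_, h_id, u_id = dirichlet_uncertainty([50.0, 1.0, 1.0])
_, h_ood, u_ood = dirichlet_uncertainty([1.0, 1.0, 1.0])
assert h_id < h_ood and u_id < u_ood
```

The smoothing wrapper is applied per input; its robustness guarantee in the paper rests on the median being stable under bounded perturbations, which this sketch does not certify.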

Original language: English
Title: Proceedings of the 38th International Conference on Machine Learning, ICML 2021
Publisher: ML Research Press
Pages: 5707-5718
Number of pages: 12
ISBN (electronic): 9781713845065
Publication status: Published - 2021
Event: 38th International Conference on Machine Learning, ICML 2021 - Virtual, Online
Duration: 18 July 2021 – 24 July 2021

Publication series

Name: Proceedings of Machine Learning Research
Volume: 139
ISSN (electronic): 2640-3498

Conference

Conference: 38th International Conference on Machine Learning, ICML 2021
Location: Virtual, Online
Period: 18/07/21 – 24/07/21
