Beyond In-Domain Scenarios: Robust Density-Aware Calibration

Christian Tomani, Futa Waseda, Yuesong Shen, Daniel Cremers

Publication: Contribution to journal › Conference article › Peer-reviewed


Abstract

Calibrating deep learning models to yield uncertainty-aware predictions is crucial as deep neural networks get increasingly deployed in safety-critical applications. While existing post-hoc calibration methods achieve impressive results on in-domain test datasets, they are limited by their inability to yield reliable uncertainty estimates in domain-shift and out-of-domain (OOD) scenarios. We aim to bridge this gap by proposing DAC, an accuracy-preserving as well as Density-Aware Calibration method based on k-nearest-neighbors (KNN). In contrast to existing post-hoc methods, we utilize hidden layers of classifiers as a source for uncertainty-related information and study their importance. We show that DAC is a generic method that can readily be combined with state-of-the-art post-hoc methods. DAC boosts the robustness of calibration performance in domain-shift and OOD, while maintaining excellent in-domain predictive uncertainty estimates. We demonstrate that DAC leads to consistently better calibration across a large number of model architectures, datasets, and metrics. Additionally, we show that DAC improves calibration substantially on recent large-scale neural networks pre-trained on vast amounts of data.
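To make the core idea more concrete, the sketch below illustrates one way a KNN-based, density-aware post-hoc calibrator of this kind could look. It is not the authors' reference implementation: the choice of hidden-layer features, the softplus temperature parameterization, and the function names (`knn_density_scores`, `fit_density_aware_temperature`) are illustrative assumptions. The general pattern, however, follows the abstract: estimate sample density from distances to the k nearest training samples in a hidden-feature space, then let that density modulate a post-hoc calibration map (here, a per-sample temperature) fitted on a held-out calibration set.

```python
# Minimal, illustrative sketch of KNN-based density-aware calibration.
# NOT the authors' DAC implementation; feature choice, parameterization,
# and optimizer are assumptions made purely for illustration.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from scipy.optimize import minimize
from scipy.special import log_softmax


def knn_density_scores(train_feats, query_feats, k=10):
    """Mean distance to the k nearest training samples in a hidden-layer
    feature space; larger values indicate lower density (more OOD-like)."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_feats)
    dists, _ = nn.kneighbors(query_feats)
    return dists.mean(axis=1)


def fit_density_aware_temperature(logits, labels, density, init=(1.0, 0.0)):
    """Fit a per-sample temperature T(x) = softplus(a + b * density(x)) by
    minimizing the negative log-likelihood on a held-out calibration set.
    Dividing logits by T(x) leaves the argmax, and hence accuracy, unchanged."""
    def nll(params):
        a, b = params
        temps = np.logaddexp(0.0, a + b * density)  # numerically stable softplus
        logp = log_softmax(logits / temps[:, None], axis=1)
        return -logp[np.arange(len(labels)), labels].mean()
    return minimize(nll, init, method="Nelder-Mead").x
```

At test time, the same density score would be computed for each new input and plugged into the fitted temperature before the softmax, so that low-density (domain-shifted or OOD) inputs receive softer, less overconfident probabilities.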

Original language: English
Pages (from - to): 34344-34368
Number of pages: 25
Journal: Proceedings of Machine Learning Research
Volume: 202
Publication status: Published - 2023
Event: 40th International Conference on Machine Learning, ICML 2023 - Honolulu, United States
Duration: 23 July 2023 – 29 July 2023
