URL: A Representation Learning Benchmark for Transferable Uncertainty Estimates

Michael Kirchhof, Seong Joon Oh, Bálint Mucsányi, Enkelejda Kasneci

Research output: Contribution to journal › Conference article › peer-review

1 Scopus citation

Abstract

Representation learning has significantly driven the field to develop pretrained models that can act as a valuable starting point when transferring to new datasets. With the rising demand for reliable machine learning and uncertainty quantification, there is a need for pretrained models that not only provide embeddings but also transferable uncertainty estimates. To guide the development of such models, we propose the Uncertainty-aware Representation Learning (URL) benchmark. Besides the transferability of the representations, it also measures the zero-shot transferability of the uncertainty estimate using a novel metric. We apply URL to evaluate eleven uncertainty quantifiers that are pretrained on ImageNet and transferred to eight downstream datasets. We find that approaches that focus on the uncertainty of the representation itself or estimate the prediction loss directly outperform those that are based on the probabilities of upstream classes. Yet, achieving transferable uncertainty quantification remains an open challenge. Our findings indicate that it is not necessarily in conflict with traditional representation learning goals. Code is available at https://github.com/mkirchhof/url.
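The abstract mentions a novel metric for the zero-shot transferability of uncertainty estimates but does not define it here. The Python sketch below is a hypothetical illustration of the general recipe rather than the paper's exact metric: it scores an uncertainty estimate by how well it predicts zero-shot nearest-neighbor retrieval errors (recall@1) on a downstream dataset, measured as an AUROC. The function name and all implementation details are assumptions; the authors' actual benchmark code is in the linked repository.

import numpy as np
from sklearn.metrics import roc_auc_score

def zero_shot_uncertainty_auroc(embeddings, uncertainties, labels):
    """Hypothetical sketch: AUROC of uncertainties predicting retrieval errors.

    Each sample retrieves its nearest neighbor (excluding itself) in
    embedding space; retrieval is 'correct' if the neighbor shares the
    sample's class label (recall@1). A transferable uncertainty estimate
    should be high exactly where retrieval fails.
    """
    # Cosine similarity between all pairs of L2-normalized embeddings.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T
    np.fill_diagonal(sim, -np.inf)               # exclude self-matches
    nn = sim.argmax(axis=1)                      # index of the 1-nearest neighbor
    wrong = (labels[nn] != labels).astype(int)   # 1 = retrieval error
    return roc_auc_score(wrong, uncertainties)

# Usage on random data: uninformative uncertainties should score around 0.5.
rng = np.random.default_rng(0)
z = rng.normal(size=(1000, 64))
u = rng.random(1000)
y = rng.integers(0, 10, size=1000)
print(zero_shot_uncertainty_auroc(z, u, y))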

Original language: English
Journal: Advances in Neural Information Processing Systems
Volume: 36
State: Published - 2023
Externally published: Yes
Event: 37th Conference on Neural Information Processing Systems, NeurIPS 2023 - New Orleans, United States
Duration: 10 Dec 2023 - 16 Dec 2023
