CONTINUAL LEARNING WITH BAYESIAN NEURAL NETWORKS FOR NON-STATIONARY DATA

Richard Kurle, Botond Cseke, Alexej Klushyn, Patrick van der Smagt, Stephan Günnemann

Publication: Conference contribution › Paper › Peer-reviewed

25 citations (Scopus)

Abstract

This work addresses continual learning for non-stationary data, using Bayesian neural networks and memory-based online variational Bayes. We represent the posterior approximation of the network weights by a diagonal Gaussian distribution and a complementary memory of raw data. This raw data corresponds to likelihood terms that cannot be well approximated by the Gaussian. We introduce a novel method for sequentially updating both components of the posterior approximation. Furthermore, we propose Bayesian forgetting and a Gaussian diffusion process for adapting to non-stationary data. The experimental results show that our update method improves on existing approaches for streaming data. Additionally, the adaptation methods lead to better predictive performance for non-stationary data.
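The two adaptation mechanisms mentioned above have simple closed forms for a diagonal-Gaussian posterior factor. The following is a minimal NumPy sketch of what such per-weight updates could look like; the function names, the exponential-tempering form of Bayesian forgetting, and the random-walk diffusion are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def bayesian_forgetting(mu, var, prior_mu, prior_var, eps):
    """Temper a Gaussian posterior factor toward the prior.

    Assumes forgetting of the form q_new ∝ q^(1-eps) * p^eps,
    which is again Gaussian when q and p are Gaussian.
    eps=0 keeps the posterior; eps=1 resets to the prior.
    """
    new_prec = (1.0 - eps) / var + eps / prior_var
    new_var = 1.0 / new_prec
    new_mu = new_var * ((1.0 - eps) * mu / var + eps * prior_mu / prior_var)
    return new_mu, new_var

def gaussian_diffusion(mu, var, tau2):
    """Propagate the posterior through a random-walk transition
    w_t = w_{t-1} + N(0, tau2): the mean is unchanged and the
    variance is inflated, gradually increasing uncertainty."""
    return mu, var + tau2
```

Both operations keep the approximation in the diagonal-Gaussian family, so they compose directly with an online variational Bayes update at each step.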

Original language: English
Publication status: Published - 2020
Event: 8th International Conference on Learning Representations, ICLR 2020 - Addis Ababa, Ethiopia
Duration: 30 Apr 2020 → …

