Koopman Kernel Regression

Petar Bevanda, Max Beier, Armin Lederer, Stefan Sosnowski, Eyke Hüllermeier, Sandra Hirche

Publication: Contribution to journal › Conference article › Peer-reviewed

4 citations (Scopus)

Abstract

Many machine learning approaches for decision making, such as reinforcement learning, rely on simulators or predictive models to forecast the time-evolution of quantities of interest, e.g., the state of an agent or the reward of a policy. Forecasts of such complex phenomena are commonly described by highly nonlinear dynamical systems, making their use in optimization-based decision-making challenging. Koopman operator theory offers a beneficial paradigm for addressing this problem by characterizing forecasts via linear time-invariant (LTI) ODEs, turning multistep forecasts into sparse matrix multiplication. Although a variety of learning approaches exist, they usually lack crucial learning-theoretic guarantees, so the behavior of the obtained models with increasing data and dimensionality remains unclear. We address this gap by deriving a universal Koopman-invariant reproducing kernel Hilbert space (RKHS) that solely spans transformations into LTI dynamical systems. The resulting Koopman Kernel Regression (KKR) framework enables the use of statistical learning tools from function approximation for novel convergence results and generalization error bounds under weaker assumptions than existing work. Our experiments demonstrate superior forecasting performance compared to Koopman operator and sequential data predictors in RKHS.
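The idea that a Koopman lifting turns multistep forecasts of a nonlinear system into repeated matrix multiplication can be illustrated with a minimal EDMD-style sketch. This is not the paper's KKR method: the toy system, the polynomial dictionary standing in for a Koopman-invariant RKHS, and the least-squares fit are all illustrative assumptions.

```python
import numpy as np

# Toy discrete-time nonlinear system with an exact 3D Koopman-invariant subspace:
#   x1[k+1] = mu * x1[k]
#   x2[k+1] = lam * x2[k] + (mu**2 - lam) * x1[k]**2
# The lifting psi(x) = (x1, x2, x1**2) evolves exactly linearly under these dynamics.
mu, lam = 0.9, 0.5

def step(x):
    x1, x2 = x
    return np.array([mu * x1, lam * x2 + (mu**2 - lam) * x1**2])

def lift(x):
    x1, x2 = x
    return np.array([x1, x2, x1**2])

# Generate state/successor pairs and fit the linear operator K by least squares.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
Psi_x = np.array([lift(x) for x in X])          # lifted states
Psi_y = np.array([lift(step(x)) for x in X])    # lifted successors
K, *_ = np.linalg.lstsq(Psi_x, Psi_y, rcond=None)  # Psi_y ~= Psi_x @ K

# Multistep forecast: repeated matrix multiplication in the lifted space.
x0 = np.array([0.8, -0.3])
psi = lift(x0)
n_steps = 10
for _ in range(n_steps):
    psi = psi @ K
forecast = psi[:2]  # the first two lifted coordinates recover the state

# Ground truth by simulating the nonlinear system directly.
x_true = x0.copy()
for _ in range(n_steps):
    x_true = step(x_true)

print(np.max(np.abs(forecast - x_true)))  # small: the lift is exactly invariant here
```

Because this toy system admits an exact finite-dimensional invariant subspace, the linear rollout matches the nonlinear simulation up to numerical error; the paper's contribution is a kernel space guaranteeing such invariance in general, with learning-theoretic error bounds.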

Original language: English
Journal: Advances in Neural Information Processing Systems
Volume: 36
Publication status: Published - 2023
Event: 37th Conference on Neural Information Processing Systems, NeurIPS 2023 - New Orleans, United States
Duration: 10 Dec 2023 – 16 Dec 2023
