Finite-memory near-optimal learning for Markov decision processes with long-run average reward

Jan Kretínský, Fabian Michel, Lukas Michel, Guillermo A. Pérez

Publication: Conference contribution › Paper › Peer-reviewed

2 citations (Scopus)

Abstract

We consider learning policies online in Markov decision processes with the long-run average reward (a.k.a. mean payoff). To ensure implementability of the policies, we focus on policies with finite memory. Firstly, we show that near optimality can be achieved almost surely, using an unintuitive gadget we call forgetfulness. Secondly, we extend the approach to a setting with partial knowledge of the system topology, introducing two optimality measures and providing near-optimal algorithms also for these cases.
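For reference, the long-run average reward (mean payoff) named in the abstract is commonly defined as below; this is a standard formulation and not necessarily the exact variant (lim inf, lim sup, or expectation) used in the paper. For an infinite run producing rewards r_0, r_1, r_2, ...:

\[
  \mathrm{MP}(r_0 r_1 r_2 \cdots) \;=\; \liminf_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} r_i
\]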

Original language: English
Pages: 1149-1158
Number of pages: 10
Publication status: Published - 2020
Event: 36th Conference on Uncertainty in Artificial Intelligence, UAI 2020 - Virtual, Online
Duration: 3 Aug 2020 - 6 Aug 2020

Conference

Conference: 36th Conference on Uncertainty in Artificial Intelligence, UAI 2020
Location: Virtual, Online
Period: 3/08/20 - 6/08/20
