PAC statistical model checking of mean payoff in discrete- and continuous-time MDP

Chaitanya Agarwal, Shibashis Guha, Jan Křetínský, M. Pazhamalai

Research output: Contribution to journal › Article › peer-review

Abstract

Markov decision processes (MDPs) and continuous-time MDPs (CTMDPs) are the fundamental models for non-deterministic systems with probabilistic uncertainty. Mean payoff (a.k.a. long-run average reward) is one of the most classic objectives considered in their context. We provide the first practical algorithm to compute mean payoff probably approximately correctly in unknown MDPs. Our algorithm is anytime in the sense that if terminated prematurely, it returns an approximate value with the required confidence. Further, we extend it to unknown CTMDPs. We do not require any knowledge of the state space or of the number of successors of a state, but only a lower bound on the minimum transition probability, which has been advocated in the literature. Our algorithm learns the unknown MDP/CTMDP through repeated, directed sampling; it thus spends less time on learning components with a smaller impact on the mean payoff. In addition to providing probably approximately correct (PAC) bounds for our algorithm, we also demonstrate its practical nature by running experiments on standard benchmarks.
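As a toy illustration of the PAC sampling idea mentioned in the abstract (this is not the authors' algorithm, which handles unknown MDPs with directed sampling), the sketch below estimates the mean payoff of a small, fully known two-state Markov chain by simulating finite runs and choosing the number of runs from a Hoeffding-style bound. The chain, its rewards, and all function names are hypothetical.

```python
import math
import random

# Hypothetical 2-state Markov chain: for each state, a list of
# (successor, probability) pairs, plus a per-state reward in [0, 1].
P = {0: [(0, 0.9), (1, 0.1)],
     1: [(0, 0.5), (1, 0.5)]}
reward = {0: 1.0, 1: 0.0}

def simulate_run(start, steps, rng):
    """Average reward collected along one simulated run of fixed length."""
    s, total = start, 0.0
    for _ in range(steps):
        total += reward[s]
        states, probs = zip(*P[s])
        s = rng.choices(states, weights=probs)[0]
    return total / steps

def pac_mean_payoff(epsilon, delta, steps=1000, seed=0):
    """Estimate the expected finite-run average reward to within epsilon,
    with probability at least 1 - delta, via the Hoeffding bound for
    [0, 1]-valued samples: n >= ln(2/delta) / (2 * epsilon^2)."""
    n = math.ceil(math.log(2 / delta) / (2 * epsilon ** 2))
    rng = random.Random(seed)
    return sum(simulate_run(0, steps, rng) for _ in range(n)) / n
```

For this chain the stationary distribution puts mass 5/6 on state 0, so the true mean payoff is about 0.833; with a long enough run length the estimate concentrates around that value. The paper's actual algorithm additionally copes with an unknown model (using only a lower bound on the minimum transition probability) and with non-determinism, which this sketch omits.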

Original language: English
Journal: Formal Methods in System Design
DOIs
State: Accepted/In press - 2024

Keywords

  • Markov decision processes
  • Mean payoff
  • Reinforcement learning
  • Statistical model checking
