TY - JOUR
T1 - PAC statistical model checking of mean payoff in discrete- and continuous-time MDP
AU - Agarwal, Chaitanya
AU - Guha, Shibashis
AU - Křetínský, Jan
AU - Pazhamalai, M.
N1 - Publisher Copyright:
© The Author(s) 2024.
PY - 2024
Y1 - 2024
N2 - Markov decision processes (MDPs) and continuous-time MDPs (CTMDPs) are the fundamental models for non-deterministic systems with probabilistic uncertainty. Mean payoff (a.k.a. long-run average reward) is one of the most classic objectives considered in their context. We provide the first practical algorithm to compute mean payoff probably approximately correctly in unknown MDPs. Our algorithm is anytime in the sense that if terminated prematurely, it returns an approximate value with the required confidence. Further, we extend it to unknown CTMDPs. We do not require any knowledge of the state or the number of successors of a state, but only a lower bound on the minimum transition probability, which has been advocated in the literature. Our algorithm learns the unknown MDP/CTMDP through repeated, directed sampling, thus spending less time on learning components with smaller impact on the mean payoff. In addition to providing probably approximately correct (PAC) bounds for our algorithm, we also demonstrate its practical nature by running experiments on standard benchmarks.
AB - Markov decision processes (MDPs) and continuous-time MDPs (CTMDPs) are the fundamental models for non-deterministic systems with probabilistic uncertainty. Mean payoff (a.k.a. long-run average reward) is one of the most classic objectives considered in their context. We provide the first practical algorithm to compute mean payoff probably approximately correctly in unknown MDPs. Our algorithm is anytime in the sense that if terminated prematurely, it returns an approximate value with the required confidence. Further, we extend it to unknown CTMDPs. We do not require any knowledge of the state or the number of successors of a state, but only a lower bound on the minimum transition probability, which has been advocated in the literature. Our algorithm learns the unknown MDP/CTMDP through repeated, directed sampling, thus spending less time on learning components with smaller impact on the mean payoff. In addition to providing probably approximately correct (PAC) bounds for our algorithm, we also demonstrate its practical nature by running experiments on standard benchmarks.
KW - Markov decision processes
KW - Mean payoff
KW - Reinforcement learning
KW - Statistical model checking
UR - http://www.scopus.com/inward/record.url?scp=85201395144&partnerID=8YFLogxK
U2 - 10.1007/s10703-024-00463-0
DO - 10.1007/s10703-024-00463-0
M3 - Article
AN - SCOPUS:85201395144
SN - 0925-9856
JO - Formal Methods in System Design
JF - Formal Methods in System Design
ER -