TY - GEN
T1 - Stopping Criteria for Value Iteration on Stochastic Games with Quantitative Objectives
AU - Kretinsky, Jan
AU - Meggendorfer, Tobias
AU - Weininger, Maximilian
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - A classic solution technique for Markov decision processes (MDP) and stochastic games (SG) is value iteration (VI). Due to its good practical performance, this approximative approach is typically preferred over exact techniques, even though no practical bounds on the imprecision of the result could be given until recently. As a consequence, even the most used model checkers could return arbitrarily wrong results. Over the past decade, different works derived stopping criteria, indicating when the precision reaches the desired level, for various settings, in particular MDP with reachability, total reward, and mean payoff, and SG with reachability. In this paper, we provide the first stopping criteria for VI on SG with total reward and mean payoff, yielding the first anytime algorithms in these settings. To this end, we provide the solution in two flavours: first through a reduction to the MDP case and second directly on SG. The former is simpler and automatically utilizes any advances on MDP. The latter allows for more local computations, heading towards better practical efficiency. Our solution unifies the previously mentioned approaches for MDP and SG and their underlying ideas. To achieve this, we isolate objective-specific subroutines as well as identify objective-independent concepts. These structural concepts, while surprisingly simple, form the very essence of the unified solution.
AB - A classic solution technique for Markov decision processes (MDP) and stochastic games (SG) is value iteration (VI). Due to its good practical performance, this approximative approach is typically preferred over exact techniques, even though no practical bounds on the imprecision of the result could be given until recently. As a consequence, even the most used model checkers could return arbitrarily wrong results. Over the past decade, different works derived stopping criteria, indicating when the precision reaches the desired level, for various settings, in particular MDP with reachability, total reward, and mean payoff, and SG with reachability. In this paper, we provide the first stopping criteria for VI on SG with total reward and mean payoff, yielding the first anytime algorithms in these settings. To this end, we provide the solution in two flavours: first through a reduction to the MDP case and second directly on SG. The former is simpler and automatically utilizes any advances on MDP. The latter allows for more local computations, heading towards better practical efficiency. Our solution unifies the previously mentioned approaches for MDP and SG and their underlying ideas. To achieve this, we isolate objective-specific subroutines as well as identify objective-independent concepts. These structural concepts, while surprisingly simple, form the very essence of the unified solution.
KW - Stochastic games
KW - value iteration
UR - http://www.scopus.com/inward/record.url?scp=85166017438&partnerID=8YFLogxK
U2 - 10.1109/LICS56636.2023.10175771
DO - 10.1109/LICS56636.2023.10175771
M3 - Conference contribution
AN - SCOPUS:85166017438
T3 - Proceedings - Symposium on Logic in Computer Science
BT - 2023 38th Annual ACM/IEEE Symposium on Logic in Computer Science, LICS 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 38th Annual ACM/IEEE Symposium on Logic in Computer Science, LICS 2023
Y2 - 26 June 2023 through 29 June 2023
ER -