TY - GEN
T1 - Strategy representation by decision trees with linear classifiers
AU - Ashok, Pranav
AU - Brázdil, Tomáš
AU - Chatterjee, Krishnendu
AU - Křetínský, Jan
AU - Lampert, Christoph H.
AU - Toman, Viktor
N1 - Publisher Copyright:
© Springer Nature Switzerland AG 2019.
PY - 2019
Y1 - 2019
N2 - Graph games and Markov decision processes (MDPs) are standard models in reactive synthesis and verification of probabilistic systems with nondeterminism. The class of ω-regular winning conditions (e.g., safety, reachability, liveness, parity conditions) provides a robust and expressive specification formalism for properties that arise in the analysis of reactive systems. The resolutions of nondeterminism in games and MDPs are represented as strategies, and we consider succinct representation of such strategies. The decision-tree data structure from machine learning retains the flavor of decisions of strategies and allows entropy-based minimization to obtain succinct trees. However, in contrast to traditional machine-learning problems where small errors are allowed, for winning strategies in graph games and MDPs no error is allowed, and the decision tree must represent the entire strategy. In this work we propose decision trees with linear classifiers for the representation of strategies in graph games and MDPs. We have implemented strategy representation using this data structure, and we present experimental results for problems on graph games and MDPs, which show that this new data structure provides a much more efficient strategy representation compared to standard decision trees.
AB - Graph games and Markov decision processes (MDPs) are standard models in reactive synthesis and verification of probabilistic systems with nondeterminism. The class of ω-regular winning conditions (e.g., safety, reachability, liveness, parity conditions) provides a robust and expressive specification formalism for properties that arise in the analysis of reactive systems. The resolutions of nondeterminism in games and MDPs are represented as strategies, and we consider succinct representation of such strategies. The decision-tree data structure from machine learning retains the flavor of decisions of strategies and allows entropy-based minimization to obtain succinct trees. However, in contrast to traditional machine-learning problems where small errors are allowed, for winning strategies in graph games and MDPs no error is allowed, and the decision tree must represent the entire strategy. In this work we propose decision trees with linear classifiers for the representation of strategies in graph games and MDPs. We have implemented strategy representation using this data structure, and we present experimental results for problems on graph games and MDPs, which show that this new data structure provides a much more efficient strategy representation compared to standard decision trees.
UR - http://www.scopus.com/inward/record.url?scp=85072862788&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-30281-8_7
DO - 10.1007/978-3-030-30281-8_7
M3 - Conference contribution
AN - SCOPUS:85072862788
SN - 9783030302801
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 109
EP - 128
BT - Quantitative Evaluation of Systems - 16th International Conference, QEST 2019, Proceedings
A2 - Parker, David
A2 - Wolf, Verena
PB - Springer Verlag
T2 - 16th International Conference on Quantitative Evaluation of Systems, QEST 2019
Y2 - 10 September 2019 through 12 September 2019
ER -