TY - GEN
T1 - Learning-based mean-payoff optimization in an unknown MDP under omega-regular constraints
AU - Křetínský, Jan
AU - Pérez, Guillermo A.
AU - Raskin, Jean-François
N1 - Publisher Copyright:
© Jan Křetínský, Guillermo A. Pérez, and Jean-François Raskin.
PY - 2018/8/1
Y1 - 2018/8/1
AB - We formalize the problem of maximizing the mean-payoff value with high probability while satisfying a parity objective in a Markov decision process (MDP) with unknown probabilistic transition function and unknown reward function. Assuming the support of the unknown transition function and a lower bound on the minimal transition probability are known in advance, we show that in MDPs consisting of a single end component, two combinations of guarantees on the parity and mean-payoff objectives can be achieved depending on how much memory one is willing to use. (i) For all ε and γ we can construct an online-learning finite-memory strategy that almost-surely satisfies the parity objective and which achieves an ε-optimal mean payoff with probability at least 1 − γ. (ii) Alternatively, for all ε and γ there exists an online-learning infinite-memory strategy that satisfies the parity objective surely and which achieves an ε-optimal mean payoff with probability at least 1 − γ. We extend the above results to MDPs consisting of more than one end component in a natural way. Finally, we show that the aforementioned guarantees are tight, i.e. there are MDPs for which stronger combinations of the guarantees cannot be ensured.
KW - Beyond worst case
KW - Markov decision processes
KW - Reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=85053629887&partnerID=8YFLogxK
U2 - 10.4230/LIPIcs.CONCUR.2018.8
DO - 10.4230/LIPIcs.CONCUR.2018.8
M3 - Conference contribution
AN - SCOPUS:85053629887
SN - 9783959770873
T3 - Leibniz International Proceedings in Informatics, LIPIcs
BT - 29th International Conference on Concurrency Theory, CONCUR 2018
A2 - Schewe, Sven
A2 - Zhang, Lijun
PB - Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing
T2 - 29th International Conference on Concurrency Theory, CONCUR 2018
Y2 - 4 September 2018 through 7 September 2018
ER -