TY - GEN
T1 - Risk-Averse Optimization of Total Rewards in Markovian Models Using Deviation Measures
AU - Baier, Christel
AU - Piribauer, Jakob
AU - Starke, Maximilian
N1 - Publisher Copyright:
© Christel Baier, Jakob Piribauer, and Maximilian Starke.
PY - 2024/9
Y1 - 2024/9
N2 - This paper addresses objectives tailored to the risk-averse optimization of accumulated rewards in Markov decision processes (MDPs). The studied objectives require maximizing the expected value of the accumulated rewards minus a penalty factor times a deviation measure of the resulting distribution of rewards. Using the variance in this penalty mechanism leads to the variance-penalized expectation (VPE), for which it is known that optimal schedulers have to minimize future expected rewards when a high amount of rewards has been accumulated. This behavior is undesirable, as risk-averse behavior should keep the probability of particularly low outcomes low, but not discourage the accumulation of additional rewards on already good executions. The paper investigates the semi-variance, which only takes outcomes below the expected value into account, the mean absolute deviation (MAD), and the semi-MAD as alternative deviation measures. Furthermore, a penalty mechanism that penalizes outcomes below a fixed threshold is studied. For all of these objectives, the properties of optimal schedulers are specified and, in particular, the question of whether these objectives overcome the problem observed for the VPE is answered. Further, the resulting algorithmic problems on MDPs and Markov chains are investigated.
AB - This paper addresses objectives tailored to the risk-averse optimization of accumulated rewards in Markov decision processes (MDPs). The studied objectives require maximizing the expected value of the accumulated rewards minus a penalty factor times a deviation measure of the resulting distribution of rewards. Using the variance in this penalty mechanism leads to the variance-penalized expectation (VPE), for which it is known that optimal schedulers have to minimize future expected rewards when a high amount of rewards has been accumulated. This behavior is undesirable, as risk-averse behavior should keep the probability of particularly low outcomes low, but not discourage the accumulation of additional rewards on already good executions. The paper investigates the semi-variance, which only takes outcomes below the expected value into account, the mean absolute deviation (MAD), and the semi-MAD as alternative deviation measures. Furthermore, a penalty mechanism that penalizes outcomes below a fixed threshold is studied. For all of these objectives, the properties of optimal schedulers are specified and, in particular, the question of whether these objectives overcome the problem observed for the VPE is answered. Further, the resulting algorithmic problems on MDPs and Markov chains are investigated.
KW - Markov decision processes
KW - deviation measures
KW - risk-aversion
KW - total reward
UR - http://www.scopus.com/inward/record.url?scp=85203560367&partnerID=8YFLogxK
U2 - 10.4230/LIPIcs.CONCUR.2024.9
DO - 10.4230/LIPIcs.CONCUR.2024.9
M3 - Conference contribution
AN - SCOPUS:85203560367
T3 - Leibniz International Proceedings in Informatics, LIPIcs
BT - 35th International Conference on Concurrency Theory, CONCUR 2024
A2 - Majumdar, Rupak
A2 - Silva, Alexandra
PB - Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing
T2 - 35th International Conference on Concurrency Theory, CONCUR 2024
Y2 - 9 September 2024 through 13 September 2024
ER -