TY - JOUR
T1 - A Review of Safe Reinforcement Learning
T2 - Methods, Theories and Applications
AU - Gu, Shangding
AU - Yang, Long
AU - Du, Yali
AU - Chen, Guang
AU - Walter, Florian
AU - Wang, Jun
AU - Knoll, Alois
N1 - Publisher Copyright:
© 1979-2012 IEEE.
PY - 2024
Y1 - 2024
N2 - Reinforcement Learning (RL) has achieved tremendous success in many complex decision-making tasks. However, safety concerns arise when deploying RL in real-world applications such as autonomous driving and robotics, leading to a growing demand for safe RL algorithms. While safe control has a long history, the study of safe RL algorithms is still in its early stages. To establish a good foundation for future safe RL research, in this paper we provide a review of safe RL from the perspectives of methods, theories, and applications. First, we review the progress of safe RL along five dimensions and identify five crucial problems for deploying safe RL in real-world applications, coined '2H3W'. Second, we analyze algorithmic and theoretical progress from the perspective of answering the '2H3W' problems. In particular, the sample complexity of safe RL algorithms is reviewed and discussed, followed by an introduction to the applications and benchmarks of safe RL algorithms. Finally, we open a discussion of challenging problems in safe RL, hoping to inspire future research on this topic. To advance the study of safe RL algorithms, we release an open-source repository containing major safe RL algorithms at the link.
AB - Reinforcement Learning (RL) has achieved tremendous success in many complex decision-making tasks. However, safety concerns arise when deploying RL in real-world applications such as autonomous driving and robotics, leading to a growing demand for safe RL algorithms. While safe control has a long history, the study of safe RL algorithms is still in its early stages. To establish a good foundation for future safe RL research, in this paper we provide a review of safe RL from the perspectives of methods, theories, and applications. First, we review the progress of safe RL along five dimensions and identify five crucial problems for deploying safe RL in real-world applications, coined '2H3W'. Second, we analyze algorithmic and theoretical progress from the perspective of answering the '2H3W' problems. In particular, the sample complexity of safe RL algorithms is reviewed and discussed, followed by an introduction to the applications and benchmarks of safe RL algorithms. Finally, we open a discussion of challenging problems in safe RL, hoping to inspire future research on this topic. To advance the study of safe RL algorithms, we release an open-source repository containing major safe RL algorithms at the link.
KW - constrained Markov decision processes
KW - Safe reinforcement learning
KW - safety optimisation
KW - safety problems
UR - http://www.scopus.com/inward/record.url?scp=85204119035&partnerID=8YFLogxK
U2 - 10.1109/TPAMI.2024.3457538
DO - 10.1109/TPAMI.2024.3457538
M3 - Review article
AN - SCOPUS:85204119035
SN - 0162-8828
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
JF - IEEE Transactions on Pattern Analysis and Machine Intelligence
ER -