A Review of Safe Reinforcement Learning: Methods, Theories and Applications

Shangding Gu, Long Yang, Yali Du, Guang Chen, Florian Walter, Jun Wang, Alois Knoll

Research output: Contribution to journal › Review article › peer-review

Abstract

Reinforcement Learning (RL) has achieved tremendous success in many complex decision-making tasks. However, safety concerns arise when deploying RL in real-world applications such as autonomous driving and robotics, leading to a growing demand for safe RL algorithms. While safe control has a long history, the study of safe RL algorithms is still in its early stages. To establish a good foundation for future safe RL research, this paper provides a review of safe RL from the perspectives of methods, theories, and applications. First, we review the progress of safe RL along five dimensions and identify five crucial problems for deploying safe RL in real-world applications, coined '2H3W'. Second, we analyze algorithmic and theoretical progress from the perspective of answering the '2H3W' problems. In particular, the sample complexity of safe RL algorithms is reviewed and discussed, followed by an introduction to the applications and benchmarks of safe RL algorithms. Finally, we open a discussion of the challenging problems in safe RL, hoping to inspire future research on this thread. To advance the study of safe RL algorithms, we release an open-source repository containing major safe RL algorithms at the link.
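As background for the formalism the review builds on, safe RL is most commonly cast as a constrained Markov decision process (CMDP). The sketch below states that objective under standard assumptions (discounted infinite-horizon setting, policy \pi, reward r, safety cost c, discount factor \gamma, and cost budget d); it is illustrative notation, not necessarily the paper's own:

% Sketch of the CMDP objective commonly used in safe RL:
% maximize expected discounted reward subject to a bound on expected discounted cost.
\max_{\pi} \; J_r(\pi) = \mathbb{E}_{\tau \sim \pi}\!\left[ \sum_{t=0}^{\infty} \gamma^t \, r(s_t, a_t) \right]
\quad \text{subject to} \quad
J_c(\pi) = \mathbb{E}_{\tau \sim \pi}\!\left[ \sum_{t=0}^{\infty} \gamma^t \, c(s_t, a_t) \right] \le d.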

Keywords

  • constrained Markov decision processes
  • safe reinforcement learning
  • safety optimisation
  • safety problems
