TY - GEN
T1 - A Safe Reinforcement Learning driven Weights-varying Model Predictive Control for Autonomous Vehicle Motion Control
AU - Zarrouki, Baha
AU - Spanakakis, Marios
AU - Betz, Johannes
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
AB - Determining the optimal cost function parameters of Model Predictive Control (MPC) to optimize multiple control objectives is a challenging and time-consuming task. Multi-objective Bayesian Optimization (BO) techniques address this problem by determining a Pareto-optimal parameter set for an MPC with static weights. However, a single parameter set may not deliver optimal closed-loop control performance when the MPC's operating conditions change during operation, which calls for adapting the cost function weights at runtime. Deep Reinforcement Learning (RL) algorithms can automatically learn context-dependent optimal parameter sets and dynamically adapt them for a Weights-varying MPC (WMPC). However, learning cost function weights from scratch in a continuous action space may lead to unsafe operating states. To address this, we propose a novel approach that limits the RL action space to a safe learning space, represented by a catalog of pre-optimized, feasible BO Pareto-optimal weight sets. Instead of learning in a continuous space, the RL agent selects among discrete actions, each corresponding to a single set of Pareto-optimal weights, by proactively anticipating upcoming control tasks in a context-dependent manner. This introduces a two-step optimization: (1) safety-critical, with BO, and (2) performance-driven, with RL. Hence, even an untrained RL agent guarantees safe and optimal performance. Simulation results demonstrate that an untrained RL-WMPC exhibits Pareto-optimal closed-loop behavior, and that training the RL-WMPC yields performance beyond the Pareto front. The code used in this research is publicly accessible as open-source software: https://github.com/bzarr/TUM-CONTROL
UR - http://www.scopus.com/inward/record.url?scp=85199775091&partnerID=8YFLogxK
DO - 10.1109/IV55156.2024.10588747
M3 - Conference contribution
AN - SCOPUS:85199775091
T3 - IEEE Intelligent Vehicles Symposium, Proceedings
SP - 1401
EP - 1408
BT - 35th IEEE Intelligent Vehicles Symposium, IV 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 35th IEEE Intelligent Vehicles Symposium, IV 2024
Y2 - 2 June 2024 through 5 June 2024
ER -