TY - GEN
T1 - A Reinforcement Learning-Boosted Motion Planning Framework
T2 - 35th IEEE Intelligent Vehicles Symposium, IV 2024
AU - Trauth, Rainer
AU - Hobmeier, Alexander
AU - Betz, Johannes
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
AB - This study introduces a novel approach to autonomous motion planning that informs an analytical algorithm with a reinforcement learning (RL) agent within a Frenet coordinate system. The combination directly addresses the challenges of adaptability and safety in autonomous driving. Motion planning algorithms are essential for navigating dynamic and complex scenarios. Traditional methods, however, lack the flexibility required for unpredictable environments, whereas machine learning techniques, particularly RL, offer adaptability but suffer from instability and a lack of explainability. Our solution combines the predictability and stability of traditional motion planning algorithms with the dynamic adaptability of RL, resulting in a system that efficiently manages complex situations and adapts to changing environmental conditions. Evaluation of our integrated approach shows a significant reduction in collisions, improved risk management, and higher goal success rates across multiple scenarios. The code used in this research is publicly available as open-source software and can be accessed at the following link: https://github.com/TUM-AVS/Frenetix-RL.
KW - Adaptive algorithms
KW - Autonomous vehicles
KW - Collision avoidance
KW - Reinforcement learning
KW - Robot learning
UR - http://www.scopus.com/inward/record.url?scp=85191644855&partnerID=8YFLogxK
U2 - 10.1109/IV55156.2024.10588750
DO - 10.1109/IV55156.2024.10588750
M3 - Conference contribution
AN - SCOPUS:85191644855
T3 - IEEE Intelligent Vehicles Symposium, Proceedings
SP - 2413
EP - 2420
BT - 35th IEEE Intelligent Vehicles Symposium, IV 2024
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 2 June 2024 through 5 June 2024
ER -