TY - GEN
T1 - Multi-Agent Reinforcement Learning for Cooperative Vehicle Motion Control
AU - Ahmic, Kenan
AU - Ultsch, Johannes
AU - Brembeck, Jonathan
AU - Burschka, Darius
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - The longitudinal and lateral low-level motion control of multiple vehicles within a platoon is a challenging task, since several different control objectives need to be solved: (i) each vehicle in the platoon needs to follow the reference path, (ii) the leading vehicle needs to drive with a desired reference velocity, and (iii) the following vehicles need to maintain a safe spacing distance to their respective preceding vehicle. Typically, several distinct controllers are developed for each task individually, which increases both the engineering effort and the susceptibility to errors. We address this issue and present a cooperative low-level vehicle motion controller based on Multi-Agent Reinforcement Learning (MARL) that is able to solve all of the above-mentioned control objectives for both the leading vehicle and the following vehicles. To this end, we apply parameter sharing within MARL to update a single control policy in a centralized fashion using the experiences of all vehicles in the environment. Additionally, we utilize the concept of agent indication during the training process, enabling the policy to specialize in the control objectives of the vehicle it is currently controlling. This leads to a unified control approach and makes the development of further controllers redundant. The simulation-based assessment demonstrates the effectiveness of the learned policy and shows that it successfully solves all of the above-mentioned control objectives for both vehicle roles, even on unseen paths.
AB - The longitudinal and lateral low-level motion control of multiple vehicles within a platoon is a challenging task, since several different control objectives need to be solved: (i) each vehicle in the platoon needs to follow the reference path, (ii) the leading vehicle needs to drive with a desired reference velocity, and (iii) the following vehicles need to maintain a safe spacing distance to their respective preceding vehicle. Typically, several distinct controllers are developed for each task individually, which increases both the engineering effort and the susceptibility to errors. We address this issue and present a cooperative low-level vehicle motion controller based on Multi-Agent Reinforcement Learning (MARL) that is able to solve all of the above-mentioned control objectives for both the leading vehicle and the following vehicles. To this end, we apply parameter sharing within MARL to update a single control policy in a centralized fashion using the experiences of all vehicles in the environment. Additionally, we utilize the concept of agent indication during the training process, enabling the policy to specialize in the control objectives of the vehicle it is currently controlling. This leads to a unified control approach and makes the development of further controllers redundant. The simulation-based assessment demonstrates the effectiveness of the learned policy and shows that it successfully solves all of the above-mentioned control objectives for both vehicle roles, even on unseen paths.
UR - http://www.scopus.com/inward/record.url?scp=105001707813&partnerID=8YFLogxK
U2 - 10.1109/ITSC58415.2024.10920154
DO - 10.1109/ITSC58415.2024.10920154
M3 - Conference contribution
AN - SCOPUS:105001707813
T3 - IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC
SP - 2637
EP - 2644
BT - 2024 IEEE 27th International Conference on Intelligent Transportation Systems, ITSC 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 27th IEEE International Conference on Intelligent Transportation Systems, ITSC 2024
Y2 - 24 September 2024 through 27 September 2024
ER -