TY - JOUR
T1 - Enhancing State Representation in Multi-Agent Reinforcement Learning for Platoon-Following Models
AU - Lin, Hongyi
AU - Lyu, Cheng
AU - He, Yixu
AU - Liu, Yang
AU - Gao, Kun
AU - Qu, Xiaobo
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - With the growing prevalence of autonomous vehicles and the integration of intelligent and connected technologies, the demand for effective and reliable vehicle speed control algorithms has become increasingly critical. Traditional car-following models, which primarily focus on individual vehicle pairs, exhibit limitations in complex traffic environments. To this end, this paper proposes an enhanced state representation for the application of multi-agent reinforcement learning (MARL) in platoon-following scenarios. Specifically, the proposed representation, influenced by feature engineering techniques in time series prediction tasks, thoroughly accounts for the intricate relative relationships between different vehicles within a platoon and can offer a distinctive perspective on traffic conditions to help improve the performance of MARL models. Experimental results show that the proposed method demonstrates superior performance in platoon-following scenarios across key metrics such as the time gap, distance gap, and speed, even reducing the time gap by 63%, compared with traditional state representation methods. These enhancements represent a significant step forward in ensuring the safety, efficiency, and reliability of platoon-following models within the context of autonomous vehicles.
KW - Feature engineering
KW - multi-agent reinforcement learning (MARL)
KW - state representation
KW - trajectory control
UR - http://www.scopus.com/inward/record.url?scp=85187981144&partnerID=8YFLogxK
U2 - 10.1109/TVT.2024.3373533
DO - 10.1109/TVT.2024.3373533
M3 - Article
AN - SCOPUS:85187981144
SN - 0018-9545
VL - 73
SP - 12110
EP - 12114
JO - IEEE Transactions on Vehicular Technology
JF - IEEE Transactions on Vehicular Technology
IS - 8
ER -