TY - JOUR
T1 - FedBEVT
T2 - Federated Learning Bird's Eye View Perception Transformer in Road Traffic Systems
AU - Song, Rui
AU - Xu, Runsheng
AU - Festag, Andreas
AU - Ma, Jiaqi
AU - Knoll, Alois
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2024/1/1
Y1 - 2024/1/1
N2 - Bird's eye view (BEV) perception is becoming increasingly important in autonomous driving. It uses multi-view camera data to learn a transformer model that directly projects the perception of the road environment onto the BEV perspective. However, training a transformer model often requires a large amount of data, and as camera data for road traffic are often private, they are typically not shared. Federated learning offers a solution that enables clients to collaboratively train models by exchanging model parameters rather than raw data. In this article, we introduce FedBEVT, a federated transformer learning approach for BEV perception. To address two common data heterogeneity issues in federated BEV perception, (i) diverse sensor poses and (ii) varying numbers of sensors across perception systems, we propose two approaches: Federated Learning with Camera-Attentive Personalization (FedCaP) and Adaptive Multi-Camera Masking (AMCM), respectively. To evaluate our method in real-world settings, we create a dataset covering four typical federated use cases. Our findings show that FedBEVT outperforms the baseline approaches in all four use cases, demonstrating the potential of our approach for improving BEV perception in autonomous driving.
AB - Bird's eye view (BEV) perception is becoming increasingly important in autonomous driving. It uses multi-view camera data to learn a transformer model that directly projects the perception of the road environment onto the BEV perspective. However, training a transformer model often requires a large amount of data, and as camera data for road traffic are often private, they are typically not shared. Federated learning offers a solution that enables clients to collaboratively train models by exchanging model parameters rather than raw data. In this article, we introduce FedBEVT, a federated transformer learning approach for BEV perception. To address two common data heterogeneity issues in federated BEV perception, (i) diverse sensor poses and (ii) varying numbers of sensors across perception systems, we propose two approaches: Federated Learning with Camera-Attentive Personalization (FedCaP) and Adaptive Multi-Camera Masking (AMCM), respectively. To evaluate our method in real-world settings, we create a dataset covering four typical federated use cases. Our findings show that FedBEVT outperforms the baseline approaches in all four use cases, demonstrating the potential of our approach for improving BEV perception in autonomous driving.
KW - Federated learning
KW - bird's eye view
KW - cooperative intelligent transportation systems
KW - road environmental perception
KW - vision transformer
UR - http://www.scopus.com/inward/record.url?scp=85170555495&partnerID=8YFLogxK
U2 - 10.1109/TIV.2023.3310674
DO - 10.1109/TIV.2023.3310674
M3 - Article
AN - SCOPUS:85170555495
SN - 2379-8858
VL - 9
SP - 958
EP - 969
JO - IEEE Transactions on Intelligent Vehicles
JF - IEEE Transactions on Intelligent Vehicles
IS - 1
ER -