TY - GEN
T1 - Enhancing Realistic Floating Car Observers in Microscopic Traffic Simulation
AU - Gerner, Jeremias
AU - Rößle, Dominik
AU - Cremers, Daniel
AU - Bogenberger, Klaus
AU - Schön, Torsten
AU - Schmidtner, Stefanie
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - We present a system that enables realistic detection of traffic participants by Floating Car Observers (FCOs) directly within microscopic traffic simulations. Point clouds are used to transform the two-dimensional simulation into a three-dimensional environment, in which vehicles can be equipped with up to four camera sensors. Employing computer vision (CV) strategies, we identify which traffic participants would be detected by real-world sensor systems. We use the resulting system to generate datasets: various vehicles move through a simulation while we record which traffic participants the approach recognizes. Additionally, for each simulation step, an image of the current simulation state and the position vectors of the traffic participants are documented. We use these datasets to train neural networks to replicate the results of the CV method. The trained Vision Transformer and ResNet architectures achieve accuracies of up to 90%. Compared to the CV approach, the neural networks provide up to an 18-fold speedup. We have made the source code, datasets, and trained models openly available at: github.com/urbanAIthi/SUMO-FCO.
AB - We present a system that enables realistic detection of traffic participants by Floating Car Observers (FCOs) directly within microscopic traffic simulations. Point clouds are used to transform the two-dimensional simulation into a three-dimensional environment, in which vehicles can be equipped with up to four camera sensors. Employing computer vision (CV) strategies, we identify which traffic participants would be detected by real-world sensor systems. We use the resulting system to generate datasets: various vehicles move through a simulation while we record which traffic participants the approach recognizes. Additionally, for each simulation step, an image of the current simulation state and the position vectors of the traffic participants are documented. We use these datasets to train neural networks to replicate the results of the CV method. The trained Vision Transformer and ResNet architectures achieve accuracies of up to 90%. Compared to the CV approach, the neural networks provide up to an 18-fold speedup. We have made the source code, datasets, and trained models openly available at: github.com/urbanAIthi/SUMO-FCO.
UR - http://www.scopus.com/inward/record.url?scp=85186517591&partnerID=8YFLogxK
U2 - 10.1109/ITSC57777.2023.10422398
DO - 10.1109/ITSC57777.2023.10422398
M3 - Conference contribution
AN - SCOPUS:85186517591
T3 - IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC
SP - 2396
EP - 2403
BT - 2023 IEEE 26th International Conference on Intelligent Transportation Systems, ITSC 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 26th IEEE International Conference on Intelligent Transportation Systems, ITSC 2023
Y2 - 24 September 2023 through 28 September 2023
ER -