TY - GEN
T1 - TUMTraf Intersection Dataset
T2 - 26th IEEE International Conference on Intelligent Transportation Systems, ITSC 2023
AU - Zimmer, Walter
AU - Creß, Christian
AU - Nguyen, Huu Tung
AU - Knoll, Alois C.
N1 - Publisher Copyright: © 2023 IEEE.
PY - 2023
Y1 - 2023
AB - Intelligent Transportation Systems (ITS) allow a drastic expansion of the visibility range and reduce occlusions for autonomous driving. Obtaining accurate detections, however, requires detailed, labeled sensor data for training. Unfortunately, high-quality 3D labels of LiDAR point clouds from the infrastructure perspective of an intersection are still rare. Therefore, we provide the TUM Traffic (TUMTraf) Intersection Dataset, which consists of labeled LiDAR point clouds and synchronized camera images. We recorded the sensor output of two roadside cameras and LiDARs mounted on intersection gantry bridges, and the data was labeled in 3D by experienced annotators. Furthermore, we provide calibration data between all sensors, which allows the projection of the 3D labels into the camera images and accurate data fusion. Our dataset consists of 4.8k images and point clouds with more than 57.4k manually labeled 3D boxes. With ten classes, it offers a high diversity of road users in complex driving maneuvers, e.g., left and right turns, overtaking, and U-turns. In our experiments, we provide baselines for the perception tasks. Overall, our dataset is a valuable contribution to the scientific community for performing complex 3D camera-LiDAR roadside perception tasks. Find data and code at https://innovation-mobility.com/tumtraf-dataset.
KW - 3D Perception
KW - Autonomous Driving
KW - Camera
KW - Dataset
KW - Intelligent Transportation Systems
KW - LiDAR
UR - http://www.scopus.com/inward/record.url?scp=85183291518&partnerID=8YFLogxK
DO - 10.1109/ITSC57777.2023.10422289
M3 - Conference contribution
AN - SCOPUS:85183291518
T3 - IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC
SP - 1030
EP - 1037
BT - 2023 IEEE 26th International Conference on Intelligent Transportation Systems, ITSC 2023
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 24 September 2023 through 28 September 2023
ER -