TY - GEN
T1 - Temp-Frustum Net
T2 - 32nd IEEE Intelligent Vehicles Symposium, IV 2021
AU - Ercelik, Emec
AU - Yurtsever, Ekim
AU - Knoll, Alois
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/7/11
Y1 - 2021/7/11
AB - 3D object detection is a core component of automated driving systems. State-of-the-art methods fuse RGB imagery and LiDAR point cloud data frame-by-frame for 3D bounding box regression. However, frame-by-frame 3D object detection suffers from noise, field-of-view obstruction, and sparsity. We propose a novel Temporal Fusion Module (TFM) that uses information from previous time-steps to mitigate these problems. First, a state-of-the-art frustum network extracts point cloud features from raw RGB and LiDAR point cloud data frame-by-frame. Then, the TFM fuses these features with a recurrent neural network. As a result, 3D object detection becomes robust against single-frame failures and transient occlusions. Experiments on the KITTI object tracking dataset demonstrate the effectiveness of the proposed TFM, where we obtain 6%, 4%, and 6% improvements on the Car, Pedestrian, and Cyclist classes, respectively, compared to frame-by-frame baselines. Furthermore, ablation studies confirm that the improvement stems from temporal fusion and show the effects of different placements of the TFM in the object detection pipeline. Our code is open-source and available at https://github.com/emecercelik/Temp-Frustum-Net.git.
UR - http://www.scopus.com/inward/record.url?scp=85118390015&partnerID=8YFLogxK
U2 - 10.1109/IV48863.2021.9575392
DO - 10.1109/IV48863.2021.9575392
M3 - Conference contribution
AN - SCOPUS:85118390015
T3 - IEEE Intelligent Vehicles Symposium, Proceedings
SP - 1095
EP - 1101
BT - 32nd IEEE Intelligent Vehicles Symposium, IV 2021
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 11 July 2021 through 17 July 2021
ER -