TY - GEN
T1 - A Safety-Adapted Loss for Pedestrian Detection in Autonomous Driving
AU - Lyssenko, Maria
AU - Pimplikar, Piyush
AU - Bieshaar, Maarten
AU - Nozarian, Farzad
AU - Triebel, Rudolph
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - In safety-critical domains like autonomous driving (AD), errors by the object detector may endanger pedestrians and other vulnerable road users (VRU). As raw evaluation metrics are not an adequate safety indicator, recent works leverage domain knowledge to identify safety-relevant VRU and to back-annotate the criticality of the interaction to the object detector. However, those approaches do not consider the safety factor in the deep neural network (DNN) training process. Thus, state-of-the-art DNNs penalize all misdetections equally, irrespective of their importance for the safe driving task. Hence, to mitigate the occurrence of safety-critical failure cases like false negatives, a safety-aware training strategy is needed to enhance the detection performance for critical pedestrians. In this paper, we propose a novel, safety-adapted loss variation that leverages the estimated per-pedestrian criticality during training. To this end, we exploit the reachable set-based time-to-collision (TTCRSB) metric from the motion domain along with distance information to account for the worst-case threat. Our evaluation results using RetinaNet and FCOS on the nuScenes dataset demonstrate that training the models with our safety-adapted loss function mitigates the misdetection of safety-critical pedestrians while maintaining robust performance for the general case, i.e., safety-irrelevant pedestrians.
AB - In safety-critical domains like autonomous driving (AD), errors by the object detector may endanger pedestrians and other vulnerable road users (VRU). As raw evaluation metrics are not an adequate safety indicator, recent works leverage domain knowledge to identify safety-relevant VRU and to back-annotate the criticality of the interaction to the object detector. However, those approaches do not consider the safety factor in the deep neural network (DNN) training process. Thus, state-of-the-art DNNs penalize all misdetections equally, irrespective of their importance for the safe driving task. Hence, to mitigate the occurrence of safety-critical failure cases like false negatives, a safety-aware training strategy is needed to enhance the detection performance for critical pedestrians. In this paper, we propose a novel, safety-adapted loss variation that leverages the estimated per-pedestrian criticality during training. To this end, we exploit the reachable set-based time-to-collision (TTCRSB) metric from the motion domain along with distance information to account for the worst-case threat. Our evaluation results using RetinaNet and FCOS on the nuScenes dataset demonstrate that training the models with our safety-adapted loss function mitigates the misdetection of safety-critical pedestrians while maintaining robust performance for the general case, i.e., safety-irrelevant pedestrians.
UR - http://www.scopus.com/inward/record.url?scp=85202429565&partnerID=8YFLogxK
U2 - 10.1109/ICRA57147.2024.10610038
DO - 10.1109/ICRA57147.2024.10610038
M3 - Conference contribution
AN - SCOPUS:85202429565
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 4428
EP - 4434
BT - 2024 IEEE International Conference on Robotics and Automation, ICRA 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 IEEE International Conference on Robotics and Automation, ICRA 2024
Y2 - 13 May 2024 through 17 May 2024
ER -