TY - GEN
T1 - Predicting Safety Misbehaviours in Autonomous Driving Systems Using Uncertainty Quantification
AU - Grewal, Ruben
AU - Tonella, Paolo
AU - Stocco, Andrea
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - The automated real-time recognition of unexpected situations plays a crucial role in the safety of autonomous vehicles, especially in unsupported and unpredictable scenarios. This paper evaluates different Bayesian uncertainty quantification methods from the deep learning domain for the anticipatory testing of safety-critical misbehaviours during system-level simulation-based testing. Specifically, we compute uncertainty scores as the vehicle executes, following the intuition that high uncertainty scores are indicative of unsupported runtime conditions that can be used to distinguish safe from failure-inducing driving behaviours. In our study, we conducted an evaluation of the effectiveness and computational overhead associated with two Bayesian uncertainty quantification methods, namely MC-Dropout and Deep Ensembles, for misbehaviour avoidance. Overall, for three benchmarks from the Udacity simulator comprising both out-of-distribution and unsafe conditions introduced via mutation testing, both methods successfully detected a high number of out-of-bounds episodes, providing early warnings several seconds in advance and outperforming two state-of-the-art misbehaviour prediction methods based on autoencoders and attention maps in terms of effectiveness and efficiency. Notably, Deep Ensembles detected most misbehaviours without any false alarms and did so even when employing a relatively small number of models, making them computationally feasible for real-time detection. Our findings suggest that incorporating uncertainty quantification methods is a viable approach for building fail-safe mechanisms in deep neural network-based autonomous vehicles.
AB - The automated real-time recognition of unexpected situations plays a crucial role in the safety of autonomous vehicles, especially in unsupported and unpredictable scenarios. This paper evaluates different Bayesian uncertainty quantification methods from the deep learning domain for the anticipatory testing of safety-critical misbehaviours during system-level simulation-based testing. Specifically, we compute uncertainty scores as the vehicle executes, following the intuition that high uncertainty scores are indicative of unsupported runtime conditions that can be used to distinguish safe from failure-inducing driving behaviours. In our study, we conducted an evaluation of the effectiveness and computational overhead associated with two Bayesian uncertainty quantification methods, namely MC-Dropout and Deep Ensembles, for misbehaviour avoidance. Overall, for three benchmarks from the Udacity simulator comprising both out-of-distribution and unsafe conditions introduced via mutation testing, both methods successfully detected a high number of out-of-bounds episodes, providing early warnings several seconds in advance and outperforming two state-of-the-art misbehaviour prediction methods based on autoencoders and attention maps in terms of effectiveness and efficiency. Notably, Deep Ensembles detected most misbehaviours without any false alarms and did so even when employing a relatively small number of models, making them computationally feasible for real-time detection. Our findings suggest that incorporating uncertainty quantification methods is a viable approach for building fail-safe mechanisms in deep neural network-based autonomous vehicles.
KW - autonomous vehicles testing
KW - failure prediction
KW - self-driving cars
KW - uncertainty quantification
UR - http://www.scopus.com/inward/record.url?scp=85203833005&partnerID=8YFLogxK
U2 - 10.1109/ICST60714.2024.00016
DO - 10.1109/ICST60714.2024.00016
M3 - Conference contribution
AN - SCOPUS:85203833005
T3 - Proceedings - 2024 IEEE Conference on Software Testing, Verification and Validation, ICST 2024
SP - 70
EP - 81
BT - Proceedings - 2024 IEEE Conference on Software Testing, Verification and Validation, ICST 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 17th IEEE Conference on Software Testing, Verification and Validation, ICST 2024
Y2 - 27 May 2024 through 31 May 2024
ER -