Confidence-driven weighted retraining for predicting safety-critical failures in autonomous driving systems

Andrea Stocco, Paolo Tonella

Research output: Contribution to journal › Article › peer-review

18 Scopus citations

Abstract

Safe handling of hazardous driving situations is a task of high practical relevance for building reliable and trustworthy cyber-physical systems such as autonomous driving systems. This task requires accurately predicting the vehicle's confidence so that potentially harmful system failures can be prevented when unpredictable conditions make it less safe to drive. In this paper, we discuss the challenges of adapting a misbehavior predictor with knowledge mined during the execution of the main system. We then present a framework for the continual learning of misbehavior predictors, which records in-field behavioral data to determine which data are appropriate for adaptation. Our framework guides adaptive retraining using a novel combination of in-field confidence metric selection and reconstruction error-based weighting. We evaluate our framework by improving a misbehavior predictor from the literature on the Udacity simulator for self-driving cars. Our results show that our framework reduces the false positive rate by a large margin and adapts to nominal behavior drifts while retaining the original capability to predict failures up to several seconds in advance.
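The abstract mentions retraining guided by reconstruction error-based weighting. A minimal sketch of one plausible reading of that idea is shown below: in-field frames that a reconstruction-based confidence model handles poorly (high reconstruction error, i.e. low confidence) receive larger sample weights during retraining. The function names and the specific weighting scheme are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def reconstruction_errors(x, x_rec):
    # Per-sample mean squared reconstruction error. In the paper's setting
    # the reconstruction would come from an autoencoder-based confidence
    # estimator; here x_rec is simply assumed to be given.
    return np.mean((x - x_rec) ** 2, axis=1)

def retraining_weights(errors, eps=1e-8):
    # Hypothetical weighting scheme: weights are proportional to the
    # reconstruction error, normalized so the average weight is 1.
    # Samples the confidence model reconstructs poorly thus contribute
    # more to the retraining loss.
    w = np.asarray(errors, dtype=float) + eps
    return w * len(w) / w.sum()

# Illustrative usage: two in-field frames, one reconstructed perfectly,
# one with a large error; the second gets the larger retraining weight.
frames = np.array([[0.0, 0.0], [1.0, 1.0]])
reconstructions = np.array([[0.0, 0.0], [0.0, 0.0]])
weights = retraining_weights(reconstruction_errors(frames, reconstructions))
```

Such weights could then be passed to a standard weighted training loop (for example, the `sample_weight` argument of a Keras `Model.fit` call) so that low-confidence frames dominate the adaptation step.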

Original language: English
Article number: e2386
Journal: Journal of Software: Evolution and Process
Volume: 34
Issue number: 10
DOIs
State: Published - Oct 2022
Externally published: Yes

Keywords

  • AI testing
  • autonomous driving systems
  • continual learning
  • misbehavior prediction
