TY - GEN
T1 - Towards Engineered Safe AI with Modular Concept Models
AU - Heidemann, Lena
AU - Kurzidem, Iwo
AU - Monnet, Maureen
AU - Roscher, Karsten
AU - Günnemann, Stephan
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - The inherent complexity and uncertainty of Machine Learning (ML) make it difficult for ML-based Computer Vision (CV) approaches to become prevalent in safety-critical domains like autonomous driving, despite their high performance. A crucial challenge in these domains is the safety assurance of ML-based systems. To address this, recent safety standardization in the automotive domain has introduced an ML safety lifecycle following an iterative development process. While this approach facilitates safety assurance, its iterative nature requires frequent adaptation and optimization of the ML function, which may include costly retraining of the ML model and is not guaranteed to converge to a safe AI solution. In this paper, we propose a modular ML approach which allows for more efficient and targeted measures for each of the modules and process steps. Each module of the modular concept model represents one visual concept, and its output is aggregated with the other modules' outputs into a task output. The design choices of a modular concept model can be categorized into the selection of the concept modules, the aggregation of their outputs, and the training of the concept modules. Using the example of traffic sign classification, we present each step of the involved design choices and the corresponding targeted measures to take in an iterative development process for engineering safe AI.
AB - The inherent complexity and uncertainty of Machine Learning (ML) make it difficult for ML-based Computer Vision (CV) approaches to become prevalent in safety-critical domains like autonomous driving, despite their high performance. A crucial challenge in these domains is the safety assurance of ML-based systems. To address this, recent safety standardization in the automotive domain has introduced an ML safety lifecycle following an iterative development process. While this approach facilitates safety assurance, its iterative nature requires frequent adaptation and optimization of the ML function, which may include costly retraining of the ML model and is not guaranteed to converge to a safe AI solution. In this paper, we propose a modular ML approach which allows for more efficient and targeted measures for each of the modules and process steps. Each module of the modular concept model represents one visual concept, and its output is aggregated with the other modules' outputs into a task output. The design choices of a modular concept model can be categorized into the selection of the concept modules, the aggregation of their outputs, and the training of the concept modules. Using the example of traffic sign classification, we present each step of the involved design choices and the corresponding targeted measures to take in an iterative development process for engineering safe AI.
KW - Computer Vision
KW - Concept Models
KW - Deep Neural Networks
KW - Explainable AI
KW - Interpretable Models
KW - ML Safety
KW - Machine Learning
KW - Modular Concept Models
KW - Modular Deep Learning
KW - Safe AI
KW - Safe ML
UR - http://www.scopus.com/inward/record.url?scp=85206437721&partnerID=8YFLogxK
U2 - 10.1109/CVPRW63382.2024.00360
DO - 10.1109/CVPRW63382.2024.00360
M3 - Conference contribution
AN - SCOPUS:85206437721
T3 - IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
SP - 3564
EP - 3573
BT - Proceedings - 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2024
PB - IEEE Computer Society
T2 - 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2024
Y2 - 16 June 2024 through 22 June 2024
ER -