TY - GEN
T1 - Interpretable Model-Agnostic Plausibility Verification for 2D Object Detectors Using Domain-Invariant Concept Bottleneck Models
AU - Keser, Mert
AU - Schwalbe, Gesina
AU - Nowzad, Azarm
AU - Knoll, Alois
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Despite their unchallenged performance, deep neural network (DNN) based object detectors (OD) for computer vision have inherent, hard-to-verify limitations like brittleness, opacity, and unknown behavior on corner cases. Therefore, operation-time safety measures like monitors will be inevitable - even mandatory - for use in safety-critical applications like automated driving (AD). This paper presents an approach for plausibilization of OD detections using a small model-agnostic, robust, interpretable, and domain-invariant image classification model. The safety requirements of interpretability and robustness are achieved by using a small concept bottleneck model (CBM), a DNN intercepted by interpretable intermediate outputs. The domain-invariance is necessary for robustness against common domain shifts, and for cheap adaptation to diverse AD settings. While vanilla CBMs are shown here to fail in the case of domain shifts like natural perturbations, we substantially improve the CBM by combining it with trainable color-invariance filters developed for domain adaptation. Furthermore, the monitor that utilizes CBMs with trainable color-invariance filters is successfully applied in an AD OD setting for the detection of hallucinated objects with zero-shot domain adaptation, and for false-positive detection with few-shot adaptation, proving this to be a promising approach for error monitoring.
AB - Despite their unchallenged performance, deep neural network (DNN) based object detectors (OD) for computer vision have inherent, hard-to-verify limitations like brittleness, opacity, and unknown behavior on corner cases. Therefore, operation-time safety measures like monitors will be inevitable - even mandatory - for use in safety-critical applications like automated driving (AD). This paper presents an approach for plausibilization of OD detections using a small model-agnostic, robust, interpretable, and domain-invariant image classification model. The safety requirements of interpretability and robustness are achieved by using a small concept bottleneck model (CBM), a DNN intercepted by interpretable intermediate outputs. The domain-invariance is necessary for robustness against common domain shifts, and for cheap adaptation to diverse AD settings. While vanilla CBMs are shown here to fail in the case of domain shifts like natural perturbations, we substantially improve the CBM by combining it with trainable color-invariance filters developed for domain adaptation. Furthermore, the monitor that utilizes CBMs with trainable color-invariance filters is successfully applied in an AD OD setting for the detection of hallucinated objects with zero-shot domain adaptation, and for false-positive detection with few-shot adaptation, proving this to be a promising approach for error monitoring.
UR - http://www.scopus.com/inward/record.url?scp=85170822340&partnerID=8YFLogxK
U2 - 10.1109/CVPRW59228.2023.00403
DO - 10.1109/CVPRW59228.2023.00403
M3 - Conference contribution
AN - SCOPUS:85170822340
T3 - IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
SP - 3891
EP - 3900
BT - Proceedings - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2023
PB - IEEE Computer Society
T2 - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2023
Y2 - 18 June 2023 through 22 June 2023
ER -