TY - GEN
T1 - Explainable Model-Agnostic Similarity and Confidence in Face Verification
AU - Knoche, Martin
AU - Teepe, Torben
AU - Hörmann, Stefan
AU - Rigoll, Gerhard
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Recently, face recognition systems have demonstrated remarkable performance and thus gained a vital role in our daily lives. They already surpass human face verification accuracy in many scenarios. However, they lack explanations for their predictions. Compared to human operators, typical face recognition systems generate only binary decisions without further explanation or insight into those decisions. This work focuses on explanations for face recognition systems, which are vital for developers and operators. First, we introduce a confidence score for those systems based on facial feature distances between two input images and the distribution of distances across a dataset. Second, we establish a novel visualization approach to obtain more meaningful predictions from a face recognition system, which maps the distance deviation based on a systematic occlusion of images. The result is blended with the original images and highlights similar and dissimilar facial regions. Lastly, we calculate confidence scores and explanation maps for several state-of-the-art face verification datasets and release the results on a web platform. We optimize the platform for user-friendly interaction and hope to further improve the understanding of machine learning decisions. The source code is available on GitHub at https://github.com/martlgap/x-face-verification, and the web platform is publicly available at http://explainable-face-verification.ey.r.appspot.com.
AB - Recently, face recognition systems have demonstrated remarkable performance and thus gained a vital role in our daily lives. They already surpass human face verification accuracy in many scenarios. However, they lack explanations for their predictions. Compared to human operators, typical face recognition systems generate only binary decisions without further explanation or insight into those decisions. This work focuses on explanations for face recognition systems, which are vital for developers and operators. First, we introduce a confidence score for those systems based on facial feature distances between two input images and the distribution of distances across a dataset. Second, we establish a novel visualization approach to obtain more meaningful predictions from a face recognition system, which maps the distance deviation based on a systematic occlusion of images. The result is blended with the original images and highlights similar and dissimilar facial regions. Lastly, we calculate confidence scores and explanation maps for several state-of-the-art face verification datasets and release the results on a web platform. We optimize the platform for user-friendly interaction and hope to further improve the understanding of machine learning decisions. The source code is available on GitHub at https://github.com/martlgap/x-face-verification, and the web platform is publicly available at http://explainable-face-verification.ey.r.appspot.com.
UR - http://www.scopus.com/inward/record.url?scp=85148334746&partnerID=8YFLogxK
U2 - 10.1109/WACVW58289.2023.00078
DO - 10.1109/WACVW58289.2023.00078
M3 - Conference contribution
AN - SCOPUS:85148334746
T3 - Proceedings - 2023 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops, WACVW 2023
SP - 711
EP - 718
BT - Proceedings - 2023 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops, WACVW 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2023 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops, WACVW 2023
Y2 - 3 January 2023 through 7 January 2023
ER -