TY - GEN
T1 - Human Centered Explainability for Intelligent Vehicles - A User Study
AU - Graefe, Julia
AU - Paden, Selma
AU - Engelhardt, Doreen
AU - Bengler, Klaus
N1 - Publisher Copyright:
© 2022 ACM.
PY - 2022/9/17
Y1 - 2022/9/17
N2 - Advances in artificial intelligence (AI) are leading to an increased use of algorithm-generated user-adaptivity in everyday products. Explainable AI aims to make algorithmic decision-making more transparent to humans. As future vehicles become more intelligent and user-adaptive, explainability will play an important role in ensuring that drivers understand the AI system's functionalities and outputs. However, when integrating explainability into in-vehicle features, there is a lack of knowledge about user needs and requirements and how to address them. We conducted a study with 59 participants focusing on how end-users evaluate explainability in the context of user-adaptive comfort and infotainment features. Results show that explanations foster perceived understandability and transparency of the system, but that the need for explanation may vary between features. Additionally, we found that insufficiently designed explanations can decrease acceptance of the system. Our findings underline the requirement for a user-centered approach in explainable AI and indicate approaches for future research.
AB - Advances in artificial intelligence (AI) are leading to an increased use of algorithm-generated user-adaptivity in everyday products. Explainable AI aims to make algorithmic decision-making more transparent to humans. As future vehicles become more intelligent and user-adaptive, explainability will play an important role in ensuring that drivers understand the AI system's functionalities and outputs. However, when integrating explainability into in-vehicle features, there is a lack of knowledge about user needs and requirements and how to address them. We conducted a study with 59 participants focusing on how end-users evaluate explainability in the context of user-adaptive comfort and infotainment features. Results show that explanations foster perceived understandability and transparency of the system, but that the need for explanation may vary between features. Additionally, we found that insufficiently designed explanations can decrease acceptance of the system. Our findings underline the requirement for a user-centered approach in explainable AI and indicate approaches for future research.
KW - Human-AI interaction
KW - explainable AI
KW - intelligent vehicles
KW - user studies
KW - user-adaptive
UR - http://www.scopus.com/inward/record.url?scp=85139493719&partnerID=8YFLogxK
U2 - 10.1145/3543174.3546846
DO - 10.1145/3543174.3546846
M3 - Conference contribution
AN - SCOPUS:85139493719
T3 - Main Proceedings - 14th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2022
SP - 297
EP - 306
BT - Main Proceedings - 14th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2022
PB - Association for Computing Machinery, Inc
T2 - 14th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2022
Y2 - 17 September 2022 through 20 September 2022
ER -