Human-Centered Explainability for Intelligent Vehicles—A User Study

Julia Graefe, Selma Paden, Doreen Engelhardt, Klaus Bengler

Research output: Contribution to journal › Article › peer-review

Abstract

Advances in artificial intelligence (AI) are leading to an increased use of algorithm-generated user-adaptivity in everyday systems. Explainable AI aims to make algorithmic decision-making more transparent to humans. As future vehicles become more intelligent and user-adaptive, explainability will play an important role in ensuring that drivers understand the AI system’s functionalities and outputs. However, when integrating explainability into in-vehicle features, little is known about user needs and requirements or how to address them. We conducted a study with 59 participants focusing on how end-users evaluate explainability in the context of user-adaptive comfort and infotainment features. Results show that explanations foster perceived understandability and transparency of the system, but that the need for explanation may vary between features. Additionally, we found that insufficiently designed explanations can decrease acceptance of the system. Our findings underline the requirement for a user-centered approach in explainable AI and indicate approaches for future research.

Original language: English
Pages (from-to): 3237-3253
Number of pages: 17
Journal: International Journal of Human-Computer Interaction
Volume: 39
Issue number: 16
DOIs
State: Published - 2023

Keywords

  • Human–AI interaction
  • explainable AI
  • intelligent vehicles
  • user studies
  • user-adaptive
