Abstract
Advances in artificial intelligence (AI) are leading to an increased use of algorithm-generated user-adaptivity in everyday systems. Explainable AI aims to make algorithmic decision-making more transparent to humans. As future vehicles become more intelligent and user-adaptive, explainability will play an important role in ensuring that drivers understand the AI system's functionalities and outputs. However, when integrating explainability into in-vehicle features, there is a lack of knowledge about users' needs and requirements and about how to address them. We conducted a study with 59 participants focusing on how end-users evaluate explainability in the context of user-adaptive comfort and infotainment features. Results show that explanations foster perceived understandability and transparency of the system, but that the need for explanation may vary between features. Additionally, we found that insufficiently designed explanations can decrease acceptance of the system. Our findings underline the requirement for a user-centered approach to explainable AI and point to directions for future research.
Original language | English
---|---
Pages (from-to) | 3237-3253
Number of pages | 17
Journal | International Journal of Human-Computer Interaction
Volume | 39
Issue number | 16
DOIs |
State | Published - 2023
Keywords
- Human–AI interaction
- explainable AI
- intelligent vehicles
- user studies
- user-adaptive