TY - GEN
T1 - Exploring the Impact of Explainability on Trust and Acceptance of Conversational Agents – A Wizard of Oz Study
AU - Joshi, Rutuja
AU - Graefe, Julia
AU - Kraus, Michael
AU - Bengler, Klaus
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
PY - 2024
Y1 - 2024
N2 - With advancements in natural language processing and understanding, conversational agents (CAs) have become one of the fundamental modes of human-computer interaction. However, the black-box problem of artificial intelligence (AI) algorithms often results in reduced acceptance of such systems. This calls for transparency and for a justification or rationale of the provided output from the users’ perspective. Explainable artificial intelligence (XAI) provides insights into the algorithms and elucidates their outputs to users, and is thus gaining importance in various applications as a significant contributor to user acceptance of and trust in AI systems. This paper presents a Wizard of Oz user study with a between-subjects design comparing two versions of a vacation planning chatbot (low and high explainability) with 60 participants. The study explored the impact of explainability on users’ understanding, trust and acceptance. The results indicated that the explanations (the between-subjects factor) significantly influenced users’ understanding, trust and acceptance. According to our results, high explainability leads to increased trust in and acceptance of the chatbot.
AB - With advancements in natural language processing and understanding, conversational agents (CAs) have become one of the fundamental modes of human-computer interaction. However, the black-box problem of artificial intelligence (AI) algorithms often results in reduced acceptance of such systems. This calls for transparency and for a justification or rationale of the provided output from the users’ perspective. Explainable artificial intelligence (XAI) provides insights into the algorithms and elucidates their outputs to users, and is thus gaining importance in various applications as a significant contributor to user acceptance of and trust in AI systems. This paper presents a Wizard of Oz user study with a between-subjects design comparing two versions of a vacation planning chatbot (low and high explainability) with 60 participants. The study explored the impact of explainability on users’ understanding, trust and acceptance. The results indicated that the explanations (the between-subjects factor) significantly influenced users’ understanding, trust and acceptance. According to our results, high explainability leads to increased trust in and acceptance of the chatbot.
KW - Chatbots
KW - Conversational Agents
KW - Explainable AI
KW - Human-AI Interaction
KW - Human-Centered Explainable AI
UR - http://www.scopus.com/inward/record.url?scp=85196172269&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-60606-9_12
DO - 10.1007/978-3-031-60606-9_12
M3 - Conference contribution
AN - SCOPUS:85196172269
SN - 9783031606052
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 199
EP - 218
BT - Artificial Intelligence in HCI - 5th International Conference, AI-HCI 2024, Held as Part of the 26th HCI International Conference, HCII 2024, Proceedings
A2 - Degen, Helmut
A2 - Ntoa, Stavroula
PB - Springer Science and Business Media Deutschland GmbH
T2 - 5th International Conference on Artificial Intelligence in HCI, AI-HCI 2024, held as part of the 26th HCI International Conference, HCII 2024
Y2 - 29 June 2024 through 4 July 2024
ER -