TY - JOUR
T1 - Towards Human-Centered Explainable AI: A Survey of User Studies for Model Explanations
AU - Rong, Yao
AU - Leemann, Tobias
AU - Nguyen, Thai Trang
AU - Fiedler, Lisa
AU - Qian, Peizhu
AU - Unhelkar, Vaibhav
AU - Seidel, Tina
AU - Kasneci, Gjergji
AU - Kasneci, Enkelejda
PY - 2024/4/1
Y1 - 2024/4/1
AB - Explainable AI (XAI) is widely viewed as a sine qua non for ever-expanding AI research. A better understanding of the needs of XAI users and human-centered evaluations of explainable models are both a necessity and a challenge. In this paper, we explore how human-computer interaction (HCI) and AI researchers conduct user studies in XAI applications, based on a systematic literature review. After identifying and thoroughly analyzing 97 core papers with human-based XAI evaluations over the past five years, we categorize them along the measured characteristics of explanatory methods, namely trust, understanding, usability, and human-AI collaboration performance. Our research shows that XAI is spreading more rapidly in certain application domains, such as recommender systems, than in others, but that user evaluations are still rather sparse and incorporate hardly any insights from cognitive or social sciences. Based on a comprehensive discussion of best practices, i.e., common models, design choices, and measures in user studies, we propose practical guidelines for designing and conducting user studies for XAI researchers and practitioners. Lastly, this survey also highlights several open research directions, particularly linking psychological science and human-centered XAI.
KW - Explainable AI (XAI)
KW - explainable ML
KW - human-AI interaction
KW - human-centered XAI
KW - user study
UR - http://www.scopus.com/inward/record.url?scp=85177051708&partnerID=8YFLogxK
U2 - 10.1109/TPAMI.2023.3331846
DO - 10.1109/TPAMI.2023.3331846
M3 - Article
AN - SCOPUS:85177051708
SN - 0162-8828
VL - 46
SP - 2104
EP - 2122
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
JF - IEEE Transactions on Pattern Analysis and Machine Intelligence
IS - 4
ER -