What we talk about when we talk about trust: Theory of trust for AI in healthcare

Felix Gille, Anna Jobin, Marcello Ienca

Research output: Contribution to journal › Article › peer-review

83 Scopus citations

Abstract

Artificial intelligence (AI) is at the forefront of innovation in medicine. Researchers and AI developers have often claimed that “trust” is a critical determinant of the successful adoption of AI in medicine. Despite the pivotal role of trust and the emergence of an array of expert-informed guidelines on how to design and implement “trustworthy AI” in medicine, we found little common understanding across these guidelines of what constitutes user trust in AI and what the requirements are for its realization. In this article, we call for a conceptual framework of trust in health-related AI that is based not just on expert opinion, but first and foremost on sound empirical research and conceptual rigor. Only with a well-grounded and comprehensive understanding of the trust construct will we be able to inform AI design and acceptance in medicine in a meaningful way.

Original language: English
Article number: 100001
Journal: Intelligence-Based Medicine
Volume: 1-2
DOIs
State: Published - Nov 2020
Externally published: Yes

Keywords

  • Artificial intelligence
  • Ethics
  • Healthcare
  • Policy guidelines
  • Trust theory
