What People Think AI Should Infer From Faces

Severin Engelmann, Chiara Ullstein, Orestis Papakyriakopoulos, Jens Grossklags

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

17 Scopus citations

Abstract

Faces play an indispensable role in human social life. At present, computer vision artificial intelligence (AI) captures and interprets human faces for a variety of digital applications and services. The ambiguity of facial information has recently led to a debate among scholars in different fields about the types of inferences AI should make about people based on their facial looks. AI research often justifies facial AI inference-making by referring to how people form impressions in first-encounter scenarios. Critics raise concerns about bias and discrimination and warn that facial analysis AI resembles an automated version of physiognomy. What has been missing from this debate, however, is an understanding of how "non-experts" in AI ethically evaluate facial AI inference-making. In a two-scenario vignette study with 24 treatment groups, we show that non-experts (N = 3745) reject facial AI inferences such as trustworthiness and likability from portrait images in a low-stake advertising and a high-stake hiring context. In contrast, non-experts agree with facial AI inferences such as skin color or gender in the advertising but not the hiring decision context. For each AI inference, we ask non-experts to justify their evaluation in a written response. Analyzing 29,760 written justifications, we find that non-experts are either "evidentialists" or "pragmatists": they assess the ethical status of a facial AI inference based on whether they think faces warrant sufficient or insufficient evidence for an inference (evidentialist justification) or whether making the inference results in beneficial or detrimental outcomes (pragmatist justification). Non-experts' justifications underscore the normative complexity behind facial AI inference-making. AI inferences with insufficient evidence can be rationalized by considerations of relevance while irrelevant inferences can be justified by reference to sufficient evidence. We argue that participatory approaches contribute valuable insights for the development of ethical AI in an increasingly visual data culture.

Original language: English
Title of host publication: Proceedings of 2022 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022
Publisher: Association for Computing Machinery
Pages: 128-141
Number of pages: 14
ISBN (Electronic): 9781450393522
DOIs
State: Published - 21 Jun 2022
Event: 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022 - Virtual, Online, Korea, Republic of
Duration: 21 Jun 2022 - 24 Jun 2022

Publication series

Name: ACM International Conference Proceeding Series

Conference

Conference: 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022
Country/Territory: Korea, Republic of
City: Virtual, Online
Period: 21/06/22 - 24/06/22

Keywords

  • artificial intelligence
  • computer vision
  • human faces
  • participatory AI ethics
