Can You Tell Real from Fake Face Images? Perception of Computer-Generated Faces by Humans

Efe Bozkir, Clara Riedmiller, Athanassios N. Skodras, Gjergji Kasneci, Enkelejda Kasneci

Research output: Contribution to journal › Article › peer-review

Abstract

With recent advances in machine learning and big data, it is now possible to create synthetic images that look real. Face generation is of particular interest, as faces can be used for many purposes. However, improper use of such content can lead to the dissemination of false information, such as fake news, and thus pose a threat to society. This work studies whether people can judge the truthfulness of real and computer-generated faces, using eye tracking and self-reports, including free-form textual explanations, collected while participants viewed the images. We used three different datasets for our evaluations. Our experimental results show that people are relatively good at identifying the truthfulness of real faces and of faces generated by earlier machine learning algorithms, exhibiting different gaze behaviors in the viewing and rating phases, but they are less accurate when judging the truthfulness of synthetic face images generated by newer algorithms. Our findings provide important insights for society and policymakers.

Original language: English
Article number: 6
Journal: ACM Transactions on Applied Perception
Volume: 22
Issue number: 2
State: Published - 30 Nov 2024

Keywords

  • Computer-generated images
  • Eye movements
  • Eye tracking
  • Generative adversarial networks
  • Human behavior
  • Human-AI interaction
