Facial Feature Enhancement for Immersive Real-Time Avatar-Based Sign Language Communication Using Personalized CNNs

Kristoffer Waldow, Arnulph Fuhrmann, Daniel Roth

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Facial expression recognition is crucial in sign language communication. Especially for virtual reality and avatar-based communication, enhanced facial features have the potential to better integrate the deaf and hard-of-hearing community by improving speech comprehension and empathy. However, current methods lack precision in capturing nuanced expressions. To address this, we present a real-time solution that uses personalized Convolutional Neural Networks (CNNs) to capture intricate facial details, such as tongue movement and individually puffed cheeks. Our system's classification models allow easy expansion and integration into existing facial recognition systems via UDP network broadcasting.
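The abstract states that classification results are distributed to existing facial recognition systems via UDP network broadcasting. A minimal sketch of that idea is shown below; the JSON message schema, the `broadcast_expression` helper, and the port and address defaults are illustrative assumptions, not the authors' actual protocol.

```python
import json
import socket

def broadcast_expression(label: str, confidence: float,
                         port: int = 9000,
                         addr: str = "255.255.255.255") -> bytes:
    """Serialize one classified facial expression and send it as a UDP datagram.

    `label` and `confidence` would come from the personalized CNN classifier;
    downstream avatar systems can listen on the chosen port and apply the
    expression to the avatar's face rig.
    """
    payload = json.dumps({"expression": label,
                          "confidence": confidence}).encode("utf-8")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # SO_BROADCAST is required to send to the limited-broadcast address.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(payload, (addr, port))
    sock.close()
    return payload
```

Because each expression is a small, self-describing datagram, new classifier outputs (e.g. a new tongue-movement class) can be added without changing the receivers, which matches the paper's claim of easy expansion.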

Original language: English
Title of host publication: Proceedings - 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops, VRW 2024
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 919-920
Number of pages: 2
ISBN (Electronic): 9798350374490
DOIs
State: Published - 2024
Externally published: Yes
Event: 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops, VRW 2024 - Orlando, United States
Duration: 16 Mar 2024 to 21 Mar 2024

Publication series

Name: Proceedings - 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops, VRW 2024

Conference

Conference: 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops, VRW 2024
Country/Territory: United States
City: Orlando
Period: 16/03/24 to 21/03/24

Keywords

  • [Human-centered computing]: Accessibility - Accessibility technologies
  • [Computing methodologies]: Machine learning - Machine learning approaches
