Augmenting affect from speech with generative music

Gerhard Johann Hagerer, Michael Lux, Stefan Ehrlich, Gordon Cheng

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In this work we propose a prototype to improve interpersonal communication of emotions. To this end, music is generated on the fly with the same affect as the accompanying human speech. Emotions detected in speech are conveyed to the music according to music-psychological rules. We discuss existing evaluated modules for affective generative music and speech emotion detection, as well as use cases, emotional models, and planned evaluations.
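The abstract (and the Circumplex model keyword below) suggests that detected speech affect is represented as valence and arousal coordinates and then mapped to musical parameters. The paper's actual mapping is not given here; the following Python sketch is purely illustrative, and all names, parameter ranges, and rules in it are assumptions rather than the authors' system.

```python
from dataclasses import dataclass

# Hypothetical mapping from circumplex-model affect (valence, arousal)
# to generative-music parameters. Ranges and heuristics below are
# illustrative assumptions, not the mapping used in the paper.

@dataclass
class MusicParams:
    tempo_bpm: float  # higher arousal -> faster tempo
    mode: str         # positive valence -> major, negative -> minor
    loudness: float   # 0..1, scaled with arousal
    register: int     # MIDI base note; brighter register for higher valence

def affect_to_music(valence: float, arousal: float) -> MusicParams:
    """Map affect coordinates in [-1, 1] to musical parameters."""
    # Clamp inputs to the unit square of the circumplex model.
    v = max(-1.0, min(1.0, valence))
    a = max(-1.0, min(1.0, arousal))

    tempo = 90.0 + 50.0 * a                # 40..140 BPM (assumed range)
    mode = "major" if v >= 0 else "minor"  # common music-psychology heuristic
    loudness = 0.5 + 0.4 * a               # louder music for aroused speech
    register = 60 + int(12 * v)            # C4 shifted up to one octave by valence

    return MusicParams(tempo, mode, loudness, register)

if __name__ == "__main__":
    # Excited, happy speech -> fast, loud, major-mode music.
    print(affect_to_music(valence=0.8, arousal=0.7))
    # Calm, sad speech -> slow, quiet, minor-mode music.
    print(affect_to_music(valence=-0.6, arousal=-0.5))
```

In a real-time system like the proposed prototype, a function of this shape would be called on each window of speech emotion estimates, so the music tracks the speaker's affect as the conversation unfolds.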

Original language: English
Title of host publication: CHI 2015 - Extended Abstracts Publication of the 33rd Annual CHI Conference on Human Factors in Computing Systems
Subtitle of host publication: Crossings
Publisher: Association for Computing Machinery
Pages: 977-982
Number of pages: 6
ISBN (Electronic): 9781450331463
State: Published - 18 Apr 2015
Event: 33rd Annual CHI Conference on Human Factors in Computing Systems, CHI EA 2015 - Seoul, Korea, Republic of
Duration: 18 Apr 2015 - 23 Apr 2015

Publication series

Name: Conference on Human Factors in Computing Systems - Proceedings
Volume: 18

Conference

Conference: 33rd Annual CHI Conference on Human Factors in Computing Systems, CHI EA 2015
Country/Territory: Korea, Republic of
City: Seoul
Period: 18/04/15 - 23/04/15

Keywords

  • Affective computing
  • Circumplex model
  • Emotion recognition
  • Generative music
  • Speech analysis
