Augmenting affect from speech with generative music

Gerhard Johann Hagerer, Michael Lux, Stefan Ehrlich, Gordon Cheng

Publication: Contribution to book/report/conference proceedings › Conference paper › Peer-reviewed

6 citations (Scopus)

Abstract

In this work we propose a prototype to improve the interpersonal communication of emotions. To this end, music is generated on the fly with the same affect as the accompanying human speech: emotions in speech are detected and conveyed to music according to music-psychological rules. Existing evaluated modules for affective generative music and speech emotion detection, as well as use cases, emotional models, and projected evaluations, are discussed.
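The abstract describes a pipeline that maps detected speech affect onto music parameters via music-psychological rules. A minimal sketch of such a mapping is given below, assuming a valence/arousal affect representation in [-1, 1]; the function name, the parameter ranges, and the specific rules (positive valence → major mode, higher arousal → faster tempo and louder dynamics) are illustrative heuristics, not the mapping used in the paper.

```python
# Hypothetical sketch: map a valence/arousal estimate from speech emotion
# detection to coarse generative-music parameters. The rules below follow
# common music-psychology heuristics and are illustrative only.

def affect_to_music(valence: float, arousal: float) -> dict:
    """Map valence/arousal in [-1, 1] to tempo, mode, and loudness."""
    tempo_bpm = 60 + 60 * (arousal + 1) / 2      # 60-120 BPM, rising with arousal
    mode = "major" if valence >= 0 else "minor"  # mode follows the sign of valence
    loudness = 0.3 + 0.7 * (arousal + 1) / 2     # normalized gain, louder when aroused
    return {"tempo_bpm": tempo_bpm, "mode": mode, "loudness": round(loudness, 2)}

print(affect_to_music(0.6, 0.8))    # happy, excited speech -> fast, major, loud
print(affect_to_music(-0.5, -0.4))  # sad, calm speech -> slower, minor, softer
```

In a real system these parameters would be fed continuously to the generative-music module so that the musical affect tracks the speaker's emotional state.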

Original language: English
Title: CHI 2015 - Extended Abstracts Publication of the 33rd Annual CHI Conference on Human Factors in Computing Systems
Subtitle: Crossings
Publisher: Association for Computing Machinery
Pages: 977-982
Number of pages: 6
ISBN (electronic): 9781450331463
DOIs
Publication status: Published - 18 Apr 2015
Event: 33rd Annual CHI Conference on Human Factors in Computing Systems, CHI EA 2015 - Seoul, South Korea
Duration: 18 Apr 2015 - 23 Apr 2015

Publication series

Name: Conference on Human Factors in Computing Systems - Proceedings
Volume: 18

Conference

Conference: 33rd Annual CHI Conference on Human Factors in Computing Systems, CHI EA 2015
Country/Territory: South Korea
City: Seoul
Period: 18/04/15 - 23/04/15
