Towards mapping timbre to emotional affect

Niklas Klügel, Georg Groh

Research output: Contribution to journal › Conference article › peer-review

Abstract

Controlling the timbre generated by an audio synthesizer in a goal-oriented way requires a profound understanding of the synthesizer's manifold structural parameters. Shaping timbre expressively to communicate emotional affect requires particular expertise. Novices, therefore, may not be able to control timbre adequately enough to articulate the wealth of affects musically. In this context, the focus of this paper is the development of a model that can represent a relationship between timbre and an expected emotional affect. The results of the evaluation of the presented model are encouraging and thus support its use in steering or augmenting the control of audio synthesis. We explicitly envision this paper as a contribution to the field of Synthesis by Analysis in the broader sense, albeit potentially suitable for other related domains.
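
The abstract does not describe the model itself; the keywords point to Deep Belief Networks and Machine Learning. Purely as an illustrative sketch (not the authors' implementation), a timbre-to-affect mapping of this kind could stack unsupervised feature learners over timbre descriptors and regress onto a valence–arousal target space. All feature choices, layer sizes, and the valence–arousal targets below are assumptions for illustration only.

```python
# Hypothetical sketch: timbre descriptors -> stacked RBM features -> affect.
# Data, features, and targets are placeholders, not from the paper.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import Ridge
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)

# Placeholder data: rows are sounds, columns are timbre descriptors
# (e.g. spectral centroid, flux, MFCCs); targets are valence/arousal in [-1, 1].
X = rng.random((200, 20))          # timbre descriptors
y = rng.uniform(-1, 1, (200, 2))   # [valence, arousal] annotations

# Two stacked RBMs stand in for DBN-style unsupervised feature learning;
# a linear read-out maps the learned code to the affect coordinates.
model = Pipeline([
    ("scale", MinMaxScaler()),      # RBMs expect inputs in [0, 1]
    ("rbm1", BernoulliRBM(n_components=64, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=32, random_state=0)),
    ("readout", Ridge(alpha=1.0)),
])
model.fit(X, y)

# Predict the expected affect of new sounds from their timbre descriptors.
print(model.predict(X[:3]))
```

Such a pipeline could then be inverted or searched over to steer synthesis parameters toward a desired affect, which is the use case the abstract motivates.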

Original language: English
Pages (from-to): 525-530
Number of pages: 6
Journal: Proceedings of the International Conference on New Interfaces for Musical Expression
State: Published - 2013
Event: 13th International Conference on New Interfaces for Musical Expression, NIME 2013 - Daejeon, Korea, Republic of
Duration: 27 May 2013 - 30 May 2013

Keywords

  • Analysis by Synthesis
  • Deep Belief Networks
  • Emotional affect
  • Machine Learning
  • Timbre
