Abstract
Controlling the timbre generated by an audio synthesizer in a goal-oriented way requires a profound understanding of the synthesizer’s many structural parameters. Shaping timbre expressively to communicate emotional affect demands particular expertise. Novices, therefore, may be unable to control timbre adequately when articulating the wealth of affects musically. In this context, the focus of this paper is the development of a model that can represent a relationship between timbre and an expected emotional affect. The results of the evaluation of the presented model are encouraging and thus support its use in steering or augmenting the control of audio synthesis. We explicitly envision this paper as a contribution to the field of Synthesis by Analysis in the broader sense, albeit potentially suitable for other related domains.
| Original language | English |
|---|---|
| Pages (from-to) | 525-530 |
| Number of pages | 6 |
| Journal | Proceedings of the International Conference on New Interfaces for Musical Expression |
| State | Published - 2013 |
| Event | 13th International Conference on New Interfaces for Musical Expression, NIME 2013 - Daejeon, Republic of Korea. Duration: 27 May 2013 → 30 May 2013 |
Keywords
- Analysis by Synthesis
- Deep Belief Networks
- Emotional affect
- Machine Learning
- Timbre