Abstract
In this contribution, we discuss a prototype that allows a group of users to design sound collaboratively in real time using a multi-touch tabletop. We use a machine learning method to generate a mapping from perceptual audio features to synthesis parameters; this mapping then drives both visualization and interaction. Finally, we discuss the results of a comparative evaluation study.
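The abstract describes learning a mapping from perceptual audio features to synthesis parameters. The paper itself does not specify the method in this record, so the following is only a minimal illustrative sketch: it assumes a toy linear "synthesizer" whose feature extraction is known, samples random parameter settings, and fits the inverse mapping (features → parameters) with ordinary least squares. All names (`extract_features`, the matrix `A`) are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for rendering audio with given synthesis
# parameters and extracting perceptual features from the result.
# Here it is simply a fixed linear transform.
def extract_features(params):
    A = np.array([[0.8, 0.2],
                  [0.1, 0.9]])
    return params @ A.T

# Sample random parameter settings and compute their features.
params = rng.uniform(0.0, 1.0, size=(200, 2))
feats = extract_features(params)

# Fit the inverse mapping features -> parameters by least squares.
W, *_ = np.linalg.lstsq(feats, params, rcond=None)

# Given a desired feature target, predict parameters producing it.
target_feats = np.array([0.5, 0.4])
predicted_params = target_feats @ W
print(predicted_params)
```

In an interface like the one described, such an inverse mapping lets users specify a point in perceptual feature space (e.g. on the tabletop surface) and have the system recover plausible synthesis parameters for it.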
| Original language | English |
|---|---|
| Pages (from-to) | 327-330 |
| Number of pages | 4 |
| Journal | Proceedings of the International Conference on New Interfaces for Musical Expression |
| State | Published - 2014 |
| Event | 14th International Conference on New Interfaces for Musical Expression, NIME 2014, London, United Kingdom, 30 Jun 2014 – 4 Jul 2014 |
Keywords
- Collaborative Music Making
- Sound Design