'Mister D.j., Cheer me Up!': Musical and textual features for automatic mood classification

Björn Schuller, Clemens Hagel, Dagmar Schuller, Gerhard Rigoll

Research output: Contribution to journal › Article › peer-review

29 Scopus citations

Abstract

Mass consumption of large collections of digital music calls for efficient and intuitive ways of organization. In this article, a system is presented which recognizes the evoked music mood on the basis of a wide variety of features, sticking closely to real-world conditions. A two-dimensional mood model is discussed in which moods correspond to binary values for arousal and valence, and an easy and thus user-friendly method is presented through which a fuzzy seven-class mood cluster is deduced. The songs of the 'Twenty Years of MTV Europe Most Wanted' music database, consisting of recorded pop music tracks, serve for the evaluation of three groups of features: firstly, traditional features such as rhythm and tonal features, zero crossing rate, cepstral features, and MPEG-7 Low Level Descriptors for audio content are extracted. Secondly, lyrics, chord sequences, and genre data are obtained from online sources. Thirdly, from all of these, the high-level features musical mode and, as a novel feature, the suited ballroom dance style are derived automatically. The features selected are data-driven, and Support Vector Machines are used for classification. Prediction accuracies of 77.4% for arousal and 72.9% for valence, as well as 71.8% (including neighbours) for the seven-class cluster model, are obtained, preserving realism in terms of non-prototypical music selection and feature extraction throughout.
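The two-dimensional mood model described above can be sketched as follows: each song receives a binary arousal and a binary valence prediction (from per-dimension classifiers such as the SVMs the abstract mentions), and the pair of predictions selects a coarse mood region. This is a minimal stdlib-only sketch; the hard-coded predictions and the quadrant names are illustrative placeholders, not the article's own cluster labels.

```python
# Sketch (assumed, not the authors' pipeline): combine binary
# arousal/valence classifier outputs into a coarse mood quadrant.
# Quadrant names are hypothetical stand-ins for the paper's clusters.

QUADRANTS = {
    (1, 1): "excited/happy",   # high arousal, positive valence
    (1, 0): "angry/anxious",   # high arousal, negative valence
    (0, 1): "calm/content",    # low arousal, positive valence
    (0, 0): "sad/bored",       # low arousal, negative valence
}

def mood_quadrant(arousal: int, valence: int) -> str:
    """Map a pair of binary predictions to a mood quadrant label."""
    return QUADRANTS[(arousal, valence)]

# Stand-in for the outputs of two per-dimension classifiers:
print(mood_quadrant(1, 1))  # excited/happy
print(mood_quadrant(0, 0))  # sad/bored
```

In the article itself, the two binary dimensions are refined further into a fuzzy seven-class cluster model; the quadrant mapping here only illustrates how independent arousal and valence decisions compose into a joint mood label.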

Original language: English
Pages (from-to): 13-34
Number of pages: 22
Journal: Journal of New Music Research
Volume: 39
Issue number: 1
State: Published - Mar 2010

