Automatic vocalisation-based detection of fragile X syndrome and Rett syndrome

Florian B. Pokorny, Maximilian Schmitt, Mathias Egger, Katrin D. Bartl-Pokorny, Dajie Zhang, Björn W. Schuller, Peter B. Marschik

Research output: Contribution to journal › Article › peer-review


Abstract

Fragile X syndrome (FXS) and Rett syndrome (RTT) are developmental disorders currently not diagnosed before toddlerhood. Even though speech-language deficits are among the key symptoms of both conditions, little is known about how infant vocalisation acoustics could support automatic earlier identification of affected individuals. To bridge this gap, we applied intelligent audio analysis methodology to a compact dataset of 4454 home-recorded vocalisations of 3 individuals with FXS and 3 individuals with RTT aged 6 to 11 months, as well as 6 age- and gender-matched typically developing controls (TD). On the basis of a standardised set of 88 acoustic features, we trained linear-kernel support vector machines to evaluate the feasibility of automatically classifying (a) FXS vs TD, (b) RTT vs TD, (c) atypical development (FXS+RTT) vs TD, and (d) FXS vs RTT vs TD. In paradigms (a)–(c), all infants were correctly classified; in paradigm (d), 9 of the 12 infants were correctly classified. Spectral/cepstral and energy-related features proved most relevant for classification across all paradigms. Despite the small sample size, this study reveals new insights into early vocalisation characteristics in FXS and RTT, and provides technical underpinnings for a future earlier identification of affected individuals, enabling earlier intervention and family counselling.
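The classification setup described above can be illustrated with a minimal sketch: vocalisation-level feature vectors (88 acoustic features per vocalisation, as in a standardised set such as eGeMAPS) are fed to a linear-kernel SVM, and an infant-level decision is obtained by aggregating predictions over that infant's vocalisations. This is not the authors' code; the synthetic feature values, the majority-vote aggregation, and the `classify_infant` helper are illustrative assumptions only.

```python
# Illustrative sketch (NOT the study's implementation): linear-kernel SVM on
# 88 acoustic features per vocalisation, with a majority vote per infant.
# Feature values are synthetic stand-ins for extracted acoustic features.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_features = 88  # size of the standardised acoustic feature set

# Synthetic vocalisation-level features for two hypothetical groups
# (assumed label coding: 0 = typically developing, 1 = atypical development)
X_td = rng.normal(0.0, 1.0, size=(200, n_features))
X_at = rng.normal(0.5, 1.0, size=(200, n_features))
X = np.vstack([X_td, X_at])
y = np.array([0] * 200 + [1] * 200)

# Linear-kernel SVM; standardising features first is common practice
clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
clf.fit(X, y)

def classify_infant(vocalisation_features, model):
    """Infant-level decision by majority vote over vocalisation predictions
    (an assumed aggregation scheme for this sketch)."""
    votes = model.predict(vocalisation_features)
    return int(np.round(votes.mean()))

# Classify a hypothetical new infant represented by 50 vocalisations
new_infant = rng.normal(0.5, 1.0, size=(50, n_features))
print(classify_infant(new_infant, clf))
```

In the actual study, leave-one-out style evaluation at the individual level would be needed given only 12 infants; the sketch omits this for brevity.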

Original language: English
Article number: 13345
Journal: Scientific Reports
Volume: 12
Issue number: 1
DOIs
State: Published - Dec 2022
Externally published: Yes
