TY - JOUR
T1 - Detecting somatisation disorder via speech
T2 - introducing the Shenzhen Somatisation Speech Corpus
AU - Qian, Kun
AU - Huang, Ruolan
AU - Bao, Zhihao
AU - Tan, Yang
AU - Zhao, Zhonghao
AU - Sun, Mengkai
AU - Hu, Bin
AU - Schuller, Björn W.
AU - Yamamoto, Yoshiharu
N1 - Publisher Copyright:
© 2023
PY - 2024/5
Y1 - 2024/5
AB - Objective: Speech recognition technology is widely used as a mature technical approach in many fields. In the study of depression recognition, speech signals are commonly used due to their convenience and ease of acquisition. Although speech analysis is popular in depression recognition research, it has scarcely been studied for somatisation disorder recognition, owing to the lack of a publicly accessible database of relevant speech and of benchmark studies. To this end, we introduce a somatisation disorder speech database and report benchmark results. Methods: In cooperation with Shenzhen University General Hospital, we collected speech samples from patients with somatisation disorder and built the Shenzhen Somatisation Speech Corpus (SSSC). We further propose a benchmark for SSSC using classic acoustic features and machine learning models. Results: To obtain a more rigorous benchmark, we compared and analysed the performance of different acoustic features, i.e., the full ComParE feature set, or only Mel frequency cepstral coefficients (MFCCs), fundamental frequency (F0), and the frequencies and bandwidths of the formants (F1–F3). The best result of our benchmark was a 76.0% unweighted average recall, achieved by a support vector machine with formant features (F1–F3). Conclusion: SSSC may bridge a research gap in somatisation disorder by providing researchers with a publicly accessible speech database. In addition, the benchmark results demonstrate the scientific validity and feasibility of computer audition for recognising somatisation disorder from speech.
KW - Computer audition
KW - Healthcare
KW - Machine learning
KW - Somatisation disorder
UR - http://www.scopus.com/inward/record.url?scp=85195095849&partnerID=8YFLogxK
U2 - 10.1016/j.imed.2023.03.001
DO - 10.1016/j.imed.2023.03.001
M3 - Article
AN - SCOPUS:85195095849
SN - 2096-9376
VL - 4
SP - 96
EP - 103
JO - Intelligent Medicine
JF - Intelligent Medicine
IS - 2
ER -
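
Note: the Results field above describes a pipeline of classic acoustic features fed to a support vector machine, scored by unweighted average recall (UAR). The following Python is a minimal illustrative sketch of how such a benchmark could be assembled with librosa and scikit-learn; it is not the authors' code, and wav_paths, labels, and all hyperparameters are hypothetical assumptions, since the record does not specify the SSSC data layout.

# Minimal sketch of an MFCC + SVM benchmark scored by UAR.
# wav_paths and labels are assumed inputs (hypothetical): one label
# per recording, e.g. 0 = control, 1 = somatisation disorder.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

def mfcc_features(wav_path, sr=16000, n_mfcc=13):
    """Mean-pool frame-level MFCCs into one utterance-level vector."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

X = np.stack([mfcc_features(p) for p in wav_paths])
y = np.asarray(labels)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = SVC(kernel="linear", C=1.0).fit(X_tr, y_tr)

# Unweighted average recall (UAR) is macro-averaged recall over classes,
# which is robust to class imbalance in clinical corpora.
uar = recall_score(y_te, clf.predict(X_te), average="macro")
print(f"UAR: {uar:.3f}")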