Multi-view domain-adaptive representation learning for EEG-based emotion recognition

Chao Li, Ning Bian, Ziping Zhao, Haishuai Wang, Björn W. Schuller

Research output: Contribution to journal › Article › peer-review

12 Scopus citations

Abstract

Current research suggests that EEG-based emotion recognition faces certain limitations, including redundant and uninformative time frames and channels, as well as inter- and intra-individual differences in the EEG signals of different subjects. To address these limitations, a Cross-Attention-based Dilated Causal Convolutional Neural Network with Domain Discriminator (CADD-DCCNN) for multi-view EEG-based emotion recognition is proposed to minimize individual differences and automatically learn more discriminative emotion-related features. First, differential entropy (DE) features are extracted from the raw EEG signals using the short-time Fourier transform (STFT). Second, each channel of the DE features is treated as a view, and attention mechanisms are applied to the individual views to aggregate discriminative affective information at the time-frame level. Then, a dilated causal convolutional neural network is employed to distill nonlinear relationships among different time frames. Next, feature-level fusion combines the features from multiple channels, aiming to exploit potential complementary information among the views and enhance the representational power of the features. Finally, to minimize individual differences, a domain discriminator is employed to generate domain-invariant features by projecting data from the different domains into a shared representation space. We evaluated the proposed method on two public datasets, SEED and DEAP. The experimental results show that CADD-DCCNN outperforms state-of-the-art (SOTA) methods.
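As a rough illustration of the feature-extraction step, the sketch below computes per-band differential entropy from a single EEG frame. It assumes the band signal is approximately Gaussian, under which DE reduces to the closed form 0.5·ln(2πeσ²), and it isolates bands with a rectangular FFT mask for simplicity; the band names, band edges, and this crude filtering are illustrative assumptions, not the paper's exact STFT preprocessing.

```python
import numpy as np

def differential_entropy(x):
    """DE of a signal modelled as Gaussian: 0.5 * ln(2*pi*e*sigma^2)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

def band_de_features(frame, fs, bands):
    """Per-band DE of one time frame.

    `bands` maps a name to (low_hz, high_hz). Band isolation here is a
    rectangular mask on the FFT spectrum -- an illustrative stand-in for
    the paper's STFT-based pipeline, not its exact preprocessing.
    """
    spectrum = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    feats = {}
    for name, (lo, hi) in bands.items():
        masked = np.where((freqs >= lo) & (freqs < hi), spectrum, 0.0)
        band_signal = np.fft.irfft(masked, n=len(frame))
        feats[name] = differential_entropy(band_signal)
    return feats

# Example: one second of synthetic "EEG" at 200 Hz dominated by a 10 Hz
# (alpha-band) oscillation plus weak noise; the alpha-band DE should then
# exceed the gamma-band DE.
fs = 200
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(fs)
bands = {"theta": (4, 8), "alpha": (8, 14), "beta": (14, 31), "gamma": (31, 50)}
feats = band_de_features(x, fs, bands)
```

One DE value per band per channel per frame yields the multi-view representation the abstract describes, with each channel's DE sequence forming one view for the subsequent cross-attention and dilated causal convolution stages.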

Original language: English
Article number: 102156
Journal: Information Fusion
Volume: 104
DOIs
State: Published - Apr 2024
Externally published: Yes

Keywords

  • Affective computing
  • Cross-attention
  • Domain adaptation
  • EEG
  • Emotion recognition
  • Multi-view learning

