Abstract
EEG-based emotion recognition faces several well-documented limitations, including redundant and uninformative time frames and channels, as well as inter- and intra-individual variability in EEG signals across subjects. To address these limitations, a Cross-Attention-based Dilated Causal Convolutional Neural Network with Domain Discriminator (CADD-DCCNN) for multi-view EEG-based emotion recognition is proposed, which minimizes individual differences and automatically learns more discriminative emotion-related features. First, differential entropy (DE) features are extracted from the raw EEG signals via the short-time Fourier transform (STFT). Second, each channel of the DE features is treated as a view, and attention mechanisms are applied within each view to aggregate discriminative affective information at the time-frame level. Then, a dilated causal convolutional neural network is employed to distill nonlinear relationships among different time frames. Next, feature-level fusion combines the features from multiple channels, exploiting potential complementary information among views and enhancing the representational power of the features. Finally, to minimize individual differences, a domain discriminator is employed to generate domain-invariant features, projecting data from the different domains into a shared representation space. We evaluated the proposed method on two public datasets, SEED and DEAP. The experimental results show that CADD-DCCNN outperforms state-of-the-art (SOTA) methods.
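As a rough illustration of the first stage, the sketch below computes DE features from STFT power. It assumes the standard five EEG frequency bands and approximately Gaussian band-limited signals (under which DE reduces to 0.5·log(2πeσ²)); the paper's exact sampling rate, window length, overlap, and band edges are not given in the abstract, so `fs`, `win_sec`, and `BANDS` here are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from scipy.signal import stft

# Hypothetical band definitions (standard EEG bands); the paper's exact
# STFT parameters and band edges are not stated in the abstract.
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}

def de_features(eeg, fs=200, win_sec=1.0):
    """Differential entropy per channel, band, and time frame.

    eeg: (n_channels, n_samples) raw signal.
    Returns: (n_channels, n_bands, n_frames) DE features.
    Assumes band-limited EEG is approximately Gaussian, so
    DE = 0.5 * log(2 * pi * e * sigma^2), with sigma^2 estimated
    from the mean STFT power in each band.
    """
    nper = int(fs * win_sec)
    f, _, Z = stft(eeg, fs=fs, nperseg=nper, noverlap=0)
    psd = np.abs(Z) ** 2                       # (n_ch, n_freqs, n_frames)
    feats = []
    for lo, hi in BANDS.values():
        idx = (f >= lo) & (f < hi)
        var = psd[:, idx, :].mean(axis=1)      # variance proxy per frame
        feats.append(0.5 * np.log(2 * np.pi * np.e * var + 1e-12))
    return np.stack(feats, axis=1)
```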
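The remaining stages can be sketched in PyTorch as below. This is a minimal sketch, not the authors' implementation: standard scaled dot-product self-attention over time frames stands in for the paper's cross-attention (whose exact formulation the abstract does not give), the domain discriminator is realized with a gradient reversal layer (a common domain-adversarial technique, assumed here), and `n_views`, `n_classes`, `n_domains`, and the dilation schedule are illustrative values loosely based on SEED (62 channels, 5 DE bands, 3 emotion classes).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity forward, negated gradient backward,
    so the feature extractor learns domain-invariant representations."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

class CausalConv1d(nn.Module):
    """Dilated causal convolution: left-padding keeps each output
    frame dependent only on current and past frames."""
    def __init__(self, channels, kernel=3, dilation=1):
        super().__init__()
        self.left_pad = (kernel - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel, dilation=dilation)

    def forward(self, x):                       # x: (B, C, T)
        return F.relu(self.conv(F.pad(x, (self.left_pad, 0))))

class EmotionNet(nn.Module):
    """Per-view attention over time frames -> dilated causal convs ->
    feature-level fusion across views -> emotion classifier, plus a
    gradient-reversed domain discriminator on the fused feature."""
    def __init__(self, n_views=62, d=5, n_classes=3, n_domains=15):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, num_heads=1, batch_first=True)
        self.tcn = nn.Sequential(CausalConv1d(d, dilation=1),
                                 CausalConv1d(d, dilation=2),
                                 CausalConv1d(d, dilation=4))
        self.cls = nn.Linear(n_views * d, n_classes)
        self.dom = nn.Linear(n_views * d, n_domains)

    def forward(self, x, lam=1.0):              # x: (B, n_views, T, d)
        B, V, T, d = x.shape
        v = x.reshape(B * V, T, d)
        v, _ = self.attn(v, v, v)               # attend over time frames
        v = self.tcn(v.transpose(1, 2)).mean(-1)   # (B*V, d) temporal summary
        fused = v.reshape(B, V * d)             # feature-level fusion of views
        return self.cls(fused), self.dom(GradReverse.apply(fused, lam))
```

In a domain-adversarial setup like this, both heads are trained with cross-entropy; because of the reversal layer, minimizing the domain loss pushes the fused features toward being indistinguishable across subjects while the emotion head stays discriminative.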
| Original language | English |
| --- | --- |
| Article number | 102156 |
| Journal | Information Fusion |
| Volume | 104 |
| DOIs | |
| State | Published - Apr 2024 |
| Externally published | Yes |
Keywords
- Affective computing
- Cross-attention
- Domain adaptation
- EEG
- Emotion recognition
- Multi-view learning