Improving EEG-based Emotion Recognition by Fusing Time-frequency And
Spatial Representations
- URL: http://arxiv.org/abs/2303.11421v1
- Date: Tue, 14 Mar 2023 07:26:51 GMT
- Title: Improving EEG-based Emotion Recognition by Fusing Time-frequency And
Spatial Representations
- Authors: Kexin Zhu, Xulong Zhang, Jianzong Wang, Ning Cheng, Jing Xiao
- Abstract summary: We propose a classification network based on the cross-domain feature fusion method.
We also propose a two-step fusion method and apply these methods to the EEG emotion recognition network.
Experimental results show that our proposed network, which combines multiple representations in the time-frequency domain and spatial domain, outperforms previous methods on public datasets.
- Score: 29.962519978925236
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Using deep learning methods to classify EEG signals can accurately identify
people's emotions. However, existing studies have rarely considered applying
information from another domain's representations to feature selection in the
time-frequency domain. We propose a classification network of
EEG signals based on the cross-domain feature fusion method, which makes the
network more focused on the features most related to brain activities and
thinking changes by using the multi-domain attention mechanism. In addition, we
propose a two-step fusion method and apply these methods to the EEG emotion
recognition network. Experimental results show that our proposed network, which
combines multiple representations in the time-frequency domain and spatial
domain, outperforms previous methods on public datasets and achieves
state-of-the-art performance.
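As a rough illustration of the idea, the sketch below shows one way a two-step, attention-gated fusion of time-frequency and spatial feature streams could look in PyTorch. The layer sizes, gating design, and class count are assumptions for illustration, not the authors' architecture.
```python
# Minimal sketch of cross-domain attention fusion (not the authors' code).
# Assumes time-frequency features `tf` and spatial features `sp` have already
# been extracted per EEG segment; dimensions are illustrative.
import torch
import torch.nn as nn

class CrossDomainFusion(nn.Module):
    def __init__(self, dim=128, n_classes=3):
        super().__init__()
        # Step 1: each domain is re-weighted by a gate conditioned on both domains.
        self.gate_tf = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.gate_sp = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        # Step 2: fuse the re-weighted streams and classify.
        self.classifier = nn.Linear(2 * dim, n_classes)

    def forward(self, tf, sp):
        joint = torch.cat([tf, sp], dim=-1)   # (batch, 2*dim)
        tf = tf * self.gate_tf(joint)         # emphasize relevant TF features
        sp = sp * self.gate_sp(joint)         # emphasize relevant spatial features
        return self.classifier(torch.cat([tf, sp], dim=-1))

logits = CrossDomainFusion()(torch.randn(8, 128), torch.randn(8, 128))
```
The gate lets each domain's feature selection depend on information from the other domain, which is the cross-domain aspect the abstract emphasizes.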
Related papers
- Apprenticeship-Inspired Elegance: Synergistic Knowledge Distillation Empowers Spiking Neural Networks for Efficient Single-Eye Emotion Recognition [53.359383163184425]
We introduce a novel multimodality synergistic knowledge distillation scheme tailored for efficient single-eye emotion recognition tasks.
This method allows a lightweight, unimodal student spiking neural network (SNN) to extract rich knowledge from an event-frame multimodal teacher network.
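The knowledge-transfer step can be pictured with a standard distillation loss: the sketch below blends soft teacher targets with the hard-label objective, leaving the SNN student and event-frame teacher as placeholders.
```python
# Illustrative distillation loss only; the actual SNN architecture and the
# event-frame multimodal teacher from the paper are not reproduced here.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend soft teacher targets with the hard-label cross-entropy."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

loss = distillation_loss(torch.randn(4, 7), torch.randn(4, 7),
                         torch.randint(0, 7, (4,)))
```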
arXiv Detail & Related papers (2024-06-20T07:24:47Z)
- Multi-modal Mood Reader: Pre-trained Model Empowers Cross-Subject Emotion Recognition [23.505616142198487]
We develop a pre-trained-model-based Multimodal Mood Reader for cross-subject emotion recognition.
The model learns universal latent representations of EEG signals through pre-training on a large-scale dataset.
Extensive experiments on public datasets demonstrate Mood Reader's superior performance in cross-subject emotion recognition tasks.
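A generic pretrain-then-fine-tune skeleton along these lines is sketched below; the masked-reconstruction objective, encoder shape, and channel count are illustrative assumptions rather than the Mood Reader design.
```python
# Hypothetical pretraining step: reconstruct masked EEG samples so the encoder
# learns subject-independent structure before fine-tuning on emotion labels.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, n_ch=62, dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_ch, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, x):               # x: (batch, time, channels)
        return self.net(x)

def pretrain_step(encoder, decoder, x, mask_ratio=0.4):
    mask = torch.rand(x.shape[:2]).unsqueeze(-1) < mask_ratio
    recon = decoder(encoder(x.masked_fill(mask, 0.0)))
    # Score reconstruction only on the masked positions.
    return ((recon - x) ** 2)[mask.expand_as(x)].mean()

encoder, decoder = Encoder(), nn.Linear(128, 62)
loss = pretrain_step(encoder, decoder, torch.randn(2, 200, 62))
```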
arXiv Detail & Related papers (2024-05-28T14:31:11Z)
- Joint Contrastive Learning with Feature Alignment for Cross-Corpus EEG-based Emotion Recognition [2.1645626994550664]
We propose a novel Joint Contrastive learning framework with Feature Alignment (JCFA) to address cross-corpus EEG-based emotion recognition.
In the pre-training stage, a joint domain contrastive learning strategy is introduced to characterize generalizable time-frequency representations of EEG signals.
In the fine-tuning stage, JCFA is refined in conjunction with downstream tasks, where the structural connections among brain electrodes are considered.
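One plausible instantiation of a joint contrastive objective is the NT-Xent loss sketched below, applied to paired embeddings of the same EEG segments from two views or corpora; the temperature and embedding sizes are illustrative.
```python
# Standard NT-Xent contrastive loss, shown as one possible instantiation of
# the joint domain contrastive strategy; not the paper's exact formulation.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.1):
    """z1/z2: (batch, dim) embeddings of two views of the same segments."""
    z = F.normalize(torch.cat([z1, z2]), dim=-1)
    sim = z @ z.t() / tau
    sim.fill_diagonal_(float("-inf"))          # exclude self-similarity
    n = z1.size(0)
    # Row i's positive is its counterpart in the other view.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)

loss = nt_xent(torch.randn(16, 64), torch.randn(16, 64))
```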
arXiv Detail & Related papers (2024-04-15T08:21:17Z)
- Graph Convolutional Network with Connectivity Uncertainty for EEG-based Emotion Recognition [20.655367200006076]
This study introduces the distribution-based uncertainty method to represent spatial dependencies and temporal-spectral relativeness in EEG signals.
The graph mixup technique is employed to enhance latent connected edges and mitigate noisy label issues.
We evaluate our approach on two widely used datasets, namely SEED and SEED-IV, for emotion recognition tasks.
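The graph mixup idea can be sketched as a convex blend of node features, adjacency matrices, and one-hot labels, as below; the uncertainty modeling from the paper is omitted and the Beta parameter is an assumption.
```python
# Sketch of a graph mixup step on electrode-connectivity graphs; the
# distribution-based uncertainty method from the paper is not reproduced.
import torch

def graph_mixup(x1, a1, y1, x2, a2, y2, alpha=0.2):
    """Convexly blend node features, adjacency, and (one-hot) labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    return (lam * x1 + (1 - lam) * x2,
            lam * a1 + (1 - lam) * a2,
            lam * y1 + (1 - lam) * y2)

# 62 electrodes, 5 band-power features per node, 3 emotion classes.
x, a, y = graph_mixup(torch.randn(62, 5), torch.rand(62, 62), torch.tensor([1., 0., 0.]),
                      torch.randn(62, 5), torch.rand(62, 62), torch.tensor([0., 1., 0.]))
```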
arXiv Detail & Related papers (2023-10-22T03:47:11Z)
- EEG-based Emotion Style Transfer Network for Cross-dataset Emotion Recognition [45.26847258736848]
We propose an EEG-based Emotion Style Transfer Network (E2STN) to obtain EEG representations that contain the content information of the source domain and the style information of the target domain.
The E2STN can achieve the state-of-the-art performance on cross-dataset EEG emotion recognition tasks.
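One common mechanism for transferring the "style" of one feature distribution onto another is adaptive instance normalization, sketched below purely as an illustration of the content/style split; it is not the E2STN architecture.
```python
# AdaIN-style re-normalization: keep the content signal's shape but adopt the
# per-channel statistics of the style signal. Shapes are illustrative.
import torch

def adain(content, style, eps=1e-5):
    """content/style: (batch, channels, time) feature tensors."""
    c_mu, c_std = content.mean(-1, keepdim=True), content.std(-1, keepdim=True)
    s_mu, s_std = style.mean(-1, keepdim=True), style.std(-1, keepdim=True)
    return s_std * (content - c_mu) / (c_std + eps) + s_mu

out = adain(torch.randn(8, 62, 200), torch.randn(8, 62, 200))
```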
arXiv Detail & Related papers (2023-08-09T16:54:40Z)
- MSA-GCN: Multiscale Adaptive Graph Convolution Network for Gait Emotion Recognition [6.108523790270448]
We present a novel Multiscale Adaptive Graph Convolution Network (MSA-GCN) to recognize emotions from gait.
In our model, an adaptive selective spatial-temporal convolution is designed to select the convolution kernel dynamically and obtain the soft spatio-temporal features of different emotions.
Compared with previous state-of-the-art methods, the proposed method achieves the best performance on two public datasets.
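Dynamic kernel selection of this kind can be sketched as parallel temporal convolutions combined by a learned gate (SK-style), as below; the kernel sizes and channel counts are assumptions, and the gait-graph part is omitted.
```python
# Selective-kernel-style temporal convolution: run several kernel sizes in
# parallel and let a learned gate weight them per sample. Illustrative only.
import torch
import torch.nn as nn

class SelectiveTemporalConv(nn.Module):
    def __init__(self, ch=16, kernel_sizes=(3, 7, 11)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(ch, ch, k, padding=k // 2) for k in kernel_sizes)
        self.gate = nn.Linear(ch, len(kernel_sizes))

    def forward(self, x):                    # x: (batch, ch, time)
        feats = torch.stack([b(x) for b in self.branches], dim=1)  # (B, K, C, T)
        w = self.gate(x.mean(-1)).softmax(-1)                      # per-sample weights
        return (w[:, :, None, None] * feats).sum(1)

y = SelectiveTemporalConv()(torch.randn(4, 16, 100))
```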
arXiv Detail & Related papers (2022-09-19T13:07:16Z)
- Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT-based models [53.31917090073727]
We propose a neural network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from speech and text modalities.
We evaluate the effectiveness of our proposed multimodal approach on the interactive emotional dyadic motion capture dataset.
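Late fusion itself is simple to sketch: average the class probabilities of independently fine-tuned speech and text models, as below. The fusion weight is an assumption and the backbones are placeholders.
```python
# Minimal late-fusion sketch: combine per-modality class probabilities from
# two already-trained models; the fine-tuned backbones are not shown.
import torch
import torch.nn.functional as F

def late_fusion(speech_logits, text_logits, w_speech=0.5):
    p = w_speech * F.softmax(speech_logits, dim=-1) \
        + (1 - w_speech) * F.softmax(text_logits, dim=-1)
    return p.argmax(dim=-1)

pred = late_fusion(torch.randn(4, 4), torch.randn(4, 4))
```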
arXiv Detail & Related papers (2022-02-16T00:23:42Z)
- Group Gated Fusion on Attention-based Bidirectional Alignment for Multimodal Emotion Recognition [63.07844685982738]
This paper presents a new model named as Gated Bidirectional Alignment Network (GBAN), which consists of an attention-based bidirectional alignment network over LSTM hidden states.
We empirically show that the attention-aligned representations outperform the last-hidden-states of LSTM significantly.
The proposed GBAN model outperforms existing state-of-the-art multimodal approaches on the IEMOCAP dataset.
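The contrast between attention-aligned representations and last hidden states can be illustrated with attention pooling over LSTM outputs plus a learned fusion gate, as sketched below; the cross-modal bidirectional alignment itself is omitted and all sizes are assumptions.
```python
# Attention pooling over per-step LSTM states (instead of taking the last
# state), followed by a gated fusion of two modalities. Illustrative only.
import torch
import torch.nn as nn

class AttnPoolGate(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.score = nn.Linear(dim, 1)
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, h_a, h_b):              # (batch, time, dim) per modality
        def pool(h):
            w = self.score(h).softmax(dim=1)  # attention over time steps
            return (w * h).sum(dim=1)         # weighted sum, not last state
        a, b = pool(h_a), pool(h_b)
        g = self.gate(torch.cat([a, b], dim=-1))
        return g * a + (1 - g) * b            # per-dimension gated fusion

z = AttnPoolGate()(torch.randn(2, 50, 64), torch.randn(2, 30, 64))
```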
arXiv Detail & Related papers (2022-01-17T09:46:59Z)
- MD-CSDNetwork: Multi-Domain Cross Stitched Network for Deepfake Detection [80.83725644958633]
Current deepfake generation methods leave discriminative artifacts in the frequency spectrum of fake images and videos.
We present a novel approach, termed as MD-CSDNetwork, for combining the features in the spatial and frequency domains to mine a shared discriminative representation.
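A cross-stitch unit, in its classic form, learns a small mixing matrix between two streams; the sketch below applies that unit to spatial- and frequency-domain activations, with the near-identity initialization as an assumption.
```python
# Classic cross-stitch unit: learn linear mixes of two domain streams so each
# branch can borrow from the other. Shapes are illustrative.
import torch
import torch.nn as nn

class CrossStitch(nn.Module):
    def __init__(self):
        super().__init__()
        # 2x2 mixing matrix, initialized near identity so streams start domain-pure.
        self.alpha = nn.Parameter(torch.tensor([[0.9, 0.1], [0.1, 0.9]]))

    def forward(self, spatial, freq):
        a = self.alpha
        return (a[0, 0] * spatial + a[0, 1] * freq,
                a[1, 0] * spatial + a[1, 1] * freq)

s, f = CrossStitch()(torch.randn(2, 32, 8, 8), torch.randn(2, 32, 8, 8))
```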
arXiv Detail & Related papers (2021-09-15T14:11:53Z)
- Emotional EEG Classification using Connectivity Features and Convolutional Neural Networks [81.74442855155843]
We introduce a new classification system that utilizes brain connectivity with a CNN and validate its effectiveness via the emotional video classification.
The level of concentration of the brain connectivity related to the emotional property of the target video is correlated with classification performance.
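One simple connectivity feature is the Pearson correlation between channel signals, arranged as an image-like matrix for a CNN; the sketch below shows that baseline, though the paper's specific connectivity measures may differ.
```python
# Channel-by-channel Pearson correlation as a connectivity "image" for a CNN.
import torch

def correlation_connectivity(x):
    """x: (channels, time) -> (channels, channels) correlation matrix."""
    x = x - x.mean(dim=1, keepdim=True)
    x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
    return x @ x.t()

conn = correlation_connectivity(torch.randn(62, 1000))
# conn[None, None] gives a (1, 1, 62, 62) tensor suitable as CNN input.
```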
arXiv Detail & Related papers (2021-01-18T13:28:08Z)
- Multi-Scale Neural Network for EEG Representation Learning in BCI [2.105172041656126]
We propose a novel deep multi-scale neural network that discovers feature representations in multiple frequency/time ranges.
By representing EEG signals with spectral-temporal information, the proposed method can be utilized for diverse paradigms.
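Multi-scale temporal feature extraction can be sketched as parallel 1-D convolutions with different kernel lengths whose outputs are concatenated, as below; the channel count and scales are illustrative assumptions.
```python
# Parallel temporal convolutions at several kernel lengths: short kernels
# capture fast (high-frequency) structure, long kernels slow structure.
import torch
import torch.nn as nn

class MultiScaleEEG(nn.Module):
    def __init__(self, n_ch=62, ch_out=8, kernels=(25, 75, 125)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(n_ch, ch_out, k, padding=k // 2) for k in kernels)

    def forward(self, x):                     # x: (batch, channels, time)
        # Concatenate per-scale features along the channel axis.
        return torch.cat([b(x) for b in self.branches], dim=1)

feat = MultiScaleEEG()(torch.randn(2, 62, 500))  # (2, 24, 500)
```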
arXiv Detail & Related papers (2020-03-02T04:06:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.