Graph Convolutional Network with Connectivity Uncertainty for EEG-based
Emotion Recognition
- URL: http://arxiv.org/abs/2310.14165v1
- Date: Sun, 22 Oct 2023 03:47:11 GMT
- Title: Graph Convolutional Network with Connectivity Uncertainty for EEG-based
Emotion Recognition
- Authors: Hongxiang Gao, Xiangyao Wang, Zhenghua Chen, Min Wu, Zhipeng Cai, Lulu
Zhao, Jianqing Li and Chengyu Liu
- Abstract summary: This study introduces a distribution-based uncertainty method to represent spatial dependencies and temporal-spectral relationships in EEG signals.
A graph mixup technique is employed to enhance latent connected edges and mitigate noisy-label issues.
We evaluate our approach on two widely used datasets, namely SEED and SEEDIV, for emotion recognition tasks.
- Score: 20.655367200006076
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic emotion recognition based on multichannel Electroencephalography
(EEG) holds great potential in advancing human-computer interaction. However,
several significant challenges persist in existing research on algorithmic
emotion recognition. These challenges include the need for a robust model to
effectively learn discriminative node attributes over long paths, the
exploration of ambiguous topological information in EEG channels and effective
frequency bands, and the mapping between intrinsic data qualities and provided
labels. To address these challenges, this study introduces a distribution-based
uncertainty method, built on a Graph Convolutional Network (GCN) architecture,
to represent spatial dependencies and temporal-spectral relationships in EEG
signals. The architecture adaptively assigns weights to functionally aggregated
node features, enabling effective long-path capture while mitigating the
over-smoothing phenomenon. Moreover, a graph mixup technique is employed to
enhance latent connected edges and mitigate noisy-label issues.
Furthermore, we integrate the uncertainty learning method with deep GCN weights
in a one-way learning fashion, termed Connectivity Uncertainty GCN (CU-GCN). We
evaluate our approach on two widely used datasets, namely SEED and SEEDIV, for
emotion recognition tasks. The experimental results demonstrate the superiority
of our methodology over previous methods, yielding positive and significant
improvements. Ablation studies confirm the substantial contributions of each
component to the overall performance.
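The listing above contains no code; the following is a minimal, illustrative sketch (PyTorch, with hypothetical names such as UncertainGCNLayer and graph_mixup) of the two ideas the abstract describes: a GCN-style layer whose connectivity weights are drawn from a learned distribution, and a mixup-style interpolation of graph samples and labels. It is an assumption-based illustration, not the authors' CU-GCN implementation.

```python
# Illustrative sketch only -- not the authors' CU-GCN implementation.
# Assumes PyTorch; x has shape (n_nodes, n_features).
import torch
import torch.nn as nn


class UncertainGCNLayer(nn.Module):
    """GCN-style layer whose channel-connectivity weights are sampled from a
    learned Gaussian, a simple way to model connectivity uncertainty."""

    def __init__(self, in_dim: int, out_dim: int, n_nodes: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        # Mean and log-variance of the learned connectivity distribution.
        self.adj_mu = nn.Parameter(0.01 * torch.randn(n_nodes, n_nodes))
        self.adj_logvar = nn.Parameter(torch.zeros(n_nodes, n_nodes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Reparameterization trick: sample connectivity weights.
            eps = torch.randn_like(self.adj_mu)
            adj = self.adj_mu + eps * torch.exp(0.5 * self.adj_logvar)
        else:
            adj = self.adj_mu
        adj = torch.softmax(adj, dim=-1)      # row-normalized aggregation weights
        return torch.relu(adj @ self.lin(x))  # weighted neighbor aggregation


def graph_mixup(x1, y1, x2, y2, alpha: float = 0.2):
    """Interpolate node features and one-hot labels of two EEG graph samples
    (a generic mixup variant, shown only for illustration)."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```

Stacking several such layers and regularizing the learned log-variance (for example with a KL term) would correspond loosely, under these assumptions, to the deep-GCN-plus-uncertainty-learning setup the abstract mentions.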
Related papers
- MEEG and AT-DGNN: Improving EEG Emotion Recognition with Music Introducing and Graph-based Learning [3.840859750115109]
We present the MEEG dataset, a multi-modal collection of music-induced electroencephalogram (EEG) recordings.
We introduce the Attention-based Temporal Learner with Dynamic Graph Neural Network (AT-DGNN), a novel framework for EEG-based emotion recognition.
arXiv Detail & Related papers (2024-07-08T01:58:48Z)
- A Knowledge-Driven Cross-view Contrastive Learning for EEG Representation [48.85731427874065]
This paper proposes a knowledge-driven cross-view contrastive learning framework (KDC2) to extract effective representations from EEG with limited labels.
The KDC2 method creates scalp and neural views of EEG signals, simulating the internal and external representation of brain activity.
By modeling prior neural knowledge based on neural information consistency theory, the proposed method extracts invariant and complementary neural knowledge to generate combined representations.
arXiv Detail & Related papers (2023-09-21T08:53:51Z)
- DGSD: Dynamical Graph Self-Distillation for EEG-Based Auditory Spatial Attention Detection [49.196182908826565]
Auditory Attention Detection (AAD) aims to detect the target speaker from brain signals in a multi-speaker environment.
Current approaches primarily rely on traditional convolutional neural networks designed for processing Euclidean data such as images.
This paper proposes a dynamical graph self-distillation (DGSD) approach for AAD, which does not require speech stimuli as input.
arXiv Detail & Related papers (2023-09-07T13:43:46Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over state-of-the-art methods and can serve as a simple yet strong baseline in this under-developed area.
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- Towards Unbiased Visual Emotion Recognition via Causal Intervention [63.74095927462]
We propose a novel Interventional Emotion Recognition Network (IERN) to alleviate the negative effects of dataset bias.
A series of designed tests validate the effectiveness of IERN, and experiments on three emotion benchmarks demonstrate that IERN outperforms other state-of-the-art approaches.
arXiv Detail & Related papers (2021-07-26T10:40:59Z)
- EEG-based Cross-Subject Driver Drowsiness Recognition with an Interpretable Convolutional Neural Network [0.0]
We develop a novel convolutional neural network combined with an interpretation technique that allows sample-wise analysis of important features for classification.
Results show that the model achieves an average accuracy of 78.35% on 11 subjects for leave-one-out cross-subject recognition.
arXiv Detail & Related papers (2021-05-30T14:47:20Z)
- SFE-Net: EEG-based Emotion Recognition with Symmetrical Spatial Feature Extraction [1.8047694351309205]
We present a spatial folding ensemble network (SFENet) for EEG feature extraction and emotion recognition.
Motivated by the spatial symmetry of the human brain, we fold the input EEG channel data with five different symmetrical strategies.
With this network, the spatial features of the differently folded signals can be extracted simultaneously, which greatly improves the robustness and accuracy of feature recognition.
arXiv Detail & Related papers (2021-04-09T12:59:38Z)
- Emotional EEG Classification using Connectivity Features and Convolutional Neural Networks [81.74442855155843]
We introduce a new classification system that utilizes brain connectivity with a CNN and validate its effectiveness via emotional video classification.
The level of concentration of the brain connectivity related to the emotional property of the target video is correlated with classification performance.
arXiv Detail & Related papers (2021-01-18T13:28:08Z)
- Uncovering the structure of clinical EEG signals with self-supervised learning [64.4754948595556]
Supervised learning paradigms are often limited by the amount of labeled data that is available.
This limitation is particularly problematic for clinically relevant data such as electroencephalography (EEG).
By extracting information from unlabeled data, it might be possible to reach competitive performance with deep neural networks.
arXiv Detail & Related papers (2020-07-31T14:34:47Z)
- GCNs-Net: A Graph Convolutional Neural Network Approach for Decoding Time-resolved EEG Motor Imagery Signals [8.19994663278877]
A novel deep learning framework based on graph convolutional neural networks (GCNs) is presented to enhance the decoding performance of raw EEG signals.
The introduced approach has been shown to converge for both personalized and group-wise predictions.
arXiv Detail & Related papers (2020-06-16T04:57:12Z)
- Investigating EEG-Based Functional Connectivity Patterns for Multimodal Emotion Recognition [8.356765961526955]
We investigate three functional connectivity network features: strength, clustering coefficient, and eigenvector centrality.
The discrimination ability of the EEG connectivity features in emotion recognition is evaluated on three public EEG datasets.
We construct a multimodal emotion recognition model by combining the functional connectivity features from EEG and the features from eye movements or physiological signals.
arXiv Detail & Related papers (2020-04-04T16:51:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.