MVGT: A Multi-view Graph Transformer Based on Spatial Relations for EEG Emotion Recognition
- URL: http://arxiv.org/abs/2407.03131v4
- Date: Thu, 16 Jan 2025 02:54:35 GMT
- Title: MVGT: A Multi-view Graph Transformer Based on Spatial Relations for EEG Emotion Recognition
- Authors: Yanjie Cui, Xiaohong Liu, Jing Liang, Yamin Fu
- Abstract summary: We introduce a multi-view graph transformer (MVGT) based on spatial relations that integrates information across three domains. Evaluation on publicly available datasets demonstrates that MVGT surpasses state-of-the-art methods in performance.
- Score: 4.184462746475896
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Electroencephalography (EEG), a technique that records electrical activity from the scalp using electrodes, plays a vital role in affective computing. However, fully utilizing the multi-domain characteristics of EEG signals remains a significant challenge. Traditional single-perspective analyses often fail to capture the complex interplay of temporal, frequency, and spatial dimensions in EEG data. To address this, we introduce a multi-view graph transformer (MVGT) based on spatial relations that integrates information across three domains: temporal dynamics from continuous series, frequency features extracted from frequency bands, and inter-channel relationships captured through several spatial encodings. This comprehensive approach allows the model to capture the nuanced properties inherent in EEG signals, enhancing its flexibility and representational power. Evaluation on publicly available datasets demonstrates that MVGT surpasses state-of-the-art methods in performance. The results highlight its ability to extract multi-domain information and effectively model inter-channel relationships, showcasing its potential for EEG-based emotion recognition tasks.
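To make the multi-view idea concrete, below is a minimal PyTorch sketch of channel-level attention that combines a temporal view (per-channel statistics), a frequency view (band powers), and a spatial view (a distance-based bias on the attention logits). The feature choices, layer sizes, and bias form are illustrative assumptions, not the paper's actual MVGT architecture or its spatial encodings.

```python
# Hedged sketch of multi-view channel attention over EEG, assuming:
# temporal view = per-channel mean/std, frequency view = band powers,
# spatial view = distance-based bias subtracted from attention logits.
import torch
import torch.nn as nn


class MultiViewChannelAttention(nn.Module):
    def __init__(self, n_bands: int, d_model: int = 64):
        super().__init__()
        self.temporal_proj = nn.Linear(2, d_model)         # mean/std per channel
        self.frequency_proj = nn.Linear(n_bands, d_model)  # band powers per channel
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        self.spatial_scale = nn.Parameter(torch.tensor(1.0))
        self.d_model = d_model

    def forward(self, x, band_power, electrode_dist):
        # x: (B, C, T) raw series; band_power: (B, C, n_bands);
        # electrode_dist: (C, C) pairwise electrode distances.
        temporal = torch.stack([x.mean(-1), x.std(-1)], dim=-1)  # (B, C, 2)
        h = self.temporal_proj(temporal) + self.frequency_proj(band_power)
        q, k, v = self.qkv(h).chunk(3, dim=-1)
        logits = q @ k.transpose(-2, -1) / self.d_model ** 0.5   # (B, C, C)
        # Spatial encoding: nearby electrodes attend to each other more.
        logits = logits - self.spatial_scale * electrode_dist
        return self.out(logits.softmax(dim=-1) @ v)


# Toy usage: batch of 8 trials, 32 channels, 256 samples, 4 frequency bands.
x = torch.randn(8, 32, 256)
band_power = torch.rand(8, 32, 4)
dist = torch.rand(32, 32)
dist = (dist + dist.T) / 2          # symmetric stand-in for scalp distances
print(MultiViewChannelAttention(n_bands=4)(x, band_power, dist).shape)
# torch.Size([8, 32, 64])
```

Subtracting a scaled distance term is only one way to inject spatial structure into attention; the paper's several spatial encodings are richer than this single bias.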
Related papers
- CognitionCapturer: Decoding Visual Stimuli From Human EEG Signal With Multimodal Information [61.1904164368732]
We propose CognitionCapturer, a unified framework that fully leverages multimodal data to represent EEG signals.
Specifically, CognitionCapturer trains Modality Experts for each modality to extract cross-modal information from the EEG modality.
The framework does not require any fine-tuning of the generative models and can be extended to incorporate more modalities.
arXiv Detail & Related papers (2024-12-13T16:27:54Z)
- DuA: Dual Attentive Transformer in Long-Term Continuous EEG Emotion Analysis [15.858955204180907]
We propose a Dual Attentive (DuA) transformer framework for long-term continuous EEG emotion analysis.
Unlike segment-based approaches, the DuA transformer processes an entire EEG trial as a whole, identifying emotions at the trial level.
This framework is designed to adapt to varying signal lengths, providing a substantial advantage over traditional methods.
arXiv Detail & Related papers (2024-07-30T03:31:03Z)
- MEEG and AT-DGNN: Improving EEG Emotion Recognition with Music Introducing and Graph-based Learning [3.840859750115109]
We present the MEEG dataset, a multi-modal collection of music-induced electroencephalogram (EEG) recordings.
We introduce the Attention-based Temporal Learner with Dynamic Graph Neural Network (AT-DGNN), a novel framework for EEG-based emotion recognition.
arXiv Detail & Related papers (2024-07-08T01:58:48Z)
- Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE).
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z)
- Dynamic GNNs for Precise Seizure Detection and Classification from EEG Data [6.401370088497331]
This paper introduces NeuroGNN, a dynamic Graph Neural Network (GNN) framework that captures the interplay between the EEG locations and the semantics of their corresponding brain regions.
Our experiments with real-world data demonstrate that NeuroGNN significantly outperforms existing state-of-the-art models.
arXiv Detail & Related papers (2024-05-08T21:36:49Z)
- Joint Contrastive Learning with Feature Alignment for Cross-Corpus EEG-based Emotion Recognition [2.1645626994550664]
We propose a novel Joint Contrastive learning framework with Feature Alignment to address cross-corpus EEG-based emotion recognition.
In the pre-training stage, a joint domain contrastive learning strategy is introduced to characterize generalizable time-frequency representations of EEG signals.
In the fine-tuning stage, JCFA is refined in conjunction with downstream tasks, where the structural connections among brain electrodes are considered.
arXiv Detail & Related papers (2024-04-15T08:21:17Z)
- Physics-informed and Unsupervised Riemannian Domain Adaptation for Machine Learning on Heterogeneous EEG Datasets [53.367212596352324]
We propose an unsupervised approach leveraging EEG signal physics.
We map EEG channels to fixed positions using field interpolation, enabling source-free domain adaptation.
Our method demonstrates robust performance in brain-computer interface (BCI) tasks and potential biomarker applications.
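As a rough illustration of mapping heterogeneous montages onto fixed positions, the sketch below resamples source channels onto a target layout with inverse-distance weighting. The paper's physics-informed field interpolation is more principled; this simplified weighting is only an assumed stand-in for the mapping step.

```python
# Hedged sketch: resample a heterogeneous EEG montage onto fixed target
# positions with inverse-distance weighting (illustrative, not the paper's
# physics-based field interpolation).
import numpy as np


def interpolate_to_montage(signals: np.ndarray,
                           src_pos: np.ndarray,
                           dst_pos: np.ndarray,
                           eps: float = 1e-6) -> np.ndarray:
    """signals: (n_src_channels, n_times); positions: (n, 3) on the scalp."""
    # Pairwise distances between target and source electrodes.
    d = np.linalg.norm(dst_pos[:, None, :] - src_pos[None, :, :], axis=-1)
    w = 1.0 / (d + eps)                 # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)   # normalize per target channel
    return w @ signals                  # (n_dst_channels, n_times)


# Toy usage: map a 19-channel recording onto a fixed 32-position layout.
rng = np.random.default_rng(0)
src = rng.normal(size=(19, 3)); src /= np.linalg.norm(src, axis=1, keepdims=True)
dst = rng.normal(size=(32, 3)); dst /= np.linalg.norm(dst, axis=1, keepdims=True)
x = rng.normal(size=(19, 1000))
print(interpolate_to_montage(x, src, dst).shape)  # (32, 1000)
```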
arXiv Detail & Related papers (2024-03-07T16:17:33Z)
- Multi-Source Domain Adaptation with Transformer-based Feature Generation for Subject-Independent EEG-based Emotion Recognition [0.5439020425819]
We propose a multi-source domain adaptation approach with a transformer-based feature generator (MSDA-TF) designed to leverage information from multiple sources.
During the adaptation process, we group the source subjects based on correlation values and aim to align the moments of the target subject with each source as well as within the sources.
MSDA-TF is validated on the SEED dataset and is shown to yield promising results.
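A minimal sketch of the moment-alignment idea follows, assuming a simple mean-and-variance matching loss between target features and each source; the actual MSDA-TF objective and its correlation-based grouping are not reproduced here.

```python
# Hedged sketch of moment alignment for multi-source domain adaptation:
# match feature means and variances of the target to each source subject.
import torch


def moment_alignment_loss(source_feats: list[torch.Tensor],
                          target_feat: torch.Tensor) -> torch.Tensor:
    loss = torch.tensor(0.0)
    for s in source_feats:
        loss = loss + (s.mean(0) - target_feat.mean(0)).pow(2).sum()
        loss = loss + (s.var(0) - target_feat.var(0)).pow(2).sum()
    return loss / len(source_feats)


# Toy usage: three source subjects, one target subject, 128-dim features.
sources = [torch.randn(64, 128) for _ in range(3)]
target = torch.randn(64, 128)
print(moment_alignment_loss(sources, target))
```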
arXiv Detail & Related papers (2024-01-04T16:38:47Z)
- Learning Robust Deep Visual Representations from EEG Brain Recordings [13.768240137063428]
This study proposes a two-stage method where the first step is to obtain EEG-derived features for robust learning of deep representations.
We demonstrate the generalizability of our feature extraction pipeline across three different datasets using deep-learning architectures.
We propose a novel framework to transform unseen images into the EEG space and reconstruct them approximately.
arXiv Detail & Related papers (2023-10-25T10:26:07Z)
- Graph Convolutional Network with Connectivity Uncertainty for EEG-based Emotion Recognition [20.655367200006076]
This study introduces the distribution-based uncertainty method to represent spatial dependencies and temporal-spectral relativeness in EEG signals.
The graph mixup technique is employed to enhance latent connected edges and mitigate noisy label issues.
We evaluate our approach on two widely used datasets, namely SEED and SEEDIV, for emotion recognition tasks.
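For intuition, here is a minimal sketch of mixup applied to EEG connectivity graphs (adjacency, node features, and labels); the paper's graph mixup over latent connected edges may differ in detail.

```python
# Hedged sketch of graph mixup: convex combination of two EEG connectivity
# graphs and their labels, in the spirit of standard mixup.
import torch


def graph_mixup(adj_a, feat_a, y_a, adj_b, feat_b, y_b, alpha: float = 0.2):
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    adj = lam * adj_a + (1 - lam) * adj_b     # mixed adjacency (C, C)
    feat = lam * feat_a + (1 - lam) * feat_b  # mixed node features (C, F)
    y = lam * y_a + (1 - lam) * y_b           # mixed soft labels
    return adj, feat, y


# Toy usage: two 62-channel EEG graphs with 5-dim node features, 3 classes.
C, F = 62, 5
a, b = torch.rand(C, C), torch.rand(C, C)
adj, feat, y = graph_mixup(a, torch.randn(C, F), torch.tensor([1., 0., 0.]),
                           b, torch.randn(C, F), torch.tensor([0., 1., 0.]))
print(adj.shape, feat.shape, y)
```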
arXiv Detail & Related papers (2023-10-22T03:47:11Z)
- A Knowledge-Driven Cross-view Contrastive Learning for EEG Representation [48.85731427874065]
This paper proposes a knowledge-driven cross-view contrastive learning framework (KDC2) to extract effective representations from EEG with limited labels.
The KDC2 method creates scalp and neural views of EEG signals, simulating the internal and external representation of brain activity.
By modeling prior neural knowledge based on neural information consistency theory, the proposed method extracts invariant and complementary neural knowledge to generate combined representations.
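A minimal sketch of a cross-view contrastive objective, assuming the standard InfoNCE (NT-Xent) loss between scalp-view and neural-view embeddings of the same segments; KDC2's knowledge-driven terms are not modeled here.

```python
# Hedged sketch of cross-view contrastive learning: matching rows of the
# two view embeddings are positives, all other rows are negatives.
import torch
import torch.nn.functional as F


def cross_view_infonce(z_scalp: torch.Tensor, z_neural: torch.Tensor,
                       tau: float = 0.1) -> torch.Tensor:
    """z_*: (batch, dim) embeddings; matching rows are positive pairs."""
    z1 = F.normalize(z_scalp, dim=-1)
    z2 = F.normalize(z_neural, dim=-1)
    logits = z1 @ z2.T / tau                  # (B, B) similarity matrix
    labels = torch.arange(z1.size(0))         # positives on the diagonal
    return F.cross_entropy(logits, labels)


# Toy usage: 32 segments embedded into 128 dimensions by each view encoder.
print(cross_view_infonce(torch.randn(32, 128), torch.randn(32, 128)))
```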
arXiv Detail & Related papers (2023-09-21T08:53:51Z)
- DGSD: Dynamical Graph Self-Distillation for EEG-Based Auditory Spatial Attention Detection [49.196182908826565]
Auditory Attention Detection (AAD) aims to detect the target speaker from brain signals in a multi-speaker environment.
Current approaches primarily rely on traditional convolutional neural networks designed for processing Euclidean data such as images.
This paper proposes a dynamical graph self-distillation (DGSD) approach for AAD, which does not require speech stimuli as input.
arXiv Detail & Related papers (2023-09-07T13:43:46Z)
- DAT++: Spatially Dynamic Vision Transformer with Deformable Attention [87.41016963608067]
We present the Deformable Attention Transformer (DAT++), an efficient and effective vision backbone for visual recognition.
DAT++ achieves state-of-the-art results on various visual recognition benchmarks, with 85.9% ImageNet accuracy, 54.5 and 47.0 MS-COCO instance segmentation mAP, and 51.5 ADE20K semantic segmentation mIoU.
arXiv Detail & Related papers (2023-09-04T08:26:47Z)
- fMRI from EEG is only Deep Learning away: the use of interpretable DL to unravel EEG-fMRI relationships [68.8204255655161]
We present an interpretable domain grounded solution to recover the activity of several subcortical regions from multichannel EEG data.
We recover individual spatial and time-frequency patterns of scalp EEG predictive of the hemodynamic signal in the subcortical nuclei.
arXiv Detail & Related papers (2022-10-23T15:11:37Z)
- Data augmentation for learning predictive models on EEG: a systematic comparison [79.84079335042456]
Deep learning for electroencephalography (EEG) classification tasks has grown rapidly in recent years.
However, progress has been limited by the relatively small size of EEG datasets.
Data augmentation has been a key ingredient in obtaining state-of-the-art performance across applications such as computer vision and speech.
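For reference, the sketch below applies three augmentations commonly benchmarked in such comparisons (additive Gaussian noise, channel dropout, time masking); the parameter values are illustrative defaults, not the study's evaluated settings.

```python
# Hedged sketch of common EEG augmentations on a single (channels, times)
# trial: additive noise, random channel dropout, and a masked time window.
import numpy as np


def augment_eeg(x: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    x = x + rng.normal(0.0, 0.1 * x.std(), size=x.shape)  # additive noise
    drop = rng.random(x.shape[0]) < 0.1                   # drop ~10% channels
    x[drop] = 0.0
    t0 = rng.integers(0, x.shape[1] // 2)                 # mask a time window
    x[:, t0:t0 + x.shape[1] // 10] = 0.0
    return x


rng = np.random.default_rng(42)
trial = rng.normal(size=(22, 1000))
print(augment_eeg(trial, rng).shape)  # (22, 1000)
```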
arXiv Detail & Related papers (2022-06-29T09:18:15Z)
- Spatio-Temporal Analysis of Transformer based Architecture for Attention Estimation from EEG [2.7076510056452654]
We present a novel framework allowing us to retrieve the attention state, i.e., the degree of attention given to a specific task, from EEG signals.
While previous methods often consider the spatial relationship in EEG through electrodes, we propose here to also exploit the spatial and temporal information with a transformer-based network.
The proposed network has been trained and validated on two public datasets and achieves better results than state-of-the-art models.
arXiv Detail & Related papers (2022-04-04T08:05:33Z) - PhysFormer: Facial Video-based Physiological Measurement with Temporal
Difference Transformer [55.936527926778695]
Recent deep learning approaches focus on mining subtle rPPG clues using convolutional neural networks with limited spatio-temporal receptive fields.
In this paper, we propose PhysFormer, an end-to-end video transformer-based architecture.
arXiv Detail & Related papers (2021-11-23T18:57:11Z) - EEG-ConvTransformer for Single-Trial EEG based Visual Stimuli
Classification [5.076419064097734]
This work introduces an EEG-ConvTransformer network that is based on multi-headed self-attention.
It achieves improved classification accuracy over the state-of-the-art techniques across five different visual stimuli classification tasks.
arXiv Detail & Related papers (2021-07-08T17:22:04Z) - SFE-Net: EEG-based Emotion Recognition with Symmetrical Spatial Feature
Extraction [1.8047694351309205]
We present a spatial folding ensemble network (SFENet) for EEG feature extraction and emotion recognition.
Motivated by the spatial symmetry mechanism of human brain, we fold the input EEG channel data with five different symmetrical strategies.
With this network, the spatial features of different symmetric folding signals can be extracted simultaneously, which greatly improves the robustness and accuracy of feature recognition.
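To illustrate one possible symmetric folding, the sketch below pairs left-hemisphere channels with their right-hemisphere mirrors and stacks the two halves so that symmetric electrodes share an index; the pairings and this particular strategy are assumptions, not SFENet's five folding strategies.

```python
# Hedged sketch of a left-right symmetric fold over EEG channels.
import numpy as np

# Hypothetical mirror pairs (left index, right index) in a channel montage,
# e.g., F3/F4, C3/C4, P3/P4.
MIRROR_PAIRS = [(0, 1), (2, 3), (4, 5)]


def fold_left_right(x: np.ndarray) -> np.ndarray:
    """x: (channels, times) -> (2, pairs, times) folded representation."""
    left = np.stack([x[l] for l, _ in MIRROR_PAIRS])
    right = np.stack([x[r] for _, r in MIRROR_PAIRS])
    return np.stack([left, right])  # symmetric channels now align


x = np.random.default_rng(1).normal(size=(6, 256))
print(fold_left_right(x).shape)  # (2, 3, 256)
```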
arXiv Detail & Related papers (2021-04-09T12:59:38Z)
- Emotional EEG Classification using Connectivity Features and Convolutional Neural Networks [81.74442855155843]
We introduce a new classification system that utilizes brain connectivity with a CNN and validate its effectiveness via emotional video classification.
The level of concentration of the brain connectivity related to the emotional property of the target video is correlated with classification performance.
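A minimal sketch of one common connectivity feature, assuming Pearson correlation between channels as the connectivity measure; the paper's specific connectivity definitions may differ.

```python
# Hedged sketch: build a channel-by-channel connectivity matrix and treat
# it as a single-channel image for a CNN classifier.
import numpy as np


def connectivity_matrix(x: np.ndarray) -> np.ndarray:
    """x: (channels, times) -> (channels, channels) Pearson correlations."""
    return np.corrcoef(x)


# Toy usage: a 32-channel trial becomes a 32x32 "image" for a 2-D CNN input.
x = np.random.default_rng(7).normal(size=(32, 512))
print(connectivity_matrix(x).shape)  # (32, 32)
```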
arXiv Detail & Related papers (2021-01-18T13:28:08Z)
- Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)