MVGT: A Multi-view Graph Transformer Based on Spatial Relations for EEG Emotion Recognition
- URL: http://arxiv.org/abs/2407.03131v4
- Date: Thu, 16 Jan 2025 02:54:35 GMT
- Title: MVGT: A Multi-view Graph Transformer Based on Spatial Relations for EEG Emotion Recognition
- Authors: Yanjie Cui, Xiaohong Liu, Jing Liang, Yamin Fu
- Abstract summary: We introduce a multi-view graph transformer (MVGT) based on spatial relations that integrates information across three domains.
Evaluation on publicly available datasets demonstrates that MVGT surpasses state-of-the-art methods in performance.
- Score: 4.184462746475896
- Abstract: Electroencephalography (EEG), a technique that records electrical activity from the scalp using electrodes, plays a vital role in affective computing. However, fully utilizing the multi-domain characteristics of EEG signals remains a significant challenge. Traditional single-perspective analyses often fail to capture the complex interplay of temporal, frequency, and spatial dimensions in EEG data. To address this, we introduce a multi-view graph transformer (MVGT) based on spatial relations that integrates information across three domains: temporal dynamics from continuous series, frequency features extracted from frequency bands, and inter-channel relationships captured through several spatial encodings. This comprehensive approach allows the model to capture the nuanced properties inherent in EEG signals, enhancing its flexibility and representational power. Evaluation on publicly available datasets demonstrates that MVGT surpasses state-of-the-art methods in performance. The results highlight its ability to extract multi-domain information and effectively model inter-channel relationships, showcasing its potential for EEG-based emotion recognition tasks.
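The abstract describes the architecture only at a high level. As a rough illustration (not the authors' implementation), the sketch below shows one plausible way to combine the three views in PyTorch: each electrode receives a temporal embedding from its continuous series and a frequency embedding from band features, while a pairwise electrode-distance matrix is projected into a per-head bias that acts as the spatial encoding inside multi-head attention. All module names, dimensions, and the distance-based bias are illustrative assumptions.

```python
# Minimal sketch of a multi-view EEG encoder in the spirit of MVGT.
# Names, dimensions, and the distance-based attention bias are assumptions,
# not the authors' published implementation.
import torch
import torch.nn as nn


class MultiViewEEGEncoder(nn.Module):
    def __init__(self, n_bands=5, d_model=64, n_heads=4, n_classes=3):
        super().__init__()
        # Temporal view: summarize each channel's continuous series.
        self.temporal = nn.GRU(input_size=1, hidden_size=d_model, batch_first=True)
        # Frequency view: embed per-band features (e.g., differential entropy).
        self.frequency = nn.Linear(n_bands, d_model)
        # Spatial view: project inter-electrode distances into one bias per head.
        self.dist_proj = nn.Linear(1, n_heads)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.classifier = nn.Linear(d_model, n_classes)  # e.g., 3 emotion classes
        self.n_heads = n_heads

    def forward(self, x_time, x_freq, dist):
        # x_time: (batch, channels, times); x_freq: (batch, channels, bands)
        # dist:   (channels, channels) pairwise electrode distances
        b, c, t = x_time.shape
        _, h_t = self.temporal(x_time.reshape(b * c, t, 1))        # (1, b*c, d)
        h_t = h_t.squeeze(0).reshape(b, c, -1)                     # temporal tokens
        h_f = self.frequency(x_freq)                               # frequency tokens
        tokens = h_t + h_f                                         # fuse per channel
        # Spatial encoding: additive attention bias, broadcast over the batch.
        bias = self.dist_proj(dist.unsqueeze(-1)).permute(2, 0, 1)  # (heads, c, c)
        bias = bias.repeat(b, 1, 1)                                 # (b*heads, c, c)
        out, _ = self.attn(tokens, tokens, tokens, attn_mask=bias)
        return self.classifier(out.mean(dim=1))                     # trial-level logits


# Example forward pass with random tensors shaped like a SEED-style trial.
model = MultiViewEEGEncoder()
logits = model(torch.randn(8, 62, 200), torch.randn(8, 62, 5), torch.rand(62, 62))
print(logits.shape)  # torch.Size([8, 3])
```

One design choice worth noting in this sketch: the spatial view enters as an attention bias rather than as extra tokens, so the token count stays equal to the number of electrodes.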
Related papers
- CognitionCapturer: Decoding Visual Stimuli From Human EEG Signal With Multimodal Information [61.1904164368732]
We propose CognitionCapturer, a unified framework that fully leverages multimodal data to represent EEG signals.
Specifically, CognitionCapturer trains Modality Experts for each modality to extract cross-modal information from the EEG modality.
The framework does not require any fine-tuning of the generative models and can be extended to incorporate more modalities.
arXiv Detail & Related papers (2024-12-13T16:27:54Z) - DuA: Dual Attentive Transformer in Long-Term Continuous EEG Emotion Analysis [15.858955204180907]
We propose a Dual Attentive (DuA) transformer framework for long-term continuous EEG emotion analysis.
Unlike segment-based approaches, the DuA transformer processes an entire EEG trial as a whole, identifying emotions at the trial level.
This framework is designed to adapt to varying signal lengths, providing a substantial advantage over traditional methods.
arXiv Detail & Related papers (2024-07-30T03:31:03Z) - Joint Contrastive Learning with Feature Alignment for Cross-Corpus EEG-based Emotion Recognition [2.1645626994550664]
We propose a novel Joint Contrastive learning framework with Feature Alignment (JCFA) to address cross-corpus EEG-based emotion recognition.
In the pre-training stage, a joint domain contrastive learning strategy is introduced to characterize generalizable time-frequency representations of EEG signals.
In the fine-tuning stage, JCFA is refined in conjunction with downstream tasks, where the structural connections among brain electrodes are considered.
arXiv Detail & Related papers (2024-04-15T08:21:17Z) - Physics-informed and Unsupervised Riemannian Domain Adaptation for Machine Learning on Heterogeneous EEG Datasets [53.367212596352324]
We propose an unsupervised approach leveraging EEG signal physics.
We map EEG channels to fixed positions using field interpolation, facilitating source-free domain adaptation.
Our method demonstrates robust performance in brain-computer interface (BCI) tasks and potential biomarker applications.
arXiv Detail & Related papers (2024-03-07T16:17:33Z) - Multi-Source Domain Adaptation with Transformer-based Feature Generation
for Subject-Independent EEG-based Emotion Recognition [0.5439020425819]
We propose a multi-source domain adaptation approach with a transformer-based feature generator (MSDA-TF) designed to leverage information from multiple sources.
During the adaptation process, we group the source subjects based on correlation values and aim to align the moments of the target subject with each source as well as within the sources.
MSDA-TF is validated on the SEED dataset and is shown to yield promising results.
arXiv Detail & Related papers (2024-01-04T16:38:47Z) - Graph Convolutional Network with Connectivity Uncertainty for EEG-based
Emotion Recognition [20.655367200006076]
This study introduces the distribution-based uncertainty method to represent spatial dependencies and temporal-spectral relativeness in EEG signals.
The graph mixup technique is employed to enhance latent connected edges and mitigate noisy label issues.
We evaluate our approach on two widely used datasets, namely SEED and SEEDIV, for emotion recognition tasks.
arXiv Detail & Related papers (2023-10-22T03:47:11Z) - DGSD: Dynamical Graph Self-Distillation for EEG-Based Auditory Spatial
Attention Detection [49.196182908826565]
Auditory Attention Detection (AAD) aims to detect the target speaker from brain signals in a multi-speaker environment.
Current approaches primarily rely on traditional convolutional neural networks designed for processing Euclidean data like images.
This paper proposes a dynamical graph self-distillation (DGSD) approach for AAD, which does not require speech stimuli as input.
arXiv Detail & Related papers (2023-09-07T13:43:46Z) - DAT++: Spatially Dynamic Vision Transformer with Deformable Attention [87.41016963608067]
We present the Deformable Attention Transformer (DAT++), an efficient and effective vision backbone for visual recognition.
DAT++ achieves state-of-the-art results on various visual recognition benchmarks, with 85.9% ImageNet accuracy, 54.5 and 47.0 MS-COCO instance segmentation mAP, and 51.5 ADE20K semantic segmentation mIoU.
arXiv Detail & Related papers (2023-09-04T08:26:47Z) - fMRI from EEG is only Deep Learning away: the use of interpretable DL to
unravel EEG-fMRI relationships [68.8204255655161]
We present an interpretable domain grounded solution to recover the activity of several subcortical regions from multichannel EEG data.
We recover individual spatial and time-frequency patterns of scalp EEG predictive of the hemodynamic signal in the subcortical nuclei.
arXiv Detail & Related papers (2022-10-23T15:11:37Z) - PhysFormer: Facial Video-based Physiological Measurement with Temporal
Difference Transformer [55.936527926778695]
Recent deep learning approaches focus on mining subtle rPPG clues using convolutional neural networks with limited spatio-temporal receptive fields.
In this paper, we propose PhysFormer, an end-to-end video-transformer-based architecture.
arXiv Detail & Related papers (2021-11-23T18:57:11Z) - EEG-ConvTransformer for Single-Trial EEG based Visual Stimuli
Classification [5.076419064097734]
This work introduces an EEG-ConvTransformer network that is based on multi-headed self-attention.
It achieves improved classification accuracy over state-of-the-art techniques across five different visual stimuli classification tasks (see the sketch after this list).
arXiv Detail & Related papers (2021-07-08T17:22:04Z)
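For the EEG-ConvTransformer entry above, a minimal, hypothetical PyTorch sketch of the generic conv-then-self-attention pattern for single-trial EEG classification is given below; the layer sizes, class count, and pooling are illustrative assumptions rather than the published architecture.

```python
# Sketch of the conv-then-self-attention pattern used by models such as
# EEG-ConvTransformer (illustrative sizes; not the authors' code).
import torch
import torch.nn as nn


class ConvAttentionEEG(nn.Module):
    def __init__(self, n_channels=64, d_model=40, n_heads=4, n_classes=5):
        super().__init__()
        # Temporal convolution turns the raw multichannel signal into a
        # sequence of feature tokens (one token per time window).
        self.conv = nn.Conv1d(n_channels, d_model, kernel_size=25, stride=5)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=128, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                      # x: (batch, channels, times)
        tokens = self.conv(x).transpose(1, 2)  # (batch, tokens, d_model)
        tokens = self.encoder(tokens)          # multi-headed self-attention
        return self.head(tokens.mean(dim=1))   # single-trial class logits


model = ConvAttentionEEG()
print(model(torch.randn(8, 64, 500)).shape)  # torch.Size([8, 5])
```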
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.