EmT: A Novel Transformer for Generalized Cross-subject EEG Emotion Recognition
- URL: http://arxiv.org/abs/2406.18345v1
- Date: Wed, 26 Jun 2024 13:42:11 GMT
- Title: EmT: A Novel Transformer for Generalized Cross-subject EEG Emotion Recognition
- Authors: Yi Ding, Chengxuan Tong, Shuailei Zhang, Muyun Jiang, Yong Li, Kevin Lim Jun Liang, Cuntai Guan
- Abstract summary: We introduce a novel transformer model called emotion transformer (EmT).
EmT is designed to excel in both generalized cross-subject EEG emotion classification and regression tasks.
Experiments on four publicly available datasets show that EmT outperforms the baseline methods on both EEG emotion classification and regression tasks.
- Score: 11.027908624804535
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Integrating prior knowledge of neurophysiology into neural network architecture enhances the performance of emotion decoding. While numerous techniques emphasize learning spatial and short-term temporal patterns, there has been limited emphasis on capturing the vital long-term contextual information associated with emotional cognitive processes. In order to address this discrepancy, we introduce a novel transformer model called emotion transformer (EmT). EmT is designed to excel in both generalized cross-subject EEG emotion classification and regression tasks. In EmT, EEG signals are transformed into a temporal graph format, creating a sequence of EEG feature graphs using a temporal graph construction module (TGC). A novel residual multi-view pyramid GCN module (RMPG) is then proposed to learn dynamic graph representations for each EEG feature graph within the series, and the learned representations of each graph are fused into one token. Furthermore, we design a temporal contextual transformer module (TCT) with two types of token mixers to learn the temporal contextual information. Finally, the task-specific output module (TSO) generates the desired outputs. Experiments on four publicly available datasets show that EmT achieves higher results than the baseline methods for both EEG emotion classification and regression tasks. The code is available at https://github.com/yi-ding-cs/EmT.
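The abstract describes a four-stage pipeline: TGC slices the EEG into a sequence of feature graphs, RMPG runs graph convolutions over each graph and fuses it into one token, TCT mixes the token sequence with attention, and TSO produces the task output. The following is a minimal NumPy sketch of that flow, not the authors' implementation (which is at the linked repository); all shapes, the random feature projection, the single adjacency, and the single-head token mixer are illustrative assumptions standing in for the paper's multi-view pyramid GCN and dual token mixers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: 32 channels, 512 samples, 8 temporal segments, 16 features.
n_ch, n_samp, n_seg, d = 32, 512, 8, 16
seg_len = n_samp // n_seg

eeg = rng.standard_normal((n_ch, n_samp))

# TGC (sketch): cut the recording into a temporal sequence of feature graphs;
# here each node's features are a random projection of its segment samples.
proj = rng.standard_normal((seg_len, d)) / np.sqrt(seg_len)
graphs = np.stack([eeg[:, i * seg_len:(i + 1) * seg_len] @ proj
                   for i in range(n_seg)])            # (n_seg, n_ch, d)

# RMPG (sketch): one row-normalised adjacency and a single GCN layer per
# graph, then mean-pool the nodes so each graph collapses to one token.
A = np.abs(rng.standard_normal((n_ch, n_ch)))
A = A / A.sum(axis=1, keepdims=True)
W = rng.standard_normal((d, d)) / np.sqrt(d)
tokens = np.tanh(A @ graphs @ W).mean(axis=1)         # (n_seg, d)

# TCT (sketch): a single-head self-attention token mixer over the sequence,
# standing in for the paper's two temporal-contextual token mixers.
def self_attention(x):
    scores = x @ x.T / np.sqrt(x.shape[1])
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn = attn / attn.sum(axis=1, keepdims=True)     # softmax rows
    return attn @ x

ctx = self_attention(tokens)                          # (n_seg, d)

# TSO (sketch): pool over time, then a linear head (e.g. 2 emotion classes
# for classification, or 1 output for regression).
logits = ctx.mean(axis=0) @ rng.standard_normal((d, 2))
print(tokens.shape, ctx.shape, logits.shape)
```

The key structural point the sketch preserves is that spatial (graph) processing happens per segment before any temporal mixing, so the transformer operates on one token per time step rather than on raw channels.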
Related papers
- Automatic Graph Topology-Aware Transformer [50.2807041149784]
We build a comprehensive graph Transformer search space with the micro-level and macro-level designs.
EGTAS evolves graph Transformer topologies at the macro level and graph-aware strategies at the micro level.
We demonstrate the efficacy of EGTAS across a range of graph-level and node-level tasks.
arXiv Detail & Related papers (2024-05-30T07:44:31Z)
- From Static to Dynamic: Adapting Landmark-Aware Image Models for Facial Expression Recognition in Videos [88.08209394979178]
Dynamic facial expression recognition (DFER) in the wild is still hindered by data limitations.
We introduce a novel Static-to-Dynamic model (S2D) that leverages existing SFER knowledge and dynamic information implicitly encoded in extracted facial landmark-aware features.
arXiv Detail & Related papers (2023-12-09T03:16:09Z)
- MASA-TCN: Multi-anchor Space-aware Temporal Convolutional Neural Networks for Continuous and Discrete EEG Emotion Recognition [11.882642356358883]
We propose a novel model, named MASA-TCN, for EEG emotion regression and classification tasks.
The space-aware temporal layer enables TCN to additionally learn from spatial relations among EEG electrodes.
Experiments show MASA-TCN outperforms the state-of-the-art methods on both EEG emotion regression and classification tasks.
arXiv Detail & Related papers (2023-08-30T04:49:24Z)
- A Hybrid End-to-End Spatio-Temporal Attention Neural Network with Graph-Smooth Signals for EEG Emotion Recognition [1.6328866317851187]
We introduce a deep neural network that acquires interpretable representations through a hybrid structure of spatio-temporal encoding and recurrent attention blocks.
We demonstrate that our proposed architecture exceeds state-of-the-art results for emotion classification on the publicly available DEAP dataset.
arXiv Detail & Related papers (2023-07-06T15:35:14Z)
- TransformerG2G: Adaptive time-stepping for learning temporal graph embeddings using transformers [2.2120851074630177]
We develop a graph embedding model with uncertainty quantification, TransformerG2G, to learn temporal dynamics of temporal graphs.
Our experiments demonstrate that the proposed TransformerG2G model outperforms conventional multi-step methods.
By examining the attention weights, we can uncover temporal dependencies, identify influential elements, and gain insights into the complex interactions within the graph structure.
arXiv Detail & Related papers (2023-07-05T18:34:22Z)
- Graph Decision Transformer [83.76329715043205]
Graph Decision Transformer (GDT) is a novel offline reinforcement learning approach.
GDT models the input sequence into a causal graph to capture potential dependencies between fundamentally different concepts.
Our experiments show that GDT matches or surpasses the performance of state-of-the-art offline RL methods on image-based Atari and OpenAI Gym.
arXiv Detail & Related papers (2023-03-07T09:10:34Z)
- Transformer-Based Self-Supervised Learning for Emotion Recognition [0.0]
We propose to use a Transformer-based model to process electrocardiograms (ECG) for emotion recognition.
To overcome the relatively small size of datasets with emotional labels, we employ self-supervised learning.
We show that our approach reaches state-of-the-art performances for emotion recognition using ECG signals on AMIGOS.
arXiv Detail & Related papers (2022-04-08T07:14:55Z)
- PhysFormer: Facial Video-based Physiological Measurement with Temporal Difference Transformer [55.936527926778695]
Recent deep learning approaches focus on mining subtle rPPG clues using convolutional neural networks with limited spatio-temporal receptive fields.
In this paper, we propose the PhysFormer, an end-to-end video transformer based architecture.
arXiv Detail & Related papers (2021-11-23T18:57:11Z)
- Transformer-based Spatial-Temporal Feature Learning for EEG Decoding [4.8276709243429]
We propose a novel EEG decoding method that mainly relies on the attention mechanism.
Our method reaches state-of-the-art performance on EEG multi-class classification with fewer parameters.
It has good potential to promote the practicality of brain-computer interfaces (BCI).
arXiv Detail & Related papers (2021-06-11T00:48:18Z)
- TCL: Transformer-based Dynamic Graph Modelling via Contrastive Learning [87.38675639186405]
We propose a novel graph neural network approach, called TCL, which deals with the dynamically-evolving graph in a continuous-time fashion.
To the best of our knowledge, this is the first attempt to apply contrastive learning to representation learning on dynamic graphs.
arXiv Detail & Related papers (2021-05-17T15:33:25Z)
- Dynamic Graph Modeling of Simultaneous EEG and Eye-tracking Data for Reading Task Identification [79.41619843969347]
We present a new approach, which we call AdaGTCN, for identifying human reader intent from electroencephalogram (EEG) and eye movement (EM) data.
Our method, Adaptive Graph Temporal Convolution Network (AdaGTCN), uses an Adaptive Graph Learning Layer and Deep Neighborhood Graph Convolution Layer.
We compare our approach with several baselines to report an improvement of 6.29% on the ZuCo 2.0 dataset, along with extensive ablation experiments.
arXiv Detail & Related papers (2021-02-21T18:19:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.