Positional-Spectral-Temporal Attention in 3D Convolutional Neural
Networks for EEG Emotion Recognition
- URL: http://arxiv.org/abs/2110.09955v1
- Date: Wed, 13 Oct 2021 12:03:36 GMT
- Title: Positional-Spectral-Temporal Attention in 3D Convolutional Neural
Networks for EEG Emotion Recognition
- Authors: Jiyao Liu, Yanxi Zhao, Hao Wu, Dongmei Jiang
- Abstract summary: We propose a novel structure to explore the informative EEG features for emotion recognition.
The PST-Attention module consists of Positional, Spectral and Temporal Attention modules.
Our method is adaptive and efficient, and can be fitted into 3D Convolutional Neural Networks (3D-CNNs) as a plug-in module.
- Score: 5.995842375590237
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recognizing human emotions plays a critical role in our daily
communication. Neuroscience has demonstrated that different emotional states
produce different degrees of activation across brain regions, EEG frequency
bands, and time intervals. In this paper, we propose a novel structure to
explore informative EEG features for emotion recognition. The proposed
module, denoted PST-Attention, consists of Positional, Spectral, and Temporal
Attention modules that extract more discriminative EEG features.
Specifically, the Positional Attention module captures the regions activated
by different emotions in the spatial dimension. The Spectral and Temporal
Attention modules assign weights to different frequency bands and temporal
slices, respectively. Our method is adaptive as well as efficient, and can be
fitted into 3D Convolutional Neural Networks (3D-CNNs) as a plug-in module.
We conduct experiments on two real-world datasets. A 3D-CNN combined with our
module achieves promising results, demonstrating that PST-Attention is able
to capture stable patterns for emotion recognition from EEG.
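As an illustration, the three attention branches can be sketched as axis-wise softmax re-weighting of a 3D-CNN feature map. This is a simplified interpretation of the abstract, not the authors' actual implementation; the tensor layout (bands, time, spatial positions) and the mean-pooled descriptors are assumptions:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def axis_attention(feat, axis):
    """Pool the feature map to one descriptor per slice along `axis`,
    softmax it into attention weights, and rescale the slices.
    Weights are multiplied by the axis length so that a uniform
    descriptor leaves the features unchanged."""
    other = tuple(i for i in range(feat.ndim) if i != axis)
    desc = feat.mean(axis=other)              # one score per slice
    weights = softmax(desc) * feat.shape[axis]
    shape = [1] * feat.ndim
    shape[axis] = feat.shape[axis]
    return feat * weights.reshape(shape)

def pst_attention(feat):
    """feat: (frequency bands, temporal slices, spatial positions)."""
    feat = axis_attention(feat, axis=2)  # Positional Attention
    feat = axis_attention(feat, axis=0)  # Spectral Attention
    feat = axis_attention(feat, axis=1)  # Temporal Attention
    return feat
```

In the actual module the descriptors would be produced by learned layers rather than plain means, but the re-weighting pattern, and the reason it slots into a 3D-CNN as a plug-in (the output shape matches the input shape), is the same.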
Related papers
- Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE).
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z) - Inter Subject Emotion Recognition Using Spatio-Temporal Features From
EEG Signal [4.316570025748204]
This work presents an easy-to-implement emotion recognition model that classifies emotions from EEG signals subject-independently.
The model combines regular, depthwise, and separable CNN convolution layers to classify the emotions.
The model achieved an accuracy of 73.04%.
arXiv Detail & Related papers (2023-05-27T07:43:19Z) - EmotionIC: emotional inertia and contagion-driven dependency modeling for emotion recognition in conversation [34.24557248359872]
We propose an emotional inertia and contagion-driven dependency modeling approach (EmotionIC) for ERC task.
Our EmotionIC consists of three main components, i.e., Identity Masked Multi-Head Attention (IMMHA), Dialogue-based Gated Recurrent Unit (DiaGRU), and Skip-chain Conditional Random Field (SkipCRF).
Experimental results show that our method can significantly outperform the state-of-the-art models on four benchmark datasets.
arXiv Detail & Related papers (2023-03-20T13:58:35Z) - Controllable Radiance Fields for Dynamic Face Synthesis [125.48602100893845]
We study how to explicitly control generative model synthesis of face dynamics exhibiting non-rigid motion, using a Controllable Radiance Field (CoRF).
On head image/video data we show that CoRFs are 3D-aware while enabling editing of identity, viewing directions, and motion.
arXiv Detail & Related papers (2022-10-11T23:17:31Z) - GMSS: Graph-Based Multi-Task Self-Supervised Learning for EEG Emotion
Recognition [48.02958969607864]
This paper proposes a graph-based multi-task self-supervised learning model (GMSS) for EEG emotion recognition.
By learning from multiple tasks simultaneously, GMSS can find a representation that captures all of the tasks.
Experiments on SEED, SEED-IV, and MPED datasets show that the proposed model has remarkable advantages in learning more discriminative and general features for EEG emotional signals.
arXiv Detail & Related papers (2022-04-12T03:37:21Z) - EEGminer: Discovering Interpretable Features of Brain Activity with
Learnable Filters [72.19032452642728]
We propose a novel differentiable EEG decoding pipeline consisting of learnable filters and a pre-determined feature extraction module.
We demonstrate the utility of our model towards emotion recognition from EEG signals on the SEED dataset and on a new EEG dataset of unprecedented size.
The discovered features align with previous neuroscience studies and offer new insights, such as marked differences in the functional connectivity profile between left and right temporal areas during music listening.
arXiv Detail & Related papers (2021-10-19T14:22:04Z) - TSception: Capturing Temporal Dynamics and Spatial Asymmetry from EEG
for Emotion Recognition [9.825158483198113]
TSception is a multi-scale convolutional neural network that learns temporal dynamics from affective electroencephalogram (EEG) signals.
The proposed method can be utilized in emotion regulation therapy for emotion recognition in the future.
arXiv Detail & Related papers (2021-04-07T06:10:01Z) - 4D Attention-based Neural Network for EEG Emotion Recognition [1.5749416770494706]
We present a novel method, called four-dimensional attention-based neural network (4D-aNN) for EEG emotion recognition.
The proposed 4D-aNN adopts spectral and spatial attention mechanisms to adaptively assign the weights of different brain regions and frequency bands.
Our model achieves state-of-the-art performance on the SEED dataset under intra-subject splitting.
arXiv Detail & Related papers (2021-01-14T07:41:48Z) - A Novel Transferability Attention Neural Network Model for EEG Emotion
Recognition [51.203579838210885]
We propose a transferable attention neural network (TANN) for EEG emotion recognition.
TANN learns emotional discriminative information by adaptively highlighting transferable EEG brain-region data and samples.
This can be implemented by measuring the outputs of multiple brain-region-level discriminators and one single sample-level discriminator.
arXiv Detail & Related papers (2020-09-21T02:42:30Z) - An End-to-End Visual-Audio Attention Network for Emotion Recognition in
User-Generated Videos [64.91614454412257]
We propose to recognize video emotions in an end-to-end manner based on convolutional neural networks (CNNs).
Specifically, we develop a deep Visual-Audio Attention Network (VAANet), a novel architecture that integrates spatial, channel-wise, and temporal attentions into a visual 3D CNN and temporal attentions into an audio 2D CNN.
arXiv Detail & Related papers (2020-02-12T15:33:59Z)
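As a rough illustration of the learnable-filter idea in the EEGminer entry above, a differentiable band-pass filter can be parameterised by just a centre frequency and a bandwidth and applied in the frequency domain. This is a sketch under assumed parameter names, not EEGminer's actual filter family:

```python
import numpy as np

def gaussian_bandpass(x, fs, center_hz, width_hz):
    """Apply a Gaussian gain curve in the frequency domain.
    center_hz and width_hz are the two scalars a model could tune by
    gradient descent, since every step here is differentiable.
    x: 1-D signal sampled at fs Hz."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    gain = np.exp(-0.5 * ((freqs - center_hz) / width_hz) ** 2)
    return np.fft.irfft(np.fft.rfft(x) * gain, n=len(x))

# Example: isolate a 10 Hz alpha-band component from a mixed signal.
fs = 128
t = np.arange(256) / fs
mixed = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 40 * t)
alpha = gaussian_bandpass(mixed, fs, center_hz=10.0, width_hz=2.0)
```

Stacking a bank of such filters with per-filter learnable centres and widths, followed by a fixed feature extractor, gives the general shape of a "learnable filters + pre-determined feature extraction" pipeline.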
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.