GMSS: Graph-Based Multi-Task Self-Supervised Learning for EEG Emotion
Recognition
- URL: http://arxiv.org/abs/2205.01030v1
- Date: Tue, 12 Apr 2022 03:37:21 GMT
- Title: GMSS: Graph-Based Multi-Task Self-Supervised Learning for EEG Emotion
Recognition
- Authors: Yang Li, Ji Chen, Fu Li, Boxun Fu, Hao Wu, Youshuo Ji, Yijin Zhou, Yi
Niu, Guangming Shi, Wenming Zheng
- Abstract summary: This paper proposes a graph-based multi-task self-supervised learning model (GMSS) for EEG emotion recognition.
By learning from multiple tasks simultaneously, GMSS can find a representation that captures all of the tasks.
Experiments on SEED, SEED-IV, and MPED datasets show that the proposed model has remarkable advantages in learning more discriminative and general features for EEG emotional signals.
- Score: 48.02958969607864
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Previous electroencephalogram (EEG) emotion recognition has relied on
single-task learning, which may lead to overfitting and to learned emotion
features that lack generalization. In this paper, a graph-based multi-task
self-supervised learning model (GMSS) for EEG emotion recognition is proposed.
GMSS learns more general representations by integrating multiple
self-supervised tasks: spatial and frequency jigsaw puzzle tasks and a
contrastive learning task. By learning from multiple tasks simultaneously,
GMSS can find a representation that captures all of the tasks, thereby
decreasing the chance of overfitting on the original task, i.e., the emotion
recognition task. In particular, the spatial jigsaw puzzle task aims to capture
the intrinsic spatial relationships among different brain regions. Considering
the importance of frequency information in EEG emotional signals, the goal of
the frequency jigsaw puzzle task is to identify the frequency bands most
relevant to EEG emotion recognition. To further regularize the learned features
and encourage the network to learn inherent representations, a contrastive
learning task is adopted by mapping the transformed data into a common feature
space. The performance of the proposed GMSS is compared with several popular
unsupervised and supervised methods. Experiments on the SEED, SEED-IV, and MPED
datasets show that the proposed model has remarkable advantages in learning
more discriminative and general features for EEG emotional signals.
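The listing does not include code, but the three pretext tasks described in the abstract map naturally onto a small multi-task objective. The following is a minimal sketch only, assuming a PyTorch encoder and SEED-style features shaped (batch, channels, frequency bands); the function names, the fixed pool of permutations, and the loss weights are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def spatial_jigsaw(x, n_perms=8):
    """Shuffle the EEG channel axis with one of `n_perms` fixed permutations
    (a stand-in for region-level spatial jigsaw) and return the permuted
    signal plus the permutation index to be predicted."""
    perm_id = torch.randint(n_perms, (1,)).item()
    g = torch.Generator().manual_seed(perm_id)   # same id -> same permutation
    order = torch.randperm(x.size(1), generator=g)
    return x[:, order, :], perm_id

def frequency_jigsaw(x, n_perms=8):
    """Shuffle the frequency-band axis and return the permutation index."""
    perm_id = torch.randint(n_perms, (1,)).item()
    g = torch.Generator().manual_seed(perm_id)
    order = torch.randperm(x.size(2), generator=g)
    return x[:, :, order], perm_id

def nt_xent(z1, z2, tau=0.5):
    """Normalized-temperature cross-entropy loss pulling two views of the
    same sample together in a common feature space."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau              # (B, B) cosine similarities
    targets = torch.arange(z1.size(0))      # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

def gmss_step(encoder, spatial_head, freq_head, proj_head, x,
              w_spatial=1.0, w_freq=1.0, w_contrast=1.0):
    """One self-supervised step: predict which spatial / frequency permutation
    was applied and align the two transformed views contrastively."""
    xs, ys = spatial_jigsaw(x)
    xf, yf = frequency_jigsaw(x)

    hs, hf = encoder(xs), encoder(xf)
    batch = x.size(0)
    loss_spatial = F.cross_entropy(
        spatial_head(hs), torch.full((batch,), ys, dtype=torch.long))
    loss_freq = F.cross_entropy(
        freq_head(hf), torch.full((batch,), yf, dtype=torch.long))
    loss_contrast = nt_xent(proj_head(hs), proj_head(hf))

    return (w_spatial * loss_spatial
            + w_freq * loss_freq
            + w_contrast * loss_contrast)
```

In the actual GMSS model the encoder is a graph-based network over the EEG electrode layout, and the permutations are defined over brain regions and frequency bands rather than raw channel indices; only the overall multi-task structure is reflected in this sketch.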
Related papers
- Self-supervised Gait-based Emotion Representation Learning from Selective Strongly Augmented Skeleton Sequences [4.740624855896404]
We propose a contrastive learning framework utilizing selective strong augmentation for self-supervised gait-based emotion representation.
Our approach is validated on the Emotion-Gait (E-Gait) and Emilya datasets and outperforms the state-of-the-art methods under different evaluation protocols.
arXiv Detail & Related papers (2024-05-08T09:13:10Z) - GPT as Psychologist? Preliminary Evaluations for GPT-4V on Visual Affective Computing [74.68232970965595]
Multimodal large language models (MLLMs) are designed to process and integrate information from multiple sources, such as text, speech, images, and videos.
This paper assesses MLLMs on 5 crucial abilities for affective computing, spanning visual affective tasks and reasoning tasks.
arXiv Detail & Related papers (2024-03-09T13:56:25Z) - EMERSK -- Explainable Multimodal Emotion Recognition with Situational
Knowledge [0.0]
We present Explainable Multimodal Emotion Recognition with Situational Knowledge (EMERSK)
EMERSK is a general system for human emotion recognition and explanation using visual information.
Our system can handle multiple modalities, including facial expressions, posture, and gait in a flexible and modular manner.
arXiv Detail & Related papers (2023-06-14T17:52:37Z) - EmotionIC: emotional inertia and contagion-driven dependency modeling for emotion recognition in conversation [34.24557248359872]
We propose an emotional inertia and contagion-driven dependency modeling approach (EmotionIC) for ERC task.
Our EmotionIC consists of three main components, i.e., Identity Masked Multi-Head Attention (IMMHA), Dialogue-based Gated Recurrent Unit (DiaGRU), and Skip-chain Conditional Random Field (SkipCRF).
Experimental results show that our method can significantly outperform the state-of-the-art models on four benchmark datasets.
arXiv Detail & Related papers (2023-03-20T13:58:35Z) - Self-Supervised Graph Neural Network for Multi-Source Domain Adaptation [51.21190751266442]
Domain adaptation (DA) addresses scenarios in which the test data does not fully follow the distribution of the training data.
Self-supervised learning, which learns from large-scale unlabeled samples, has become a new trend in deep learning.
We propose a novel Self-Supervised Graph neural network (SSG) to enable more effective inter-task information exchange and knowledge sharing.
arXiv Detail & Related papers (2022-04-08T03:37:56Z) - Multimodal Emotion Recognition using Transfer Learning from Speaker
Recognition and BERT-based models [53.31917090073727]
We propose a neural network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from speech and text modalities.
We evaluate the effectiveness of our proposed multimodal approach on the interactive emotional dyadic motion capture dataset.
arXiv Detail & Related papers (2022-02-16T00:23:42Z) - Improved Speech Emotion Recognition using Transfer Learning and
Spectrogram Augmentation [56.264157127549446]
Speech emotion recognition (SER) is a challenging task that plays a crucial role in natural human-computer interaction.
One of the main challenges in SER is data scarcity.
We propose a transfer learning strategy combined with spectrogram augmentation.
arXiv Detail & Related papers (2021-08-05T10:39:39Z) - Distribution Matching for Heterogeneous Multi-Task Learning: a
Large-scale Face Study [75.42182503265056]
Multi-Task Learning has emerged as a methodology in which multiple tasks are jointly learned by a shared learning algorithm.
We deal with heterogeneous MTL, simultaneously addressing detection, classification & regression problems.
We build FaceBehaviorNet, the first framework for large-scale face analysis, by jointly learning all facial behavior tasks.
arXiv Detail & Related papers (2021-05-08T22:26:52Z) - Self-supervised ECG Representation Learning for Emotion Recognition [25.305949034527202]
We exploit a self-supervised deep multi-task learning framework for electrocardiogram (ECG) -based emotion recognition.
We show that the proposed solution considerably improves the performance compared to a network trained using fully-supervised learning.
arXiv Detail & Related papers (2020-02-04T17:15:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.