Emotion recognition based on multi-modal electrophysiology multi-head
attention Contrastive Learning
- URL: http://arxiv.org/abs/2308.01919v1
- Date: Wed, 12 Jul 2023 05:55:40 GMT
- Title: Emotion recognition based on multi-modal electrophysiology multi-head
attention Contrastive Learning
- Authors: Yunfei Guo, Tao Zhang, Wu Huang
- Abstract summary: We propose ME-MHACL, a self-supervised contrastive learning-based multimodal emotion recognition method.
We apply the trained feature extractor to labeled electrophysiological signals and use multi-head attention mechanisms for feature fusion.
Our method outperformed existing benchmark methods in emotion recognition tasks and had good cross-individual generalization ability.
- Score: 3.2536246345549538
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Emotion recognition is an important research direction in artificial
intelligence, helping machines understand and adapt to human emotional states.
Multimodal electrophysiological (ME) signals, such as EEG, GSR,
respiration (Resp), and temperature (Temp), are effective biomarkers for
reflecting changes in human emotions. However, using electrophysiological
signals for emotion recognition faces challenges such as data scarcity,
inconsistent labeling, and difficulty in cross-individual generalization. To
address these issues, we propose ME-MHACL, a self-supervised contrastive
learning-based multimodal emotion recognition method that can learn meaningful
feature representations from unlabeled electrophysiological signals and use
multi-head attention mechanisms for feature fusion to improve recognition
performance. Our method includes two stages: first, we use the Meiosis method
to group and augment unlabeled electrophysiological samples and design a
self-supervised contrastive learning task; second, we apply the trained feature
extractor to labeled electrophysiological signals and use multi-head attention
mechanisms for feature fusion. We conducted experiments on two public datasets,
DEAP and MAHNOB-HCI, and our method outperformed existing benchmark methods in
emotion recognition tasks and had good cross-individual generalization ability.
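The second stage described above, fusing per-modality features with multi-head attention, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the random projection weights, head count, and mean-pooling readout are illustrative assumptions standing in for learned parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention_fusion(features, num_heads, rng):
    """Fuse per-modality feature vectors with multi-head self-attention.

    features: (num_modalities, d) array, one row per modality
    (e.g. EEG, GSR, Resp, Temp). Returns a single fused (d,) vector
    obtained by mean-pooling the attended modality tokens.
    """
    m, d = features.shape
    assert d % num_heads == 0
    dh = d // num_heads
    # Random projections stand in for learned query/key/value weights.
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q = (features @ Wq).reshape(m, num_heads, dh).transpose(1, 0, 2)
    K = (features @ Wk).reshape(m, num_heads, dh).transpose(1, 0, 2)
    V = (features @ Wv).reshape(m, num_heads, dh).transpose(1, 0, 2)
    attn = softmax(Q @ K.transpose(0, 2, 1) / np.sqrt(dh))  # (heads, m, m)
    out = (attn @ V).transpose(1, 0, 2).reshape(m, d)       # re-merge heads
    return out.mean(axis=0)                                 # pool modalities

rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 32))   # 4 modalities, 32-dim features each
fused = multi_head_attention_fusion(feats, num_heads=4, rng=rng)
print(fused.shape)  # (32,)
```

In a trained model the projections would be learned jointly with the downstream emotion classifier; the attention weights then let each modality selectively draw on the others before pooling.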
Related papers
- Multi-modal Mood Reader: Pre-trained Model Empowers Cross-Subject Emotion Recognition [23.505616142198487]
We develop a pre-trained-model-based Multimodal Mood Reader for cross-subject emotion recognition.
The model learns universal latent representations of EEG signals through pre-training on a large-scale dataset.
Extensive experiments on public datasets demonstrate Mood Reader's superior performance in cross-subject emotion recognition tasks.
arXiv Detail & Related papers (2024-05-28T14:31:11Z)
- A Knowledge-Driven Cross-view Contrastive Learning for EEG Representation [48.85731427874065]
This paper proposes a knowledge-driven cross-view contrastive learning framework (KDC2) to extract effective representations from EEG with limited labels.
The KDC2 method creates scalp and neural views of EEG signals, simulating the internal and external representation of brain activity.
By modeling prior neural knowledge based on neural information consistency theory, the proposed method extracts invariant and complementary neural knowledge to generate combined representations.
arXiv Detail & Related papers (2023-09-21T08:53:51Z)
- fMRI from EEG is only Deep Learning away: the use of interpretable DL to unravel EEG-fMRI relationships [68.8204255655161]
We present an interpretable domain grounded solution to recover the activity of several subcortical regions from multichannel EEG data.
We recover individual spatial and time-frequency patterns of scalp EEG predictive of the hemodynamic signal in the subcortical nuclei.
arXiv Detail & Related papers (2022-10-23T15:11:37Z)
- A Survey on Physiological Signal Based Emotion Recognition [1.52292571922932]
Existing review papers on emotion recognition based on physiological signals surveyed only the regular steps involved in the workflow of emotion recognition.
This paper reviews the effect of inter-subject data variance on emotion recognition, important data annotation techniques for emotion recognition and their comparison, data preprocessing techniques for each physiological signal, data splitting techniques for improving the generalization of emotion recognition models and different multimodal fusion techniques and their comparison.
arXiv Detail & Related papers (2022-05-20T23:59:44Z)
- Contrastive Learning of Subject-Invariant EEG Representations for Cross-Subject Emotion Recognition [9.07006689672858]
We propose a Contrastive Learning method for Inter-Subject Alignment (ISA) for reliable cross-subject emotion recognition.
ISA maximizes the similarity of EEG signals across subjects when they receive the same stimuli, in contrast to different ones.
A convolutional neural network with depthwise spatial convolution and temporal convolution layers was applied to learn inter-subject representations from raw EEG signals.
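The alignment objective summarized above can be sketched as an InfoNCE-style contrastive loss, where same-stimulus embeddings from two subjects are positives and all other pairs are negatives. This is a hedged illustration of the general technique, not the paper's exact loss; the function name, temperature value, and cosine-similarity choice are assumptions.

```python
import numpy as np

def subject_alignment_loss(emb_a, emb_b, temperature=0.5):
    """InfoNCE-style loss aligning two subjects' EEG embeddings.

    emb_a, emb_b: (n, d) embeddings from two subjects watching the same
    n stimuli; row i of each matrix corresponds to the same stimulus.
    Same-stimulus pairs are positives, all other pairs negatives.
    """
    def normalize(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)
    a, b = normalize(emb_a), normalize(emb_b)
    logits = a @ b.T / temperature  # (n, n) scaled cosine similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives sit on the diagonal (same stimulus index).
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 16))
# Perfectly aligned embeddings should score lower than unrelated ones.
aligned = subject_alignment_loss(x, x)
unrelated = subject_alignment_loss(x, rng.standard_normal((8, 16)))
print(aligned < unrelated)
```

Minimizing such a loss pushes a CNN encoder toward subject-invariant representations, since embeddings of the same stimulus must match across subjects.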
arXiv Detail & Related papers (2021-09-20T14:13:45Z)
- Attentive Cross-modal Connections for Deep Multimodal Wearable-based Emotion Recognition [7.559720049837459]
We present a novel attentive cross-modal connection to share information between convolutional neural networks.
Specifically, these connections improve emotion classification by sharing intermediate representations among EDA and ECG.
Our experiments show that the proposed approach is capable of learning strong multimodal representations and outperforms a number of baseline methods.
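One simple way to realize such a cross-modal connection is a learned gate that decides how much of each branch's intermediate representation to mix into the other. The sketch below is an illustrative gating scheme under that assumption, not the paper's exact mechanism; the gate parameterization and additive mixing are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def crossmodal_connection(h_eda, h_ecg, W_gate):
    """Exchange intermediate features between two modality branches.

    h_eda, h_ecg: (d,) intermediate activations of the EDA and ECG
    sub-networks. A gate computed from both branches (W_gate is a
    random stand-in for learned weights) decides how much of the
    other modality to mix into each branch.
    """
    gate = sigmoid(W_gate @ np.concatenate([h_eda, h_ecg]))  # (2,)
    new_eda = h_eda + gate[0] * h_ecg   # ECG information into EDA branch
    new_ecg = h_ecg + gate[1] * h_eda   # EDA information into ECG branch
    return new_eda, new_ecg

rng = np.random.default_rng(2)
d = 8
h_eda, h_ecg = rng.standard_normal(d), rng.standard_normal(d)
W_gate = rng.standard_normal((2, 2 * d)) * 0.1
e1, e2 = crossmodal_connection(h_eda, h_ecg, W_gate)
print(e1.shape, e2.shape)  # (8,) (8,)
```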
arXiv Detail & Related papers (2021-08-04T18:40:32Z)
- Emotion pattern detection on facial videos using functional statistics [62.997667081978825]
We propose a technique based on Functional ANOVA to extract significant patterns of face muscles movements.
We use a functional F-test to determine whether there are time-related differences in expressions among emotional groups.
arXiv Detail & Related papers (2021-03-01T08:31:08Z)
- Cross-individual Recognition of Emotions by a Dynamic Entropy based on Pattern Learning with EEG features [2.863100352151122]
We propose a deep-learning framework denoted as a dynamic entropy-based pattern learning (DEPL) to abstract informative indicators pertaining to the neurophysiological features among multiple individuals.
DEPL enhanced the capability of representations generated by a deep convolutional neural network by modelling the interdependencies between the cortical locations of dynamical entropy based features.
arXiv Detail & Related papers (2020-09-26T07:22:07Z)
- A Novel Transferability Attention Neural Network Model for EEG Emotion Recognition [51.203579838210885]
We propose a transferable attention neural network (TANN) for EEG emotion recognition.
TANN learns emotionally discriminative information by adaptively highlighting transferable EEG brain-region data and samples.
This can be implemented by measuring the outputs of multiple brain-region-level discriminators and one single sample-level discriminator.
arXiv Detail & Related papers (2020-09-21T02:42:30Z)
- Video-based Remote Physiological Measurement via Cross-verified Feature Disentangling [121.50704279659253]
We propose a cross-verified feature disentangling strategy to disentangle the physiological features with non-physiological representations.
We then use the distilled physiological features for robust multi-task physiological measurements.
The disentangled features are finally used for the joint prediction of multiple physiological signals like average HR values and rPPG signals.
arXiv Detail & Related papers (2020-07-16T09:39:17Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that the machine should be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.