Emotion recognition based on multi-modal electrophysiology multi-head attention Contrastive Learning
- URL: http://arxiv.org/abs/2308.01919v1
- Date: Wed, 12 Jul 2023 05:55:40 GMT
- Title: Emotion recognition based on multi-modal electrophysiology multi-head attention Contrastive Learning
- Authors: Yunfei Guo, Tao Zhang, Wu Huang
- Abstract summary: We propose ME-MHACL, a self-supervised contrastive learning-based multimodal emotion recognition method.
We apply the trained feature extractor to labeled electrophysiological signals and use multi-head attention mechanisms for feature fusion.
Our method outperformed existing benchmark methods in emotion recognition tasks and had good cross-individual generalization ability.
- Score: 3.2536246345549538
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Emotion recognition is an important research direction in artificial
intelligence, helping machines understand and adapt to human emotional states.
Multimodal electrophysiological (ME) signals, such as EEG, GSR, respiration
(Resp), and temperature (Temp), are effective biomarkers for reflecting changes
in human emotions. However, using electrophysiological signals for emotion
recognition faces challenges such as data scarcity, inconsistent labeling, and
difficulty in cross-individual generalization. To address these issues, we
propose ME-MHACL, a self-supervised contrastive learning-based multimodal
emotion recognition method that can learn meaningful feature representations
from unlabeled electrophysiological signals and uses multi-head attention
mechanisms for feature fusion to improve recognition performance. Our method
includes two stages: first, we use the Meiosis method to perform group sampling
and augmentation on unlabeled electrophysiological signals and design a
self-supervised contrastive learning task; second, we apply the trained feature
extractor to labeled electrophysiological signals and use multi-head attention
mechanisms for feature fusion. We conducted experiments on two public datasets,
DEAP and MAHNOB-HCI, and our method outperformed existing benchmark methods in
emotion recognition tasks and showed good cross-individual generalization
ability.
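The abstract's two-stage recipe lends itself to a short sketch: stage one pretrains a feature extractor with a contrastive objective on augmented views of unlabeled signals; stage two fuses the extracted per-modality features with multi-head attention before classification. The PyTorch sketch below is illustrative only, not the authors' implementation: `InfoNCELoss`, `AttentionFusion`, and all dimensions are assumptions, and a generic paired-view augmentation stands in for the paper's Meiosis grouping step.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class InfoNCELoss(nn.Module):
    """Generic InfoNCE contrastive loss over a batch of positive pairs
    (illustrative stand-in for the paper's contrastive pretraining task)."""

    def __init__(self, temperature: float = 0.1):
        super().__init__()
        self.temperature = temperature

    def forward(self, z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
        # z1, z2: (batch, dim) embeddings of two augmented views of the same trials
        z1 = F.normalize(z1, dim=-1)
        z2 = F.normalize(z2, dim=-1)
        logits = z1 @ z2.t() / self.temperature          # pairwise cosine similarities
        targets = torch.arange(z1.size(0), device=z1.device)
        return F.cross_entropy(logits, targets)          # diagonal pairs are positives


class AttentionFusion(nn.Module):
    """Fuses per-modality feature vectors (e.g. EEG, GSR, Resp, Temp) with
    multi-head self-attention, then mean-pools over modalities to classify."""

    def __init__(self, dim: int = 128, heads: int = 4, n_classes: int = 2):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, n_modalities, dim), one extracted feature vector per modality
        fused, _ = self.attn(feats, feats, feats)        # cross-modality attention
        return self.head(fused.mean(dim=1))              # pool over modalities, classify


# Illustrative usage with random stand-ins for extracted features.
batch, n_modalities, dim = 32, 4, 128
z1, z2 = torch.randn(batch, dim), torch.randn(batch, dim)
stage1_loss = InfoNCELoss()(z1, z2)                      # stage 1: contrastive pretraining
logits = AttentionFusion()(torch.randn(batch, n_modalities, dim))  # stage 2: fusion
```

In the paper's setting, the stage-one encoder would presumably be carried over (frozen or fine-tuned) before the attention fusion head is trained on the labeled DEAP or MAHNOB-HCI trials.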
Related papers
- Smile upon the Face but Sadness in the Eyes: Emotion Recognition based on Facial Expressions and Eye Behaviors [63.194053817609024]
We introduce eye behaviors as important emotional cues for the creation of a new Eye-behavior-aided Multimodal Emotion Recognition dataset.
For the first time, we provide annotations for both Emotion Recognition (ER) and Facial Expression Recognition (FER) in the EMER dataset.
We specifically design a new EMERT architecture to concurrently enhance performance in both ER and FER.
arXiv Detail & Related papers (2024-11-08T04:53:55Z)
- Complex Emotion Recognition System using basic emotions via Facial Expression, EEG, and ECG Signals: a review [1.8310098790941458]
The Complex Emotion Recognition System (CERS) deciphers complex emotional states by examining combinations of basic emotions expressed, their interconnections, and the dynamic variations.
The development of AI systems for discerning complex emotions poses a substantial challenge with significant implications for affective computing.
Incorporating physiological signals such as Electrocardiogram (ECG) and Electroencephalogram (EEG) can notably enhance CERS.
arXiv Detail & Related papers (2024-09-09T05:06:10Z)
- Decoding Human Emotions: Analyzing Multi-Channel EEG Data using LSTM Networks [0.0]
This study aims to understand and improve the predictive accuracy of emotional state classification by applying a Long Short-Term Memory (LSTM) network to analyze EEG signals.
Using DEAP, a popular dataset of multi-channel EEG recordings, we leverage the ability of LSTM networks to handle temporal dependencies within EEG signal data (see the minimal sketch after this list).
We obtain accuracies of 89.89%, 90.33%, 90.70%, and 90.54% for arousal, valence, dominance, and likeness, respectively, demonstrating significant improvements in emotion recognition model capabilities.
arXiv Detail & Related papers (2024-08-19T18:10:47Z)
- Multi-modal Mood Reader: Pre-trained Model Empowers Cross-Subject Emotion Recognition [23.505616142198487]
We develop a pre-trained-model-based Multimodal Mood Reader for cross-subject emotion recognition.
The model learns universal latent representations of EEG signals through pre-training on large-scale datasets.
Extensive experiments on public datasets demonstrate Mood Reader's superior performance in cross-subject emotion recognition tasks.
arXiv Detail & Related papers (2024-05-28T14:31:11Z)
- A Knowledge-Driven Cross-view Contrastive Learning for EEG Representation [48.85731427874065]
This paper proposes a knowledge-driven cross-view contrastive learning framework (KDC2) to extract effective representations from EEG with limited labels.
The KDC2 method creates scalp and neural views of EEG signals, simulating the internal and external representation of brain activity.
By modeling prior neural knowledge based on neural information consistency theory, the proposed method extracts invariant and complementary neural knowledge to generate combined representations.
arXiv Detail & Related papers (2023-09-21T08:53:51Z)
- A Survey on Physiological Signal Based Emotion Recognition [1.52292571922932]
Existing review papers on emotion recognition based on physiological signals surveyed only the regular steps involved in the workflow of emotion recognition.
This paper reviews the effect of inter-subject data variance on emotion recognition; important data annotation techniques for emotion recognition and their comparison; data preprocessing techniques for each physiological signal; data splitting techniques for improving the generalization of emotion recognition models; and different multimodal fusion techniques and their comparison.
arXiv Detail & Related papers (2022-05-20T23:59:44Z)
- Emotion pattern detection on facial videos using functional statistics [62.997667081978825]
We propose a technique based on Functional ANOVA to extract significant patterns of facial muscle movements.
We determine whether there are time-related differences in expressions among emotional groups by using a functional F-test.
arXiv Detail & Related papers (2021-03-01T08:31:08Z)
- Cross-individual Recognition of Emotions by a Dynamic Entropy based on Pattern Learning with EEG features [2.863100352151122]
We propose a deep-learning framework, dynamic entropy-based pattern learning (DEPL), to abstract informative indicators of the neurophysiological features shared among multiple individuals.
DEPL enhances the representations generated by a deep convolutional neural network by modelling the interdependencies between the cortical locations of dynamic entropy-based features.
arXiv Detail & Related papers (2020-09-26T07:22:07Z)
- A Novel Transferability Attention Neural Network Model for EEG Emotion Recognition [51.203579838210885]
We propose a transferable attention neural network (TANN) for EEG emotion recognition.
TANN learns emotional discriminative information by adaptively highlighting transferable EEG brain-region data and samples.
This is implemented by measuring the outputs of multiple brain-region-level discriminators and a single sample-level discriminator.
arXiv Detail & Related papers (2020-09-21T02:42:30Z)
- Video-based Remote Physiological Measurement via Cross-verified Feature Disentangling [121.50704279659253]
We propose a cross-verified feature disentangling strategy to disentangle physiological features from non-physiological representations.
We then use the distilled physiological features for robust multi-task physiological measurements.
The disentangled features are finally used for the joint prediction of multiple physiological signals, such as average HR values and rPPG signals.
arXiv Detail & Related papers (2020-07-16T09:39:17Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that the machine be able to recognize the user's emotional state with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
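As referenced in the LSTM entry above, here is a minimal sketch of temporal EEG classification with an LSTM. All names and dimensions (`EEGLSTMClassifier`, 32 channels, 128-sample windows) are illustrative assumptions and do not reproduce the cited paper's architecture.

```python
import torch
import torch.nn as nn


class EEGLSTMClassifier(nn.Module):
    """Minimal LSTM classifier for windowed multi-channel EEG; a generic
    illustration of the temporal-modeling idea, not the cited paper's code."""

    def __init__(self, n_channels: int = 32, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels); the last hidden state summarizes the window
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])


# Example: 32-channel, 128-sample windows (placeholder dimensions).
model = EEGLSTMClassifier()
logits = model(torch.randn(8, 128, 32))   # -> (8, n_classes)
```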