Cross-individual Recognition of Emotions by a Dynamic Entropy based on
Pattern Learning with EEG features
- URL: http://arxiv.org/abs/2009.12525v2
- Date: Tue, 25 May 2021 08:03:28 GMT
- Title: Cross-individual Recognition of Emotions by a Dynamic Entropy based on
Pattern Learning with EEG features
- Authors: Xiaolong Zhong and Zhong Yin
- Abstract summary: We propose a deep-learning framework denoted as a dynamic entropy-based pattern learning (DEPL) to abstract informative indicators pertaining to the neurophysiological features among multiple individuals.
DEPL enhances the representations generated by a deep convolutional neural network by modelling the interdependencies between the cortical locations of dynamic entropy-based features.
- Score: 2.863100352151122
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Use of the electroencephalogram (EEG) and machine learning approaches to
recognize emotions can facilitate affective human-computer interactions.
However, the variability of EEG data across individuals constitutes an obstacle
for cross-individual EEG feature modelling and classification. To address this
issue, we propose a deep-learning framework denoted as dynamic entropy-based
pattern learning (DEPL) to abstract informative indicators pertaining to the
neurophysiological features among multiple individuals. DEPL enhances the
representations generated by a deep convolutional neural network by modelling
the interdependencies between the cortical locations of dynamic entropy-based
features. The effectiveness of DEPL has been validated with two public
databases, commonly referred to as the DEAP and MAHNOB-HCI multimodal tagging
databases. Specifically, the leave-one-subject-out training and testing
paradigm has been applied. Numerous experiments on EEG emotion recognition
demonstrate that the proposed DEPL is superior to traditional machine
learning (ML) methods and can learn between-electrode dependencies w.r.t.
different emotions, which is meaningful for developing effective
human-computer interaction systems that adapt to human emotions in real-world
applications.
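Two ingredients of the evaluation are concrete enough to sketch: entropy features computed per electrode over sliding windows, and the leave-one-subject-out protocol. Below is a minimal, hypothetical Python sketch (not the authors' code): differential entropy stands in for the dynamic entropy features, a logistic-regression classifier stands in for the DEPL network, and random arrays stand in for DEAP/MAHNOB-HCI recordings.

```python
# Minimal, hypothetical sketch: windowed entropy features plus the
# leave-one-subject-out paradigm. Names, shapes, and the classifier are
# illustrative stand-ins, not the authors' implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut

def differential_entropy(window):
    """Differential entropy per channel of a (channels, samples) window,
    assuming each channel is approximately Gaussian."""
    var = window.var(axis=-1) + 1e-12
    return 0.5 * np.log(2 * np.pi * np.e * var)

def entropy_features(eeg, win=128, hop=64):
    """Slide a window over (channels, time) EEG and stack the
    per-channel entropy values into one feature vector."""
    feats = [differential_entropy(eeg[:, s:s + win])
             for s in range(0, eeg.shape[1] - win + 1, hop)]
    return np.concatenate(feats)

# Toy stand-ins for real DEAP / MAHNOB-HCI trials.
rng = np.random.default_rng(0)
trials = [rng.standard_normal((32, 512)) for _ in range(40)]
X = np.stack([entropy_features(t) for t in trials])
y = rng.integers(0, 2, size=40)          # e.g. binary valence labels
subjects = np.repeat(np.arange(8), 5)    # 8 subjects, 5 trials each

# Leave-one-subject-out: every fold holds out all trials of one subject.
for tr, te in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])
    print("held-out subject accuracy:", clf.score(X[te], y[te]))
```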
Related papers
- Multi-modal Mood Reader: Pre-trained Model Empowers Cross-Subject Emotion Recognition [23.505616142198487]
We develop Mood Reader, a multimodal framework built on a pre-trained model for cross-subject emotion recognition.
The model learns universal latent representations of EEG signals through pre-training on a large-scale dataset.
Extensive experiments on public datasets demonstrate Mood Reader's superior performance in cross-subject emotion recognition tasks.
arXiv Detail & Related papers (2024-05-28T14:31:11Z)
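The pre-training stage described above can be illustrated with a small self-supervised sketch. The masked-reconstruction objective below is an assumption made for illustration (the actual Mood Reader objective is not specified here); all shapes and module names are hypothetical.

```python
# Hypothetical self-supervised pre-training sketch (masked reconstruction);
# the real Mood Reader objective and architecture may differ.
import torch
import torch.nn as nn

class EEGEncoder(nn.Module):
    def __init__(self, channels=32, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, dim, kernel_size=7, padding=3),
            nn.GELU(),
            nn.Conv1d(dim, dim, kernel_size=7, padding=3),
        )

    def forward(self, x):        # x: (batch, channels, time)
        return self.net(x)       # latent: (batch, dim, time)

encoder = EEGEncoder()
decoder = nn.Conv1d(64, 32, kernel_size=1)   # light reconstruction head
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

x = torch.randn(16, 32, 256)                     # unlabeled EEG batch
mask = (torch.rand(16, 1, 256) > 0.5).float()    # random time-step mask
recon = decoder(encoder(x * mask))
loss = ((recon - x) ** 2 * (1 - mask)).mean()    # score masked positions only
opt.zero_grad(); loss.backward(); opt.step()
```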
- Joint Contrastive Learning with Feature Alignment for Cross-Corpus EEG-based Emotion Recognition [2.1645626994550664]
We propose a novel Joint Contrastive learning framework with Feature Alignment (JCFA) to address cross-corpus EEG-based emotion recognition.
In the pre-training stage, a joint domain contrastive learning strategy is introduced to characterize generalizable time-frequency representations of EEG signals.
In the fine-tuning stage, JCFA is refined in conjunction with downstream tasks, where the structural connections among brain electrodes are considered.
arXiv Detail & Related papers (2024-04-15T08:21:17Z)
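Contrastive pre-training of this kind typically optimizes a loss of the NT-Xent family over two augmented views of the same trial. A hedged sketch of such a loss follows (not necessarily JCFA's exact formulation); z1 and z2 are assumed to be paired view embeddings.

```python
# Hypothetical NT-Xent-style contrastive loss over two augmented views.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.1):
    """z1, z2: (batch, dim) embeddings of two views of the same trials."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / tau                 # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))     # exclude self-similarity
    n = z1.shape[0]
    # The positive for view i is the other view of the same trial.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

loss = nt_xent(torch.randn(8, 64), torch.randn(8, 64))
```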
- A Knowledge-Driven Cross-view Contrastive Learning for EEG Representation [48.85731427874065]
This paper proposes a knowledge-driven cross-view contrastive learning framework (KDC2) to extract effective representations from EEG with limited labels.
The KDC2 method creates scalp and neural views of EEG signals, simulating the internal and external representation of brain activity.
By modeling prior neural knowledge based on neural information consistency theory, the proposed method extracts invariant and complementary neural knowledge to generate combined representations.
arXiv Detail & Related papers (2023-09-21T08:53:51Z)
- Emotion recognition based on multi-modal electrophysiology multi-head attention Contrastive Learning [3.2536246345549538]
We propose ME-MHACL, a self-supervised contrastive learning-based multimodal emotion recognition method.
We apply the trained feature extractor to labeled electrophysiological signals and use multi-head attention mechanisms for feature fusion.
Our method outperformed existing benchmark methods on emotion recognition tasks and showed good cross-individual generalization ability.
arXiv Detail & Related papers (2023-07-12T05:55:40Z)
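The multi-head attention fusion step can be sketched directly with PyTorch's stock attention module; the modality names, feature dimensions, and pooling below are illustrative assumptions rather than the ME-MHACL architecture.

```python
# Hypothetical multi-head attention fusion over per-modality feature tokens.
import torch
import torch.nn as nn

dim = 64
attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)
head = nn.Linear(dim, 4)            # e.g. four emotion classes

eeg_feat = torch.randn(8, 1, dim)   # output of a pretrained EEG extractor
eog_feat = torch.randn(8, 1, dim)   # a second electrophysiological modality
tokens = torch.cat([eeg_feat, eog_feat], dim=1)   # (batch, modalities, dim)

fused, _ = attn(tokens, tokens, tokens)   # attention across modality tokens
logits = head(fused.mean(dim=1))          # pooled, fused prediction
```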
- A Hierarchical Regression Chain Framework for Affective Vocal Burst Recognition [72.36055502078193]
We propose a hierarchical framework, based on chain regression models, for affective recognition from vocal bursts.
To address the challenge of data sparsity, we also use self-supervised learning (SSL) representations with layer-wise and temporal aggregation modules.
The proposed systems participated in the ACII Affective Vocal Burst (A-VB) Challenge 2022 and ranked first in the "TWO" and "CULTURE" tasks.
arXiv Detail & Related papers (2023-03-14T16:08:45Z)
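Chain regression itself is easy to demonstrate: each regressor in the chain also receives the predictions of its predecessors. The sketch below uses scikit-learn's RegressorChain with ridge regressors as a stand-in for the paper's hierarchical system, and random arrays as stand-ins for the SSL embeddings.

```python
# Hypothetical chain-regression sketch for multiple affective targets;
# this mirrors the chained idea, not the A-VB system itself.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.multioutput import RegressorChain

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 32))   # stand-in SSL embeddings of vocal bursts
Y = rng.random((100, 3))             # e.g. three emotion intensities in [0, 1]

# Each target's regressor also sees the predictions for earlier targets.
chain = RegressorChain(Ridge(), order=[0, 1, 2]).fit(X, Y)
pred = chain.predict(X)              # (100, 3) chained predictions
```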
- fMRI from EEG is only Deep Learning away: the use of interpretable DL to unravel EEG-fMRI relationships [68.8204255655161]
We present an interpretable domain grounded solution to recover the activity of several subcortical regions from multichannel EEG data.
We recover individual spatial and time-frequency patterns of scalp EEG predictive of the hemodynamic signal in the subcortical nuclei.
arXiv Detail & Related papers (2022-10-23T15:11:37Z)
- Locally temporal-spatial pattern learning with graph attention mechanism for EEG-based emotion recognition [4.331986787747648]
Emotion recognition techniques enable computers to classify human affective states into discrete categories.
However, emotions may fluctuate rather than remain stable, even within a short time interval.
It is also difficult to make full use of the spatial distribution of EEG electrodes because of their 3-D topological structure.
arXiv Detail & Related papers (2022-08-19T12:15:10Z)
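A toy graph-attention layer over an electrode adjacency graph gives one hedged reading of the titular mechanism; the paper's actual architecture may differ, and the adjacency and dimensions below are invented.

```python
# Toy single-head graph-attention layer over EEG electrodes (illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ElectrodeGraphAttention(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.att = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, h, adj):
        # h: (nodes, in_dim) per-electrode features
        # adj: (nodes, nodes) binary adjacency from the 3-D electrode layout
        z = self.proj(h)                               # (N, out_dim)
        n = z.shape[0]
        pairs = torch.cat([z.unsqueeze(1).expand(n, n, -1),
                           z.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.att(pairs).squeeze(-1))  # (N, N) raw scores
        e = e.masked_fill(adj == 0, float("-inf"))     # neighbours only
        return torch.softmax(e, dim=-1) @ z            # aggregated features

layer = ElectrodeGraphAttention(5, 8)
h = torch.randn(32, 5)                     # 32 electrodes, 5 band features
adj = (torch.rand(32, 32) > 0.7).float()   # toy adjacency
adj.fill_diagonal_(1)                      # self-loops avoid empty rows
out = layer(h, adj)                        # (32, 8)
```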
- Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT-based models [53.31917090073727]
We propose a neural network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from speech and text modalities.
We evaluate the effectiveness of our proposed multimodal approach on the interactive emotional dyadic motion capture (IEMOCAP) dataset.
arXiv Detail & Related papers (2022-02-16T00:23:42Z)
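Late fusion of the two fine-tuned models can be sketched in a few lines; the weighting scheme below is an illustrative assumption, not necessarily the paper's fusion rule.

```python
# Hypothetical late-fusion sketch: combine class probabilities from a
# speech model and a text model (stand-ins for the fine-tuned
# speaker-recognition and BERT-based models).
import torch

def late_fuse(speech_logits, text_logits, w=0.5):
    """Weighted average of per-modality class probabilities."""
    p_speech = torch.softmax(speech_logits, dim=-1)
    p_text = torch.softmax(text_logits, dim=-1)
    return w * p_speech + (1 - w) * p_text

fused = late_fuse(torch.randn(8, 4), torch.randn(8, 4))
pred = fused.argmax(dim=-1)   # predicted emotion class per utterance
```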
- Attentive Cross-modal Connections for Deep Multimodal Wearable-based Emotion Recognition [7.559720049837459]
We present a novel attentive cross-modal connection to share information between convolutional neural networks.
Specifically, these connections improve emotion classification by sharing intermediate representations between EDA (electrodermal activity) and ECG (electrocardiogram) branches.
Our experiments show that the proposed approach is capable of learning strong multimodal representations and outperforms a number of baseline methods.
arXiv Detail & Related papers (2021-08-04T18:40:32Z)
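One plausible form of an attentive cross-modal connection is a sigmoid gate computed from one modality's intermediate features and applied to the other's; the sketch below is an assumption for illustration, not the paper's exact wiring.

```python
# Hypothetical attentive cross-modal connection: ECG intermediate features
# are re-weighted by attention computed from EDA features.
import torch
import torch.nn as nn

class CrossModalGate(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv1d(channels, channels, 1),
                                  nn.Sigmoid())

    def forward(self, target, source):
        # Modulate the target modality with attention from the source.
        return target * self.gate(source)

eda = torch.randn(8, 16, 128)   # intermediate EDA feature maps
ecg = torch.randn(8, 16, 128)   # intermediate ECG feature maps
gate = CrossModalGate(16)
ecg_enriched = gate(ecg, eda)   # ECG features gated by EDA attention
```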
- A Novel Transferability Attention Neural Network Model for EEG Emotion Recognition [51.203579838210885]
We propose a transferable attention neural network (TANN) for EEG emotion recognition.
TANN learns emotionally discriminative information by adaptively highlighting transferable EEG brain-region data and samples.
This is implemented by measuring the outputs of multiple brain-region-level discriminators and a single sample-level discriminator.
arXiv Detail & Related papers (2020-09-21T02:42:30Z)
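A hedged reading of the discriminator-based attention: regions whose domain discriminator is maximally confused (output near 0.5) are the most transferable and receive the largest weights. The sketch below encodes this with a binary-entropy score; all tensors and shapes are hypothetical.

```python
# Hypothetical discriminator-driven region attention, loosely following
# the transferable-attention idea (not the TANN implementation).
import torch

region_feats = torch.randn(8, 16, 32)   # (batch, regions, dim)
d = torch.rand(8, 16)                   # region discriminators' P(source)

# Binary-entropy transferability score: maximal when d is near 0.5,
# i.e. when the discriminator cannot tell source from target.
eps = 1e-8
transferability = -(d * (d + eps).log() + (1 - d) * (1 - d + eps).log())
attn = torch.softmax(transferability, dim=1)           # (batch, regions)
weighted = (attn.unsqueeze(-1) * region_feats).sum(1)  # (batch, dim)
```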
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that a machine be able to recognize the user's emotional state with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
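The two-stage pipeline (learned low-dimensional codes, then a support vector regressor) can be sketched with scikit-learn; here PCA stands in for the deep convolutional autoencoder, and random arrays stand in for facial-expression features.

```python
# Hypothetical two-stage sketch: compress facial descriptors to a
# bottleneck code, then regress a continuous emotion value with an SVR.
import numpy as np
from sklearn.decomposition import PCA   # stand-in for the learned encoder
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 128))     # stand-in facial-expression features
y = rng.uniform(-1, 1, size=200)        # continuous valence annotations

codes = PCA(n_components=16).fit_transform(X)   # bottleneck representation
svr = SVR(kernel="rbf").fit(codes, y)
print("fit quality on training data:", svr.score(codes, y))
```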
This list is automatically generated from the titles and abstracts of the papers in this site.