Investigating EEG-Based Functional Connectivity Patterns for Multimodal
Emotion Recognition
- URL: http://arxiv.org/abs/2004.01973v1
- Date: Sat, 4 Apr 2020 16:51:56 GMT
- Title: Investigating EEG-Based Functional Connectivity Patterns for Multimodal
Emotion Recognition
- Authors: Xun Wu, Wei-Long Zheng, and Bao-Liang Lu
- Abstract summary: We investigate three functional connectivity network features: strength, clustering coefficient, and eigenvector centrality.
The discrimination ability of the EEG connectivity features in emotion recognition is evaluated on three public EEG datasets.
We construct a multimodal emotion recognition model by combining the functional connectivity features from EEG and the features from eye movements or physiological signals.
- Score: 8.356765961526955
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Compared with the rich studies on the motor brain-computer interface (BCI),
the recently emerging affective BCI presents distinct challenges since the
brain functional connectivity networks involving emotion are not well
investigated. Previous studies on emotion recognition based on
electroencephalography (EEG) signals mainly rely on single-channel-based
feature extraction methods. In this paper, we propose a novel emotion-relevant
critical subnetwork selection algorithm and investigate three EEG functional
connectivity network features: strength, clustering coefficient, and
eigenvector centrality. The discrimination ability of the EEG connectivity
features in emotion recognition is evaluated on three public emotion EEG
datasets: SEED, SEED-V, and DEAP. The strength feature achieves the best
classification performance and outperforms the state-of-the-art differential
entropy feature based on single-channel analysis. The experimental results
reveal that distinct functional connectivity patterns are exhibited for the
five emotions of disgust, fear, sadness, happiness, and neutrality.
Furthermore, we construct a multimodal emotion recognition model by combining
the functional connectivity features from EEG and the features from eye
movements or physiological signals using deep canonical correlation analysis.
The classification accuracies (mean/standard deviation) of multimodal emotion
recognition are 95.08/6.42% on the SEED dataset, 84.51/5.11% on the SEED-V
dataset, and 85.34/2.90% and 86.61/3.76% for arousal and valence on the DEAP
dataset, respectively. The
results demonstrate the complementary representation properties of the EEG
connectivity features with eye movement data. In addition, we find that the
brain networks constructed with 18 channels achieve comparable performance with
that of the 62-channel network in multimodal emotion recognition and enable
easier setups for BCI systems in real scenarios.
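The three connectivity features are standard per-node graph metrics computed on the EEG connectivity network. Below is a minimal sketch (not the authors' code) that computes them with NumPy from a weighted, undirected channel-by-channel connectivity matrix; the absolute-correlation estimator used to build the toy matrix is an assumption for illustration, since the paper derives its networks from emotion-relevant critical subnetworks.

```python
# A minimal sketch of the three connectivity features named in the
# abstract: strength, clustering coefficient, and eigenvector centrality.
# W is a weighted, undirected channels-x-channels matrix with zero diagonal.
import numpy as np

def strength(W: np.ndarray) -> np.ndarray:
    """Node strength: sum of each node's connection weights."""
    return W.sum(axis=1)

def clustering_coefficient(W: np.ndarray) -> np.ndarray:
    """Weighted clustering coefficient (Onnela et al. formulation)."""
    K = (W > 0).sum(axis=1)                # node degrees
    cbrt = np.cbrt(W / W.max())            # weights scaled to [0, 1]
    triangles = np.diagonal(cbrt @ cbrt @ cbrt)
    with np.errstate(divide="ignore", invalid="ignore"):
        C = triangles / (K * (K - 1))
    return np.nan_to_num(C)                # isolated nodes get C = 0

def eigenvector_centrality(W: np.ndarray) -> np.ndarray:
    """Centrality from the leading eigenvector of the symmetric matrix W."""
    vals, vecs = np.linalg.eigh(W)
    v = np.abs(vecs[:, np.argmax(vals)])
    return v / v.sum()

# Toy example: 62 channels, as in the full SEED montage.
rng = np.random.default_rng(0)
X = rng.standard_normal((62, 1000))        # channels x time samples (toy data)
W = np.abs(np.corrcoef(X))                 # absolute correlation as connectivity
np.fill_diagonal(W, 0.0)                   # remove self-connections
features = np.concatenate(
    [strength(W), clustering_coefficient(W), eigenvector_centrality(W)]
)                                          # 3 x 62 = 186-dimensional vector
```

For the fusion step, the paper combines the EEG connectivity features with eye movement or peripheral physiological features using deep canonical correlation analysis (DCCA). The sketch below substitutes plain linear CCA from scikit-learn for the deep networks; the feature dimensions, five-class labels, concatenation-based fusion, and SVM classifier are illustrative assumptions, not the authors' exact pipeline.

```python
# A hedged sketch of CCA-style multimodal fusion; linear CCA stands in
# for the deep CCA used in the paper.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 200                                     # toy number of trials
eeg = rng.standard_normal((n, 186))         # 3 features x 62 channels, as above
eye = rng.standard_normal((n, 33))          # toy eye-movement feature vector
labels = rng.integers(0, 5, size=n)         # five emotion classes (as in SEED-V)

cca = CCA(n_components=20)
eeg_c, eye_c = cca.fit_transform(eeg, eye)      # project into correlated space
fused = np.concatenate([eeg_c, eye_c], axis=1)  # fuse by concatenation

clf = SVC(kernel="linear").fit(fused, labels)
print(clf.score(fused, labels))
```

In DCCA proper, two deep networks are trained to maximize the correlation between their output representations before fusion; the linear CCA here only illustrates the project-then-fuse interface.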
Related papers
- Decoding Human Emotions: Analyzing Multi-Channel EEG Data using LSTM Networks [0.0]
This study aims to understand and improve the predictive accuracy of emotional state classification by applying a Long Short-Term Memory (LSTM) network to analyze EEG signals.
Using DEAP, a popular dataset of multi-channel EEG recordings, we leverage the ability of LSTM networks to handle temporal dependencies within EEG signal data.
We obtain accuracies of 89.89%, 90.33%, 90.70%, and 90.54% for arousal, valence, dominance, and likeness, respectively, demonstrating significant improvements in emotion recognition model capabilities.
arXiv Detail & Related papers (2024-08-19T18:10:47Z) - Multi-modal Mood Reader: Pre-trained Model Empowers Cross-Subject Emotion Recognition [23.505616142198487]
We develop the Multimodal Mood Reader, a pre-trained-model-based approach for cross-subject emotion recognition.
The model learns universal latent representations of EEG signals through pre-training on a large-scale dataset.
Extensive experiments on public datasets demonstrate Mood Reader's superior performance in cross-subject emotion recognition tasks.
arXiv Detail & Related papers (2024-05-28T14:31:11Z) - DGSD: Dynamical Graph Self-Distillation for EEG-Based Auditory Spatial
Attention Detection [49.196182908826565]
Auditory Attention Detection (AAD) aims to detect the target speaker from brain signals in a multi-speaker environment.
Current approaches primarily rely on traditional convolutional neural networks designed for processing Euclidean data such as images.
This paper proposes a dynamical graph self-distillation (DGSD) approach for AAD, which does not require speech stimuli as input.
arXiv Detail & Related papers (2023-09-07T13:43:46Z) - Multimodal Emotion Recognition using Transfer Learning from Speaker
Recognition and BERT-based models [53.31917090073727]
We propose a neural network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from speech and text modalities.
We evaluate the effectiveness of our proposed multimodal approach on the interactive emotional dyadic motion capture dataset.
arXiv Detail & Related papers (2022-02-16T00:23:42Z) - Progressive Graph Convolution Network for EEG Emotion Recognition [35.08010382523394]
Studies in the area of neuroscience have revealed the relationship between emotional patterns and brain functional regions.
In EEG emotion recognition, we can observe that clearer boundaries exist between coarse-grained emotions than those between fine-grained emotions.
We propose a progressive graph convolution network (PGCN) for capturing this inherent characteristic in EEG emotional signals.
arXiv Detail & Related papers (2021-12-14T03:30:13Z) - EEGminer: Discovering Interpretable Features of Brain Activity with
Learnable Filters [72.19032452642728]
We propose a novel differentiable EEG decoding pipeline consisting of learnable filters and a pre-determined feature extraction module.
We demonstrate the utility of our model towards emotion recognition from EEG signals on the SEED dataset and on a new EEG dataset of unprecedented size.
The discovered features align with previous neuroscience studies and offer new insights, such as marked differences in the functional connectivity profile between left and right temporal areas during music listening.
arXiv Detail & Related papers (2021-10-19T14:22:04Z) - Contrastive Learning of Subject-Invariant EEG Representations for
Cross-Subject Emotion Recognition [9.07006689672858]
We propose a contrastive learning method for Inter-Subject Alignment (ISA) to enable reliable cross-subject emotion recognition.
ISA maximizes the similarity of EEG signals across subjects who received the same stimuli, in contrast to different ones.
A convolutional neural network with depthwise spatial convolution and temporal convolution layers was applied to learn inter-subject representations from raw EEG signals.
arXiv Detail & Related papers (2021-09-20T14:13:45Z) - SFE-Net: EEG-based Emotion Recognition with Symmetrical Spatial Feature
Extraction [1.8047694351309205]
We present a spatial folding ensemble network (SFENet) for EEG feature extraction and emotion recognition.
Motivated by the spatial symmetry mechanism of the human brain, we fold the input EEG channel data with five different symmetrical strategies.
With this network, the spatial features of the different symmetrically folded signals can be extracted simultaneously, which greatly improves the robustness and accuracy of feature recognition.
arXiv Detail & Related papers (2021-04-09T12:59:38Z) - Emotional EEG Classification using Connectivity Features and
Convolutional Neural Networks [81.74442855155843]
We introduce a new classification system that utilizes brain connectivity with a CNN and validate its effectiveness via emotional video classification.
The level of concentration of the brain connectivity related to the emotional property of the target video is correlated with classification performance.
arXiv Detail & Related papers (2021-01-18T13:28:08Z) - A Novel Transferability Attention Neural Network Model for EEG Emotion
Recognition [51.203579838210885]
We propose a transferable attention neural network (TANN) for EEG emotion recognition.
TANN learns emotional discriminative information by adaptively highlighting transferable EEG brain-region data and samples.
This can be implemented by measuring the outputs of multiple brain-region-level discriminators and one single sample-level discriminator.
arXiv Detail & Related papers (2020-09-21T02:42:30Z) - Continuous Emotion Recognition via Deep Convolutional Autoencoder and
Support Vector Regressor [70.2226417364135]
It is crucial that the machine be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)