EEGFuseNet: Hybrid Unsupervised Deep Feature Characterization and Fusion for High-Dimensional EEG with An Application to Emotion Recognition
- URL: http://arxiv.org/abs/2102.03777v1
- Date: Sun, 7 Feb 2021 11:09:16 GMT
- Title: EEGFuseNet: Hybrid Unsupervised Deep Feature Characterization and Fusion for High-Dimensional EEG with An Application to Emotion Recognition
- Authors: Zhen Liang, Rushuang Zhou, Li Zhang, Linling Li, Gan Huang, Zhiguo Zhang and Shin Ishii
- Abstract summary: We propose a hybrid unsupervised deep CNN-RNN-GAN based EEG feature characterization and fusion model, termed EEGFuseNet.
EEGFuseNet is trained in an unsupervised manner, and deep EEG features covering spatial and temporal dynamics are automatically characterized.
The performance of the extracted deep, low-dimensional features is carefully evaluated in an unsupervised emotion recognition application on a widely used public emotion database.
- Score: 10.234189745183466
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: How to effectively and efficiently extract valid and reliable features from
high-dimensional electroencephalography (EEG), and in particular how to fuse the
spatial and temporal dynamic brain information into a better feature
representation, is a critical issue in brain data analysis. Most current EEG
studies rely on handcrafted features and supervised modeling, which are limited
to a great extent by prior experience and human feedback. In this paper, we
propose a practical hybrid unsupervised deep CNN-RNN-GAN based EEG feature
characterization and fusion model, termed EEGFuseNet. EEGFuseNet is trained in
an unsupervised manner, and deep EEG features covering spatial and temporal
dynamics are automatically characterized. Compared to handcrafted features, the
deep EEG features can be considered more generic and independent of any
specific EEG task. The performance of the deep, low-dimensional features
extracted by EEGFuseNet is carefully evaluated in an unsupervised emotion
recognition application on a widely used public emotion database. The results
demonstrate that the proposed EEGFuseNet is a robust and reliable model that is
easy to train and manage and performs efficiently in the representation and
fusion of dynamic EEG features. In particular, EEGFuseNet is established as an
optimal unsupervised fusion model with promising subject-based leave-one-out
results in the recognition of four emotion dimensions (valence, arousal,
dominance and liking), which demonstrates the possibility of realizing
EEG-based cross-subject emotion recognition in a purely unsupervised manner.
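The paper's exact network specification is not reproduced here; as a rough illustration of the hybrid CNN-RNN-GAN idea, below is a minimal PyTorch sketch in which a CNN encodes spatial structure across electrodes, a bidirectional GRU fuses temporal dynamics into a low-dimensional feature, a GRU decoder reconstructs the signal, and a discriminator supplies the adversarial training signal. All module names, layer sizes, and the (batch, 32 channels, 256 samples) input shape are illustrative assumptions, not the paper's settings.

    # Illustrative sketch only: a CNN-RNN autoencoder with a GAN discriminator
    # for unsupervised EEG feature fusion (not the paper's exact architecture).
    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        def __init__(self, n_channels=32, hidden=64, latent=128):
            super().__init__()
            # CNN characterizes spatial structure across electrodes.
            self.cnn = nn.Sequential(
                nn.Conv1d(n_channels, hidden, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(hidden, hidden, kernel_size=5, padding=2), nn.ReLU())
            # Bidirectional GRU fuses temporal dynamics of the CNN features.
            self.rnn = nn.GRU(hidden, latent // 2, batch_first=True,
                              bidirectional=True)

        def forward(self, x):                        # x: (batch, channels, time)
            h = self.cnn(x).transpose(1, 2)          # (batch, time, hidden)
            _, h_n = self.rnn(h)                     # (2, batch, latent // 2)
            return torch.cat([h_n[0], h_n[1]], 1)    # fused feature: (batch, latent)

    class Decoder(nn.Module):
        def __init__(self, n_channels=32, latent=128, time_len=256):
            super().__init__()
            self.time_len = time_len
            self.rnn = nn.GRU(latent, latent, batch_first=True)
            self.out = nn.Linear(latent, n_channels)

        def forward(self, z):
            seq = z.unsqueeze(1).repeat(1, self.time_len, 1)  # repeat over time
            h, _ = self.rnn(seq)
            return self.out(h).transpose(1, 2)       # (batch, channels, time)

    class Discriminator(nn.Module):
        """Scores real EEG vs. reconstructions, giving the adversarial signal
        that pushes the autoencoder toward information-rich features."""
        def __init__(self, n_channels=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(n_channels, 32, 5, stride=2), nn.LeakyReLU(0.2),
                nn.Conv1d(32, 64, 5, stride=2), nn.LeakyReLU(0.2),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, 1))

        def forward(self, x):
            return self.net(x)

    x = torch.randn(8, 32, 256)                      # a batch of EEG windows
    z = Encoder()(x)                                 # (8, 128) fused features
    x_hat = Decoder()(z)                             # (8, 32, 256) reconstruction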
Related papers
- Emotion-Agent: Unsupervised Deep Reinforcement Learning with Distribution-Prototype Reward for Continuous Emotional EEG Analysis [2.1645626994550664]
Continuous electroencephalography (EEG) signals are widely used in affective brain-computer interface (aBCI) applications.
We propose a novel unsupervised deep reinforcement learning framework, called Emotion-Agent, to automatically identify relevant and informative emotional moments from EEG signals.
Emotion-Agent is trained using Proximal Policy Optimization (PPO) to achieve stable and efficient convergence.
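As a reminder of what PPO optimizes, here is a generic sketch of its clipped surrogate objective; the epsilon value and tensor shapes are standard defaults, not details taken from the paper.

    # Generic PPO clipped surrogate loss (illustrative, not the paper's code).
    import torch

    def ppo_clip_loss(log_probs_new, log_probs_old, advantages, eps=0.2):
        """Penalizes updates that move the policy too far from the behavior
        policy, which is what gives PPO its stable convergence."""
        ratio = torch.exp(log_probs_new - log_probs_old)   # pi_new / pi_old
        unclipped = ratio * advantages
        clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
        return -torch.min(unclipped, clipped).mean()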
arXiv Detail & Related papers (2024-08-22T04:29:25Z)
- Joint Contrastive Learning with Feature Alignment for Cross-Corpus EEG-based Emotion Recognition [2.1645626994550664]
We propose a novel Joint Contrastive learning framework with Feature Alignment (JCFA) to address cross-corpus EEG-based emotion recognition.
In the pre-training stage, a joint domain contrastive learning strategy is introduced to characterize generalizable time-frequency representations of EEG signals.
In the fine-tuning stage, JCFA is refined in conjunction with downstream tasks, where the structural connections among brain electrodes are considered.
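The summary does not spell out the loss; a common instantiation of such contrastive pre-training is the NT-Xent objective sketched below, where two views of the same EEG segment (e.g. from augmentations or paired corpora) form a positive pair. This is an assumption for illustration, not the paper's exact formulation.

    # NT-Xent-style contrastive loss (illustrative stand-in for the paper's loss).
    import torch
    import torch.nn.functional as F

    def nt_xent(z1, z2, temperature=0.5):
        """z1, z2: (batch, dim) embeddings of two views of the same segments.
        Matching rows are positives; every other pair acts as a negative."""
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2B, dim)
        sim = z @ z.t() / temperature                       # cosine similarities
        sim.fill_diagonal_(float('-inf'))                   # exclude self-pairs
        b = z1.size(0)
        targets = torch.cat([torch.arange(b) + b, torch.arange(b)])
        return F.cross_entropy(sim, targets)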
arXiv Detail & Related papers (2024-04-15T08:21:17Z)
- CSLP-AE: A Contrastive Split-Latent Permutation Autoencoder Framework for Zero-Shot Electroencephalography Signal Conversion [49.1574468325115]
A key aim in EEG analysis is to extract the underlying neural activation (content) while accounting for individual subject variability (style).
Inspired by recent advancements in voice conversion technologies, we propose a novel contrastive split-latent permutation autoencoder (CSLP-AE) framework that directly optimizes for EEG conversion.
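The core split-latent permutation idea can be sketched as follows: the latent code is split into content and style parts, and the style codes are shuffled across the batch before decoding, forcing the designated halves to carry the intended factors. Shapes and the content/style split point are assumptions; the encoder, decoder, and contrastive terms are omitted.

    # Illustrative split-latent permutation step (encoder/decoder omitted).
    import torch

    def permute_styles(z, content_dim):
        """z: (batch, latent). Keep each item's content code but pair it with
        another item's style code, as in latent-permutation training."""
        content, style = z[:, :content_dim], z[:, content_dim:]
        perm = torch.randperm(z.size(0))
        return torch.cat([content, style[perm]], dim=1)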
arXiv Detail & Related papers (2023-11-13T22:46:43Z)
- DGSD: Dynamical Graph Self-Distillation for EEG-Based Auditory Spatial Attention Detection [49.196182908826565]
Auditory Attention Detection (AAD) aims to detect the target speaker from brain signals in a multi-speaker environment.
Current approaches primarily rely on traditional convolutional neural networks designed for processing Euclidean data such as images.
This paper proposes a dynamical graph self-distillation (DGSD) approach for AAD, which does not require speech stimuli as input.
arXiv Detail & Related papers (2023-09-07T13:43:46Z)
- Inter Subject Emotion Recognition Using Spatio-Temporal Features From EEG Signal [4.316570025748204]
This work presents an easy-to-implement emotion recognition model that classifies emotions from EEG signals subject-independently.
The model combines regular, depthwise and separable CNN convolution layers to classify the emotions.
The model achieved an accuracy of 73.04%.
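For reference, the depthwise + pointwise (separable) convolution pattern mentioned above can be sketched as below; the channel counts and kernel size are illustrative, not the paper's settings.

    # Depthwise-separable convolution block (illustrative sizes).
    import torch.nn as nn

    class SeparableConv2d(nn.Module):
        """A depthwise conv (one filter per input channel) followed by a 1x1
        pointwise conv that mixes channels: far fewer parameters than a full
        convolution with the same receptive field."""
        def __init__(self, in_ch, out_ch, kernel_size):
            super().__init__()
            self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                       padding='same', groups=in_ch)
            self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

        def forward(self, x):
            return self.pointwise(self.depthwise(x))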
arXiv Detail & Related papers (2023-05-27T07:43:19Z)
- fMRI from EEG is only Deep Learning away: the use of interpretable DL to unravel EEG-fMRI relationships [68.8204255655161]
We present an interpretable, domain-grounded solution to recover the activity of several subcortical regions from multichannel EEG data.
We recover individual spatial and time-frequency patterns of scalp EEG predictive of the hemodynamic signal in the subcortical nuclei.
arXiv Detail & Related papers (2022-10-23T15:11:37Z)
- EEG2Vec: Learning Affective EEG Representations via Variational Autoencoders [27.3162026528455]
We explore whether representing neural data, recorded in response to emotional stimuli, in a latent vector space can serve to predict emotional states.
We propose a conditional variational autoencoder based framework, EEG2Vec, to learn generative-discriminative representations from EEG data.
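A minimal sketch of the (conditional) VAE machinery such a framework builds on is below: the ELBO loss combining a reconstruction term with a KL regularizer, plus the reparameterization trick. The concrete encoder/decoder and the conditioning scheme are left out and would follow the paper.

    # Core (conditional) VAE pieces: ELBO loss and reparameterization trick.
    import torch
    import torch.nn.functional as F

    def vae_loss(x, x_recon, mu, logvar):
        """Reconstruction error plus KL divergence to a standard normal."""
        recon = F.mse_loss(x_recon, x, reduction='sum')
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl

    def reparameterize(mu, logvar):
        """Sample z = mu + sigma * eps so gradients flow through mu, logvar."""
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)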
arXiv Detail & Related papers (2022-07-16T19:25:29Z)
- Task-oriented Self-supervised Learning for Anomaly Detection in Electroencephalography [51.45515911920534]
A task-oriented self-supervised learning approach is proposed to train a more effective anomaly detector.
A specific two-branch convolutional neural network with larger kernels is designed as the feature extractor.
The effectively designed and trained feature extractor has been shown to extract better feature representations from EEGs.
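As a hedged sketch of what a two-branch extractor with larger temporal kernels might look like (branch widths and kernel lengths are guesses, not the paper's values):

    # Two parallel conv branches with large temporal kernels (illustrative).
    import torch
    import torch.nn as nn

    class TwoBranchExtractor(nn.Module):
        def __init__(self, n_channels=19, feat=64):
            super().__init__()
            self.branch_a = nn.Sequential(
                nn.Conv1d(n_channels, feat, kernel_size=65, padding=32), nn.ReLU())
            self.branch_b = nn.Sequential(
                nn.Conv1d(n_channels, feat, kernel_size=129, padding=64), nn.ReLU())
            self.pool = nn.AdaptiveAvgPool1d(1)

        def forward(self, x):                        # x: (batch, channels, time)
            a = self.pool(self.branch_a(x)).flatten(1)
            b = self.pool(self.branch_b(x)).flatten(1)
            return torch.cat([a, b], dim=1)          # (batch, 2 * feat)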
arXiv Detail & Related papers (2022-07-04T13:15:08Z)
- Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT-based models [53.31917090073727]
We propose a neural network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from speech and text modalities.
We evaluate the effectiveness of our proposed multimodal approach on the interactive emotional dyadic motion capture (IEMOCAP) dataset.
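Decision-level (late) fusion of the two modalities can be sketched as below; the equal weighting is an illustrative choice rather than the paper's tuned setting.

    # Late fusion of per-modality class probabilities (illustrative weights).
    import torch

    def late_fusion(speech_logits, text_logits, w_speech=0.5):
        p_speech = torch.softmax(speech_logits, dim=-1)
        p_text = torch.softmax(text_logits, dim=-1)
        fused = w_speech * p_speech + (1.0 - w_speech) * p_text
        return fused.argmax(dim=-1)                  # predicted emotion class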
arXiv Detail & Related papers (2022-02-16T00:23:42Z)
- A Novel Transferability Attention Neural Network Model for EEG Emotion Recognition [51.203579838210885]
We propose a transferable attention neural network (TANN) for EEG emotion recognition.
TANN learns the emotional discriminative information by adaptively highlighting transferable EEG brain-region data and samples.
This can be implemented by measuring the outputs of multiple brain-region-level discriminators and a single sample-level discriminator.
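One plausible reading of that mechanism, sketched under assumed shapes: each region-level discriminator's output is converted into an attention weight over the corresponding region features.

    # Attention weights derived from discriminator outputs (assumed shapes).
    import torch

    def region_attention(region_feats, region_disc_scores):
        """region_feats: (batch, regions, dim); region_disc_scores:
        (batch, regions). Regions judged more transferable get larger weight."""
        weights = torch.softmax(region_disc_scores, dim=1)        # (batch, regions)
        return (weights.unsqueeze(-1) * region_feats).sum(dim=1)  # (batch, dim)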
arXiv Detail & Related papers (2020-09-21T02:42:30Z)
- Investigating EEG-Based Functional Connectivity Patterns for Multimodal Emotion Recognition [8.356765961526955]
We investigate three functional connectivity network features: strength, clustering coefficient and eigenvector centrality.
The discrimination ability of the EEG connectivity features in emotion recognition is evaluated on three public EEG datasets.
We construct a multimodal emotion recognition model by combining the functional connectivity features from EEG and the features from eye movements or physiological signals.
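The three graph features named above can be computed with networkx on a weighted electrode graph, as in this sketch (the connectivity estimator producing the input matrix is up to the user):

    # Strength, clustering coefficient, and eigenvector centrality per electrode.
    import networkx as nx
    import numpy as np

    def connectivity_features(conn):
        """conn: (n_electrodes, n_electrodes) symmetric connectivity matrix."""
        g = nx.from_numpy_array(np.asarray(conn))
        strength = dict(g.degree(weight='weight'))       # weighted node strength
        clustering = nx.clustering(g, weight='weight')   # weighted clustering coeff.
        eigen = nx.eigenvector_centrality_numpy(g, weight='weight')
        return strength, clustering, eigen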
arXiv Detail & Related papers (2020-04-04T16:51:56Z)