Data Augmentation for Enhancing EEG-based Emotion Recognition with Deep
Generative Models
- URL: http://arxiv.org/abs/2006.05331v2
- Date: Wed, 17 Jun 2020 08:09:40 GMT
- Title: Data Augmentation for Enhancing EEG-based Emotion Recognition with Deep
Generative Models
- Authors: Yun Luo and Li-Zhen Zhu and Zi-Yu Wan and Bao-Liang Lu
- Abstract summary: We propose three methods for augmenting EEG training data to enhance the performance of emotion recognition models.
For the full usage strategy, all of the generated data are appended to the training dataset without judging their quality.
The experimental results demonstrate that the augmented training datasets produced by our methods enhance the performance of EEG-based emotion recognition models.
- Score: 13.56090099952884
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The data scarcity problem in emotion recognition from electroencephalography
(EEG) leads to difficulty in building an affective model with high accuracy
using machine learning algorithms or deep neural networks. Inspired by emerging
deep generative models, we propose three methods for augmenting EEG training
data to enhance the performance of emotion recognition models. Our proposed
methods are based on two deep generative models, variational autoencoder (VAE)
and generative adversarial network (GAN), and two data augmentation strategies.
For the full usage strategy, all of the generated data are appended to the
training dataset without judging their quality, while for the partial usage
strategy, only high-quality data are selected and appended to the training
dataset. These three methods are called conditional Wasserstein GAN (cWGAN),
selective VAE (sVAE), and selective WGAN (sWGAN). To evaluate the effectiveness
of these methods, we perform a systematic experimental study on two public EEG
datasets for emotion recognition, namely, SEED and DEAP. We first generate
realistic synthetic EEG training data in two forms: power spectral density and
differential entropy. Then, we augment the original training datasets with
different amounts of the generated data. Finally, we train
support vector machines and deep neural networks with shortcut layers to build
affective models using the original and augmented training datasets. The
experimental results demonstrate that the augmented training datasets produced
by our methods enhance the performance of EEG-based emotion recognition models
and outperform the existing data augmentation methods such as conditional VAE,
Gaussian noise, and rotational data augmentation.
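To make the two augmentation strategies concrete, the sketch below illustrates how the partial usage strategy might be implemented. It is a minimal, hypothetical Python example, not the authors' code: it assumes that generated feature vectors (e.g., differential entropy features) are scored by an SVM trained on the real data and that only the most confidently classified samples are appended. The confidence-based selection rule, the keep_ratio parameter, and the differential_entropy helper are illustrative assumptions.
```python
# Minimal sketch of the full vs. partial usage augmentation strategies
# (illustrative assumptions only; not the authors' implementation).
import numpy as np
from sklearn.svm import SVC

def differential_entropy(segment):
    """DE of a band-filtered EEG segment under a Gaussian assumption:
    0.5 * log(2 * pi * e * variance)."""
    return 0.5 * np.log(2.0 * np.pi * np.e * np.var(segment))

def select_high_quality(x_real, y_real, x_gen, y_gen, keep_ratio=0.5):
    """Keep only generated samples that an SVM trained on the real data
    assigns to their intended (conditioning) label with high confidence."""
    scorer = SVC(kernel="rbf", probability=True)
    scorer.fit(x_real, y_real)

    # Probability assigned to each generated sample's intended label.
    proba = scorer.predict_proba(x_gen)
    label_index = np.searchsorted(scorer.classes_, y_gen)
    confidence = proba[np.arange(len(y_gen)), label_index]

    # Keep the most confidently classified generated samples.
    n_keep = int(len(x_gen) * keep_ratio)
    keep = np.argsort(confidence)[::-1][:n_keep]
    return x_gen[keep], y_gen[keep]

def augment_training_set(x_real, y_real, x_gen, y_gen, strategy="partial"):
    """Full usage appends all generated data; partial usage appends only
    the selected high-quality subset."""
    if strategy == "partial":
        x_gen, y_gen = select_high_quality(x_real, y_real, x_gen, y_gen)
    x_aug = np.concatenate([x_real, x_gen], axis=0)
    y_aug = np.concatenate([y_real, y_gen], axis=0)
    return x_aug, y_aug
```
In this sketch, a conditional generator (e.g., a cWGAN conditioned on emotion labels) would supply x_gen and y_gen; the full usage strategy simply skips the selection step.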
Related papers
- Enhancing EEG Signal Generation through a Hybrid Approach Integrating Reinforcement Learning and Diffusion Models [6.102274021710727]
This study introduces an innovative approach to the synthesis of Electroencephalogram (EEG) signals by integrating diffusion models with reinforcement learning.
Our methodology enhances the generation of EEG signals with detailed temporal and spectral features, enriching the authenticity and diversity of synthetic datasets.
arXiv Detail & Related papers (2024-09-14T07:22:31Z)
- DetDiffusion: Synergizing Generative and Perceptive Models for Enhanced Data Generation and Perception [78.26734070960886]
Current perceptive models heavily depend on resource-intensive datasets.
We introduce perception-aware loss (P.A. loss) through segmentation, improving both quality and controllability.
Our method customizes data augmentation by extracting and utilizing perception-aware attribute (P.A. Attr) during generation.
arXiv Detail & Related papers (2024-03-20T04:58:03Z)
- Diffusion-Based Neural Network Weights Generation [80.89706112736353]
D2NWG is a diffusion-based neural network weights generation technique that efficiently produces high-performing weights for transfer learning.
Our method extends generative hyper-representation learning to recast the latent diffusion paradigm for neural network weights generation.
Our approach is scalable to large architectures such as large language models (LLMs), overcoming the limitations of current parameter generation techniques.
arXiv Detail & Related papers (2024-02-28T08:34:23Z)
- hvEEGNet: exploiting hierarchical VAEs on EEG data for neuroscience applications [3.031375888004876]
Two main issues challenge existing DL-based modeling methods for EEG.
High variability between subjects and a low signal-to-noise ratio make it difficult to ensure good quality in the EEG data.
We propose two variational autoencoder models, namely vEEGNet-ver3 and hvEEGNet, to target the problem of high-fidelity EEG reconstruction.
arXiv Detail & Related papers (2023-11-20T15:36:31Z)
- DGSD: Dynamical Graph Self-Distillation for EEG-Based Auditory Spatial Attention Detection [49.196182908826565]
Auditory Attention Detection (AAD) aims to detect the target speaker from brain signals in a multi-speaker environment.
Current approaches primarily rely on traditional convolutional neural networks designed for processing Euclidean data such as images.
This paper proposes a dynamical graph self-distillation (DGSD) approach for AAD, which does not require speech stimuli as input.
arXiv Detail & Related papers (2023-09-07T13:43:46Z)
- Decision Forest Based EMG Signal Classification with Low Volume Dataset Augmented with Random Variance Gaussian Noise [51.76329821186873]
We produce a model that classifies six different hand gestures from a limited number of samples and generalizes well to a wider audience.
We rely on more elementary methods, such as applying random bounds to a signal, and aim to show the power these methods can carry in an online setting (a minimal sketch of this style of noise augmentation appears after this list).
arXiv Detail & Related papers (2022-06-29T23:22:18Z)
- Data augmentation for learning predictive models on EEG: a systematic comparison [79.84079335042456]
Deep learning for electroencephalography (EEG) classification tasks has been growing rapidly in recent years.
However, it has been limited by the relatively small size of EEG datasets.
Data augmentation has been a key ingredient to obtain state-of-the-art performances across applications such as computer vision or speech.
arXiv Detail & Related papers (2022-06-29T09:18:15Z)
- Towards physiology-informed data augmentation for EEG-based BCIs [24.15108821320151]
We suggest a novel technique for augmenting the training data by generating new data from the data set at hand.
In this manuscript, we explain the method and show first preliminary results for participant-independent motor-imagery classification.
arXiv Detail & Related papers (2022-03-27T20:59:40Z)
- GANSER: A Self-supervised Data Augmentation Framework for EEG-based Emotion Recognition [15.812231441367022]
We propose a novel data augmentation framework, namely Generative Adversarial Network-based Self-supervised Data Augmentation (GANSER).
As the first to combine adversarial training with self-supervised learning for EEG-based emotion recognition, the proposed framework can generate high-quality simulated EEG samples.
A transformation function is employed to mask parts of EEG signals and force the generator to synthesize potential EEG signals based on the remaining parts.
arXiv Detail & Related papers (2021-09-07T14:42:55Z)
- Improved Speech Emotion Recognition using Transfer Learning and Spectrogram Augmentation [56.264157127549446]
Speech emotion recognition (SER) is a challenging task that plays a crucial role in natural human-computer interaction.
One of the main challenges in SER is data scarcity.
We propose a transfer learning strategy combined with spectrogram augmentation.
arXiv Detail & Related papers (2021-08-05T10:39:39Z)
- ScalingNet: extracting features from raw EEG data for emotion recognition [4.047737925426405]
We propose a novel convolutional layer that adaptively extracts effective data-driven spectrogram-like features from raw EEG signals.
The proposed neural network architecture based on the scaling layer, referred to as ScalingNet, achieves state-of-the-art results on the established DEAP benchmark dataset.
arXiv Detail & Related papers (2021-02-07T08:54:27Z)
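As a point of comparison for the simpler baselines mentioned above (the Gaussian-noise augmentation compared against in the main paper and the random-variance Gaussian noise used in the EMG entry), here is a generic sketch of noise-based augmentation for feature vectors. It is an illustrative assumption written in plain NumPy, not code from any of the cited papers; the copies and sigma_range parameters are hypothetical.
```python
# Generic Gaussian-noise augmentation with a randomly drawn std per copy
# (illustrative sketch; not taken from any of the cited papers).
import numpy as np

def gaussian_noise_augment(x, y, copies=2, sigma_range=(0.01, 0.1), seed=0):
    """Create noisy copies of each sample; the noise std is drawn per copy."""
    rng = np.random.default_rng(seed)
    x_parts, y_parts = [x], [y]
    for _ in range(copies):
        sigma = rng.uniform(*sigma_range)
        x_parts.append(x + rng.normal(0.0, sigma, size=x.shape))
        y_parts.append(y)
    return np.concatenate(x_parts, axis=0), np.concatenate(y_parts, axis=0)
```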