GANSER: A Self-supervised Data Augmentation Framework for EEG-based
Emotion Recognition
- URL: http://arxiv.org/abs/2109.03124v2
- Date: Wed, 8 Sep 2021 01:00:06 GMT
- Title: GANSER: A Self-supervised Data Augmentation Framework for EEG-based
Emotion Recognition
- Authors: Zhi Zhang and Sheng-hua Zhong and Yan Liu
- Abstract summary: We propose a novel data augmentation framework, namely Generative Adversarial Network-based Self-supervised Data Augmentation (GANSER).
As the first to combine adversarial training with self-supervised learning for EEG-based emotion recognition, the proposed framework can generate high-quality simulated EEG samples.
A transformation function is employed to mask parts of EEG signals and force the generator to synthesize potential EEG signals based on the remaining parts.
- Score: 15.812231441367022
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The data scarcity problem in Electroencephalography (EEG) based affective
computing results in difficulty in building effective models with high
accuracy and stability using machine learning algorithms, especially deep
learning models. Data augmentation has recently achieved considerable
performance improvement for deep learning models: increased accuracy,
stability, and reduced over-fitting. In this paper, we propose a novel data
augmentation framework, namely Generative Adversarial Network-based
Self-supervised Data Augmentation (GANSER). As the first to combine adversarial
training with self-supervised learning for EEG-based emotion recognition, the
proposed framework can generate high-quality and high-diversity simulated EEG
samples. In particular, we utilize adversarial training to learn an EEG
generator and force the generated EEG signals to approximate the distribution
of real samples, ensuring the quality of augmented samples. A transformation
function is employed to mask parts of EEG signals and force the generator to
synthesize potential EEG signals based on the remaining parts, to produce a
wide variety of samples. The masking probability used during the transformation is
introduced as prior knowledge to guide the classifier to extract distinguishable
features from simulated EEG signals and to generalize to the augmented sample
space. Finally, extensive experiments demonstrate that the proposed method
improves emotion recognition performance and achieves state-of-the-art results.
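To make the masking-based self-supervision concrete, the following is a minimal PyTorch sketch of a masking transformation and one adversarial completion step in the spirit of the abstract. The module names (EEGGenerator, EEGDiscriminator), tensor shapes, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def mask_transform(x: torch.Tensor, p: float):
    """Zero out each time sample with probability p (hypothetical masking transform).

    x: (batch, channels, time) EEG windows. Returns the masked signal and the mask.
    """
    mask = (torch.rand(x.size(0), 1, x.size(2), device=x.device) > p).float()
    return x * mask, mask

class EEGGenerator(nn.Module):
    """Toy 1-D convolutional generator that completes masked EEG signals."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(64, channels, kernel_size=7, padding=3),
        )
    def forward(self, x_masked):
        return self.net(x_masked)

class EEGDiscriminator(nn.Module):
    """Toy discriminator scoring whether an EEG window looks real."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 64, kernel_size=7, stride=2, padding=3), nn.LeakyReLU(0.2),
            nn.Conv1d(64, 64, kernel_size=7, stride=2, padding=3), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, 1),
        )
    def forward(self, x):
        return self.net(x)

def train_step(g, d, opt_g, opt_d, x_real, p=0.3):
    """One adversarial step: D separates real from completed windows, G tries to fool D."""
    bce = nn.BCEWithLogitsLoss()
    ones = torch.ones(x_real.size(0), 1, device=x_real.device)
    zeros = torch.zeros(x_real.size(0), 1, device=x_real.device)

    x_masked, mask = mask_transform(x_real, p)
    # Keep the observed samples and let the generator fill in only the masked parts.
    x_fake = x_masked + (1.0 - mask) * g(x_masked)

    # Discriminator update.
    opt_d.zero_grad()
    loss_d = bce(d(x_real), ones) + bce(d(x_fake.detach()), zeros)
    loss_d.backward(); opt_d.step()

    # Generator update: the completed window should look real.
    opt_g.zero_grad()
    loss_g = bce(d(x_fake), ones)
    loss_g.backward(); opt_g.step()
    return x_fake.detach()  # augmented sample, tagged with masking probability p
```

In a downstream setup along the lines the abstract describes, a classifier would be trained on both the real windows and these completed windows, with the masking probability p carried along as the prior-knowledge signal; that classifier is omitted here for brevity.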
Related papers
- Enhancing EEG Signal Generation through a Hybrid Approach Integrating Reinforcement Learning and Diffusion Models [6.102274021710727]
This study introduces an innovative approach to the synthesis of Electroencephalogram (EEG) signals by integrating diffusion models with reinforcement learning.
Our methodology enhances the generation of EEG signals with detailed temporal and spectral features, enriching the authenticity and diversity of synthetic datasets.
arXiv Detail & Related papers (2024-09-14T07:22:31Z) - DetDiffusion: Synergizing Generative and Perceptive Models for Enhanced Data Generation and Perception [78.26734070960886]
Current perceptive models heavily depend on resource-intensive datasets.
We introduce perception-aware loss (P.A. loss) through segmentation, improving both quality and controllability.
Our method customizes data augmentation by extracting and utilizing perception-aware attribute (P.A. Attr) during generation.
arXiv Detail & Related papers (2024-03-20T04:58:03Z) - EEGFormer: Towards Transferable and Interpretable Large-Scale EEG
Foundation Model [39.363511340878624]
We present a novel EEG foundation model, namely EEGFormer, pretrained on large-scale compound EEG data.
To validate the effectiveness of our model, we extensively evaluate it on various downstream tasks and assess the performance under different transfer settings.
arXiv Detail & Related papers (2024-01-11T17:36:24Z) - DGSD: Dynamical Graph Self-Distillation for EEG-Based Auditory Spatial
Attention Detection [49.196182908826565]
Auditory Attention Detection (AAD) aims to detect the target speaker from brain signals in a multi-speaker environment.
Current approaches primarily rely on traditional convolutional neural networks designed for processing Euclidean data such as images.
This paper proposes a dynamical graph self-distillation (DGSD) approach for AAD, which does not require speech stimuli as input.
arXiv Detail & Related papers (2023-09-07T13:43:46Z) - EEG Synthetic Data Generation Using Probabilistic Diffusion Models [0.0]
This study proposes an advanced methodology for data augmentation: generating synthetic EEG data using denoising diffusion probabilistic models.
The synthetic data are generated from electrode-frequency distribution maps (EFDMs) of emotionally labeled EEG recordings.
The proposed methodology has potential implications for the broader field of neuroscience research by enabling the creation of large, publicly available synthetic EEG datasets.
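For readers unfamiliar with denoising diffusion probabilistic models, the sketch below shows the generic noise-prediction training objective applied to EFDM-like 2-D maps; it is the textbook DDPM loss under assumed shapes and noise schedule, not the cited paper's code, and `denoiser` stands for any suitable network.

```python
import torch
import torch.nn.functional as F

def ddpm_loss(denoiser, x0, T=1000):
    """Standard DDPM training objective (noise prediction) on EFDM-like 2-D maps.

    x0: (batch, 1, n_electrodes, n_freqs) hypothetical electrode-frequency maps.
    denoiser(x_t, t) is any network predicting the added noise.
    """
    betas = torch.linspace(1e-4, 0.02, T, device=x0.device)
    alphas_bar = torch.cumprod(1.0 - betas, dim=0)
    t = torch.randint(0, T, (x0.size(0),), device=x0.device)
    a_bar = alphas_bar[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise  # forward diffusion
    return F.mse_loss(denoiser(x_t, t), noise)
```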
arXiv Detail & Related papers (2023-03-06T12:03:22Z) - EEG2Vec: Learning Affective EEG Representations via Variational
Autoencoders [27.3162026528455]
We explore whether representing neural data, recorded in response to emotional stimuli, in a latent vector space can serve both to predict emotional states and to generate synthetic EEG data.
We propose a conditional variational autoencoder based framework, EEG2Vec, to learn generative-discriminative representations from EEG data.
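As a rough illustration of the conditional-VAE idea behind EEG2Vec (not the authors' architecture), a minimal PyTorch sketch over flattened EEG feature vectors might look as follows; dimensions and layer sizes are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalVAE(nn.Module):
    """Minimal conditional VAE over flattened EEG feature vectors (illustrative only)."""
    def __init__(self, in_dim, n_classes, z_dim=32, hidden=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim + n_classes, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim + n_classes, hidden), nn.ReLU(),
                                 nn.Linear(hidden, in_dim))

    def forward(self, x, y_onehot):
        h = self.enc(torch.cat([x, y_onehot], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
        x_rec = self.dec(torch.cat([z, y_onehot], dim=1))
        recon = F.mse_loss(x_rec, x)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return x_rec, recon + kl
```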
arXiv Detail & Related papers (2022-07-16T19:25:29Z) - Decision Forest Based EMG Signal Classification with Low Volume Dataset
Augmented with Random Variance Gaussian Noise [51.76329821186873]
We produce a model that can classify six different hand gestures from a limited number of samples and generalizes well to a wider audience.
We appeal to a set of more elementary methods, such as the use of random bounds on a signal, and aim to show the power these methods can carry in an online setting.
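The noise-based augmentation named in this entry's title is simple enough to sketch; the snippet below adds Gaussian noise with a randomly drawn variance to each copied sample and fits a random forest as a stand-in for the paper's decision forest. Array shapes, the sigma range, and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def augment_with_gaussian_noise(X, y, copies=5, sigma_range=(0.01, 0.1), rng=None):
    """Append noisy copies of each sample, drawing a random noise scale per copy."""
    rng = np.random.default_rng(rng)
    X_aug, y_aug = [X], [y]
    for _ in range(copies):
        sigma = rng.uniform(*sigma_range)                 # random variance per copy
        X_aug.append(X + rng.normal(0.0, sigma, X.shape))
        y_aug.append(y)
    return np.concatenate(X_aug), np.concatenate(y_aug)

# Hypothetical usage with an EMG feature matrix:
# X_train, y_train = ...  # (n_samples, n_features), (n_samples,)
# X_big, y_big = augment_with_gaussian_noise(X_train, y_train)
# clf = RandomForestClassifier(n_estimators=200).fit(X_big, y_big)
```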
arXiv Detail & Related papers (2022-06-29T23:22:18Z) - Data augmentation for learning predictive models on EEG: a systematic
comparison [79.84079335042456]
The use of deep learning for electroencephalography (EEG) classification tasks has grown rapidly in recent years.
However, deep learning for EEG classification has been limited by the relatively small size of EEG datasets.
Data augmentation has been a key ingredient to obtain state-of-the-art performances across applications such as computer vision or speech.
arXiv Detail & Related papers (2022-06-29T09:18:15Z) - Improved Speech Emotion Recognition using Transfer Learning and
Spectrogram Augmentation [56.264157127549446]
Speech emotion recognition (SER) is a challenging task that plays a crucial role in natural human-computer interaction.
One of the main challenges in SER is data scarcity.
We propose a transfer learning strategy combined with spectrogram augmentation.
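The entry does not specify the exact spectrogram augmentation, so the sketch below uses SpecAugment-style frequency and time masking as one common instantiation; mask counts and widths are arbitrary placeholders rather than the cited paper's settings.

```python
import numpy as np

def spec_augment(spec, n_freq_masks=2, n_time_masks=2, max_f=8, max_t=20, rng=None):
    """SpecAugment-style masking: zero random frequency bands and time spans.

    spec: (n_mels, n_frames) log-mel spectrogram. Returns an augmented copy.
    """
    rng = np.random.default_rng(rng)
    out = spec.copy()
    n_mels, n_frames = out.shape
    for _ in range(n_freq_masks):
        f = rng.integers(0, max_f + 1)
        f0 = rng.integers(0, max(1, n_mels - f))
        out[f0:f0 + f, :] = 0.0
    for _ in range(n_time_masks):
        t = rng.integers(0, max_t + 1)
        t0 = rng.integers(0, max(1, n_frames - t))
        out[:, t0:t0 + t] = 0.0
    return out
```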
arXiv Detail & Related papers (2021-08-05T10:39:39Z) - A Novel Transferability Attention Neural Network Model for EEG Emotion
Recognition [51.203579838210885]
We propose a transferable attention neural network (TANN) for EEG emotion recognition.
TANN learns emotional discriminative information by adaptively highlighting transferable EEG brain-region data and samples.
This is implemented by measuring the outputs of multiple brain-region-level discriminators and a single sample-level discriminator.
arXiv Detail & Related papers (2020-09-21T02:42:30Z) - Data Augmentation for Enhancing EEG-based Emotion Recognition with Deep
Generative Models [13.56090099952884]
We propose three methods for augmenting EEG training data to enhance the performance of emotion recognition models.
For the full usage strategy, all of the generated data are added to the training dataset without assessing the quality of the generated samples.
The experimental results demonstrate that the augmented training datasets produced by our methods enhance the performance of EEG-based emotion recognition models.
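The "full usage" strategy described above amounts to appending every generated sample to the training set without filtering; a minimal sketch with hypothetical array names:

```python
import numpy as np

def full_usage_augment(X_train, y_train, X_gen, y_gen):
    """'Full usage' strategy: append every generated sample, with no quality filtering."""
    return np.concatenate([X_train, X_gen]), np.concatenate([y_train, y_gen])
```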
arXiv Detail & Related papers (2020-06-04T21:23:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.