Subject Independent Emotion Recognition using EEG Signals Employing Attention Driven Neural Networks
- URL: http://arxiv.org/abs/2106.03461v3
- Date: Tue, 21 Dec 2021 04:05:02 GMT
- Title: Subject Independent Emotion Recognition using EEG Signals Employing Attention Driven Neural Networks
- Authors: Arjun, Aniket Singh Rajpoot, Mahesh Raveendranatha Panicker
- Abstract summary: A novel deep learning framework capable of performing subject-independent emotion recognition is presented.
A convolutional neural network (CNN) with an attention framework is presented for performing the task.
The proposed approach has been validated using publicly available datasets.
- Score: 2.76240219662896
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In the recent past, deep learning-based approaches have significantly
improved classification accuracy compared to classical signal processing and
machine learning frameworks. However, most of these were subject-dependent
studies that could not generalize to subject-independent tasks due to the
inter-subject variability present in EEG data. In this work, a novel deep
learning framework capable of subject-independent emotion recognition is
presented, consisting of two parts. First, an unsupervised Long Short-Term
Memory (LSTM) autoencoder with channel attention is proposed to obtain a
subject-invariant latent vector subspace, i.e., the intrinsic variables present
in the EEG data of each individual. Second, a convolutional neural network
(CNN) with attention is presented to perform subject-independent emotion
recognition on the encoded lower-dimensional latent representations obtained
from the proposed LSTM autoencoder. Through the attention mechanism, the
proposed approach highlights the significant time segments of the EEG signal
that contribute to the emotion under consideration, as validated by the
results. The proposed approach has been validated on publicly available EEG
datasets: DEAP, SEED and CHB-MIT. The proposed end-to-end deep learning
framework removes the need for hand-engineered features and provides a single,
comprehensive, task-agnostic EEG analysis tool capable of performing various
kinds of EEG analysis on subject-independent data.
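To make the two-stage design concrete, below is a minimal PyTorch sketch of an LSTM autoencoder with channel attention feeding an attention-driven CNN classifier. It follows the structure the abstract describes, but all layer sizes, the exact attention formulations, and tensor shapes are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Scores each EEG channel and reweights it (illustrative formulation)."""
    def __init__(self, n_channels):
        super().__init__()
        self.score = nn.Linear(n_channels, n_channels)

    def forward(self, x):                       # x: (batch, time, channels)
        # Attention weights derived from time-averaged channel activity.
        w = torch.softmax(self.score(x.mean(dim=1)), dim=-1)
        return x * w.unsqueeze(1)               # reweight channels per step

class LSTMAutoencoder(nn.Module):
    """Unsupervised stage: compress EEG into a lower-dimensional latent space."""
    def __init__(self, n_channels=32, latent_dim=16):
        super().__init__()
        self.attn = ChannelAttention(n_channels)
        self.encoder = nn.LSTM(n_channels, latent_dim, batch_first=True)
        self.decoder = nn.LSTM(latent_dim, n_channels, batch_first=True)

    def forward(self, x):                       # x: (batch, time, channels)
        z, _ = self.encoder(self.attn(x))       # latent sequence
        recon, _ = self.decoder(z)              # trained to reconstruct x
        return z, recon

class AttentionCNNClassifier(nn.Module):
    """Supervised stage: classify emotions from the encoded latent sequence."""
    def __init__(self, latent_dim=16, n_classes=2):
        super().__init__()
        self.conv = nn.Conv1d(latent_dim, 32, kernel_size=5, padding=2)
        self.time_attn = nn.Linear(32, 1)       # scores each time segment
        self.fc = nn.Linear(32, n_classes)

    def forward(self, z):                       # z: (batch, time, latent_dim)
        h = torch.relu(self.conv(z.transpose(1, 2))).transpose(1, 2)
        a = torch.softmax(self.time_attn(h), dim=1)   # (batch, time, 1)
        return self.fc((a * h).sum(dim=1)), a.squeeze(-1)

# Usage: pretrain the autoencoder with an MSE reconstruction loss, then
# train the classifier on the encoder's latent sequences. The returned
# attention weights indicate which time segments drove the prediction.
x = torch.randn(8, 128, 32)                     # 8 trials, 128 steps, 32 channels
z, recon = LSTMAutoencoder()(x)
logits, attn_weights = AttentionCNNClassifier()(z)
```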
Related papers
- Joint Contrastive Learning with Feature Alignment for Cross-Corpus EEG-based Emotion Recognition [2.1645626994550664]
We propose a novel Joint Contrastive learning framework with Feature Alignment (JCFA) to address cross-corpus EEG-based emotion recognition.
In the pre-training stage, a joint domain contrastive learning strategy is introduced to characterize generalizable time-frequency representations of EEG signals.
In the fine-tuning stage, JCFA is refined in conjunction with downstream tasks, where the structural connections among brain electrodes are considered.
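JCFA's exact objective is not spelled out here; for reference, the core ingredient of such contrastive pre-training is an InfoNCE-style loss that pulls two views of the same EEG segment together. A minimal sketch, with the temperature value and the batch-as-negatives scheme as assumptions:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE-style contrastive loss between two views of the same segments.

    z1, z2: (batch, dim) embeddings of the same EEG segments under two
    views (e.g., time-domain and frequency-domain encoders). Matching
    rows are positives; every other pair in the batch is a negative.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature        # (batch, batch) similarities
    targets = torch.arange(z1.size(0))        # positives lie on the diagonal
    return F.cross_entropy(logits, targets)
```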
arXiv Detail & Related papers (2024-04-15T08:21:17Z)
- Physics-informed and Unsupervised Riemannian Domain Adaptation for Machine Learning on Heterogeneous EEG Datasets [53.367212596352324]
We propose an unsupervised approach leveraging EEG signal physics.
We map EEG channels to fixed positions using field-based, source-free domain adaptation.
Our method demonstrates robust performance in brain-computer interface (BCI) tasks and potential biomarker applications.
arXiv Detail & Related papers (2024-03-07T16:17:33Z)
- Multi-Source Domain Adaptation with Transformer-based Feature Generation for Subject-Independent EEG-based Emotion Recognition [0.5439020425819]
We propose a multi-source domain adaptation approach with a transformer-based feature generator (MSDA-TF) designed to leverage information from multiple sources.
During the adaptation process, we group the source subjects based on correlation values and aim to align the moments of the target subject with each source as well as within the sources.
MSDA-TF is validated on the SEED dataset and is shown to yield promising results.
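As an illustration of the moment-matching idea (not MSDA-TF's exact formulation), a simple penalty can align the first two moments of source and target feature batches:

```python
import torch

def moment_alignment_loss(f_src, f_tgt):
    """Penalize mismatch in mean and variance between source-subject and
    target-subject feature batches.

    f_src, f_tgt: (batch, dim) features from a feature generator
    (illustrative; not the paper's exact objective).
    """
    mean_gap = (f_src.mean(0) - f_tgt.mean(0)).pow(2).sum()
    var_gap = (f_src.var(0) - f_tgt.var(0)).pow(2).sum()
    return mean_gap + var_gap

# Summed over every source group, a term like this would be minimized
# jointly with the classification loss on the labeled source subjects.
```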
arXiv Detail & Related papers (2024-01-04T16:38:47Z)
- CSLP-AE: A Contrastive Split-Latent Permutation Autoencoder Framework for Zero-Shot Electroencephalography Signal Conversion [49.1574468325115]
A key aim in EEG analysis is to extract the underlying neural activation (content) as well as to account for the individual subject variability (style).
Inspired by recent advancements in voice conversion technologies, we propose a novel contrastive split-latent permutation autoencoder (CSLP-AE) framework that directly optimizes for EEG conversion.
arXiv Detail & Related papers (2023-11-13T22:46:43Z)
- DGSD: Dynamical Graph Self-Distillation for EEG-Based Auditory Spatial Attention Detection [49.196182908826565]
Auditory Attention Detection (AAD) aims to detect the target speaker from brain signals in a multi-speaker environment.
Current approaches primarily rely on traditional convolutional neural networks designed for processing Euclidean data such as images.
This paper proposes a dynamical graph self-distillation (DGSD) approach for AAD, which does not require speech stimuli as input.
arXiv Detail & Related papers (2023-09-07T13:43:46Z)
- Semi-Supervised Dual-Stream Self-Attentive Adversarial Graph Contrastive Learning for Cross-Subject EEG-based Emotion Recognition [19.578050094283313]
The DS-AGC framework is proposed to tackle the challenge of limited labeled data in cross-subject EEG-based emotion recognition.
The proposed model outperforms existing methods under different incomplete label conditions.
arXiv Detail & Related papers (2023-08-13T23:54:40Z)
- A Hybrid End-to-End Spatio-Temporal Attention Neural Network with Graph-Smooth Signals for EEG Emotion Recognition [1.6328866317851187]
We introduce a deep neural network that acquires interpretable representations through a hybrid structure of spatio-temporal encoding and recurrent attention blocks.
We demonstrate that our proposed architecture exceeds state-of-the-art results for emotion classification on the publicly available DEAP dataset.
arXiv Detail & Related papers (2023-07-06T15:35:14Z)
- fMRI from EEG is only Deep Learning away: the use of interpretable DL to unravel EEG-fMRI relationships [68.8204255655161]
We present an interpretable domain grounded solution to recover the activity of several subcortical regions from multichannel EEG data.
We recover individual spatial and time-frequency patterns of scalp EEG predictive of the hemodynamic signal in the subcortical nuclei.
arXiv Detail & Related papers (2022-10-23T15:11:37Z)
- EEG-Inception: An Accurate and Robust End-to-End Neural Network for EEG-based Motor Imagery Classification [123.93460670568554]
This paper proposes a novel convolutional neural network (CNN) architecture for accurate and robust EEG-based motor imagery (MI) classification.
The proposed CNN model, namely EEG-Inception, is built on the backbone of the Inception-Time network.
The proposed network is an end-to-end classifier: it takes the raw EEG signals as input and does not require complex EEG signal preprocessing.
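EEG-Inception builds on Inception-Time, whose characteristic move is applying parallel 1-D convolutions with different kernel sizes to the raw signal. A minimal sketch of one such block follows; the filter counts and kernel sizes here are assumptions:

```python
import torch
import torch.nn as nn

class InceptionBlock1D(nn.Module):
    """Parallel 1-D convolutions of different kernel sizes, concatenated,
    in the spirit of Inception-Time (sizes here are illustrative)."""
    def __init__(self, in_ch, out_ch=32):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv1d(in_ch, out_ch, kernel_size=k, padding=k // 2)
            for k in (9, 19, 39)                # multi-scale temporal filters
        ])
        self.pool_branch = nn.Sequential(       # pooling + 1x1 conv branch
            nn.MaxPool1d(kernel_size=3, stride=1, padding=1),
            nn.Conv1d(in_ch, out_ch, kernel_size=1),
        )

    def forward(self, x):                       # x: (batch, channels, time)
        outs = [b(x) for b in self.branches] + [self.pool_branch(x)]
        return torch.relu(torch.cat(outs, dim=1))

# Raw EEG in, multi-scale features out: 22 channels, 1000 samples.
feats = InceptionBlock1D(in_ch=22)(torch.randn(4, 22, 1000))
```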
arXiv Detail & Related papers (2021-01-24T19:03:10Z)
- Uncovering the structure of clinical EEG signals with self-supervised learning [64.4754948595556]
Supervised learning paradigms are often limited by the amount of labeled data that is available.
This phenomenon is particularly problematic in clinically relevant data, such as electroencephalography (EEG).
By extracting information from unlabeled data, it might be possible to reach competitive performance with deep neural networks.
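One pretext task explored in this line of work is relative positioning: a model predicts whether two EEG windows occur close together in time and therefore likely share the same underlying state. A minimal sampling sketch, with the window length and time thresholds as illustrative assumptions:

```python
import numpy as np

def sample_relative_positioning_pair(eeg, win=200, tau_pos=400, tau_neg=2000):
    """Sample a window pair from a (channels, time) EEG array.

    Label 1 if the two windows start within tau_pos samples of each other
    (assumed to share the same underlying state), 0 if they are more than
    tau_neg apart. Assumes the recording is much longer than tau_neg + win;
    all thresholds are illustrative.
    """
    n = eeg.shape[1] - win                      # last valid window start
    i = np.random.randint(n)
    if np.random.rand() < 0.5:                  # positive pair: nearby windows
        j = int(np.clip(i + np.random.randint(-tau_pos, tau_pos + 1), 0, n - 1))
        label = 1
    else:                                       # negative pair: distant windows
        candidates = [o for o in range(n) if abs(o - i) > tau_neg]
        j, label = int(np.random.choice(candidates)), 0
    return eeg[:, i:i + win], eeg[:, j:j + win], label

# A siamese encoder trained to predict this label learns features that
# can transfer to downstream clinical EEG classification tasks.
```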
arXiv Detail & Related papers (2020-07-31T14:34:47Z)
- Few-Shot Relation Learning with Attention for EEG-based Motor Imagery Classification [11.873435088539459]
Brain-Computer Interfaces (BCI) based on Electroencephalography (EEG) signals have received a lot of attention.
Motor imagery (MI) data can be used to aid rehabilitation as well as in autonomous driving scenarios.
Classification of MI signals is vital for EEG-based BCI systems.
arXiv Detail & Related papers (2020-03-03T02:34:44Z)