Multi-Source Domain Adaptation with Transformer-based Feature Generation for Subject-Independent EEG-based Emotion Recognition
- URL: http://arxiv.org/abs/2401.02344v1
- Date: Thu, 4 Jan 2024 16:38:47 GMT
- Title: Multi-Source Domain Adaptation with Transformer-based Feature Generation for Subject-Independent EEG-based Emotion Recognition
- Authors: Shadi Sartipi, Mujdat Cetin
- Abstract summary: We propose a multi-source domain adaptation approach with a transformer-based feature generator (MSDA-TF) designed to leverage information from multiple sources.
During the adaptation process, we group the source subjects based on correlation values and aim to align the moments of the target subject with each source as well as within the sources.
MSDA-TF is validated on the SEED dataset and is shown to yield promising results.
- Score: 0.5439020425819
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Although deep learning-based algorithms have demonstrated excellent
performance in automated emotion recognition via electroencephalogram (EEG)
signals, variations across brain signal patterns of individuals can diminish
the model's effectiveness when applied across different subjects. While
transfer learning techniques have exhibited promising outcomes, they still
encounter challenges related to inadequate feature representations and may
overlook the fact that source subjects themselves can possess distinct
characteristics. In this work, we propose a multi-source domain adaptation
approach with a transformer-based feature generator (MSDA-TF) designed to
leverage information from multiple sources. The proposed feature generator
retains convolutional layers to capture shallow spatial, temporal, and spectral
EEG data representations, while self-attention mechanisms extract global
dependencies within these features. During the adaptation process, we group the
source subjects based on correlation values and aim to align the moments of the
target subject with each source as well as within the sources. MSDA-TF is
validated on the SEED dataset and is shown to yield promising results.
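The adaptation step described above (grouping source subjects by correlation with the target, then aligning first and second moments) can be sketched in plain Python. This is an illustrative toy, not the authors' implementation: the function names, the Pearson-correlation grouping criterion, and the fixed `threshold` are assumptions for the sketch, and the real method operates on learned feature representations rather than raw vectors.

```python
import statistics


def pearson_corr(a, b):
    """Pearson correlation between two equal-length feature vectors."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0


def group_sources(sources, target, threshold=0.5):
    """Split source subjects into high- and low-correlation groups w.r.t. the target."""
    high = [s for s in sources if pearson_corr(s, target) >= threshold]
    low = [s for s in sources if pearson_corr(s, target) < threshold]
    return high, low


def moment_discrepancy(src, tgt):
    """Squared gap between the first two moments (mean and variance) of two vectors."""
    d_mean = (statistics.fmean(src) - statistics.fmean(tgt)) ** 2
    d_var = (statistics.pvariance(src) - statistics.pvariance(tgt)) ** 2
    return d_mean + d_var
```

In a training loop, discrepancies like `moment_discrepancy` would be summed over target-to-group and within-group pairs and minimized as an alignment loss alongside the classification objective.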
Related papers
- Automatic Classification of Sleep Stages from EEG Signals Using Riemannian Metrics and Transformer Networks [6.404789669795639]
In sleep medicine, assessing the evolution of a subject's sleep often involves the costly manual scoring of electroencephalographic (EEG) signals.
We present a novel way of integrating learned signal-wise features into said matrices without sacrificing their Symmetric Positive Definite (SPD) nature.
arXiv Detail & Related papers (2024-10-18T06:49:52Z)
- Physics-informed and Unsupervised Riemannian Domain Adaptation for Machine Learning on Heterogeneous EEG Datasets [53.367212596352324]
We propose an unsupervised approach leveraging EEG signal physics.
We map EEG channels to fixed positions, enabling source-free domain adaptation.
Our method demonstrates robust performance in brain-computer interface (BCI) tasks and potential biomarker applications.
arXiv Detail & Related papers (2024-03-07T16:17:33Z)
- Subject-Based Domain Adaptation for Facial Expression Recognition [51.10374151948157]
Adapting a deep learning model to a specific target individual is a challenging facial expression recognition task.
This paper introduces a new MSDA method for subject-based domain adaptation in FER.
It efficiently leverages information from multiple source subjects to adapt a deep FER model to a single target individual.
arXiv Detail & Related papers (2023-12-09T18:40:37Z)
- CSLP-AE: A Contrastive Split-Latent Permutation Autoencoder Framework for Zero-Shot Electroencephalography Signal Conversion [49.1574468325115]
A key aim in EEG analysis is to extract the underlying neural activation (content) as well as to account for individual subject variability (style).
Inspired by recent advancements in voice conversion technologies, we propose a novel contrastive split-latent permutation autoencoder (CSLP-AE) framework that directly optimizes for EEG conversion.
arXiv Detail & Related papers (2023-11-13T22:46:43Z)
- DGSD: Dynamical Graph Self-Distillation for EEG-Based Auditory Spatial Attention Detection [49.196182908826565]
Auditory Attention Detection (AAD) aims to detect the target speaker from brain signals in a multi-speaker environment.
Current approaches primarily rely on traditional convolutional neural networks designed for processing Euclidean data such as images.
This paper proposes a dynamical graph self-distillation (DGSD) approach for AAD, which does not require speech stimuli as input.
arXiv Detail & Related papers (2023-09-07T13:43:46Z)
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
- Exploiting Multiple EEG Data Domains with Adversarial Learning [20.878816519635304]
We propose an adversarial inference approach to learn data-source invariant representations in this context.
We unify EEG recordings from different source domains (i.e., the emotion recognition datasets SEED, SEED-IV, DEAP, and DREAMER).
arXiv Detail & Related papers (2022-04-16T11:09:20Z)
- EEGminer: Discovering Interpretable Features of Brain Activity with Learnable Filters [72.19032452642728]
We propose a novel differentiable EEG decoding pipeline consisting of learnable filters and a pre-determined feature extraction module.
We demonstrate the utility of our model towards emotion recognition from EEG signals on the SEED dataset and on a new EEG dataset of unprecedented size.
The discovered features align with previous neuroscience studies and offer new insights, such as marked differences in the functional connectivity profile between left and right temporal areas during music listening.
arXiv Detail & Related papers (2021-10-19T14:22:04Z)
- GANSER: A Self-supervised Data Augmentation Framework for EEG-based Emotion Recognition [15.812231441367022]
We propose a novel data augmentation framework, namely Generative Adversarial Network-based Self-supervised Data Augmentation (GANSER).
As the first to combine adversarial training with self-supervised learning for EEG-based emotion recognition, the proposed framework can generate high-quality simulated EEG samples.
A transformation function is employed to mask parts of EEG signals and force the generator to synthesize potential EEG signals based on the remaining parts.
arXiv Detail & Related papers (2021-09-07T14:42:55Z) - MS-MDA: Multisource Marginal Distribution Adaptation for Cross-subject
and Cross-session EEG Emotion Recognition [14.065932956210336]
We propose a multi-source marginal distribution adaptation (MS-MDA) for EEG emotion recognition.
First, we assume that different EEG data share the same low-level features; we then construct independent branches to perform one-to-one domain adaptation and extract domain-specific features.
Experimental results show that the MS-MDA outperforms the comparison methods and state-of-the-art models in cross-session and cross-subject transfer scenarios.
arXiv Detail & Related papers (2021-07-16T07:19:54Z) - Subject Independent Emotion Recognition using EEG Signals Employing
Attention Driven Neural Networks [2.76240219662896]
A novel deep learning framework capable of doing subject-independent emotion recognition is presented.
A convolutional neural network (CNN) with attention framework is presented for performing the task.
The proposed approach has been validated using publicly available datasets.
arXiv Detail & Related papers (2021-06-07T09:41:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all information) and is not responsible for any consequences of its use.