CSLP-AE: A Contrastive Split-Latent Permutation Autoencoder Framework
for Zero-Shot Electroencephalography Signal Conversion
- URL: http://arxiv.org/abs/2311.07788v1
- Date: Mon, 13 Nov 2023 22:46:43 GMT
- Title: CSLP-AE: A Contrastive Split-Latent Permutation Autoencoder Framework
for Zero-Shot Electroencephalography Signal Conversion
- Authors: Anders Vestergaard Nørskov, Alexander Neergaard Zahid and Morten Mørup
- Abstract summary: A key aim in EEG analysis is to extract the underlying neural activation (content) as well as to account for the individual subject variability (style). Inspired by recent advancements in voice conversion technologies, we propose a novel contrastive split-latent permutation autoencoder (CSLP-AE) framework that directly optimizes for EEG conversion.
- Score: 49.1574468325115
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Electroencephalography (EEG) is a prominent non-invasive neuroimaging
technique providing insights into brain function. Unfortunately, EEG data
exhibit a high degree of noise and variability across subjects, hampering
generalizable signal extraction. Therefore, a key aim in EEG analysis is to
extract the underlying neural activation (content) as well as to account for
the individual subject variability (style). We hypothesize that the ability to
convert EEG signals between tasks and subjects requires the extraction of
latent representations accounting for content and style. Inspired by recent
advancements in voice conversion technologies, we propose a novel contrastive
split-latent permutation autoencoder (CSLP-AE) framework that directly
optimizes for EEG conversion. Importantly, the latent representations are
guided using contrastive learning to promote the latent splits to explicitly
represent subject (style) and task (content). We contrast CSLP-AE to
conventional supervised, unsupervised (AE), and self-supervised (contrastive
learning) training and find that the proposed approach provides favorable
generalizable characterizations of subject and task. Importantly, the procedure
also enables zero-shot conversion between unseen subjects. While the present
work only considers conversion of EEG, the proposed CSLP-AE provides a general
framework for signal conversion and extraction of content (task activation) and
style (subject variability) components of general interest for the modeling and
analysis of biological signals.
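The core split-latent permutation idea from the abstract can be sketched in a few lines of numpy. The linear encoders, dimensions, and loss forms below are illustrative placeholders (the paper uses deep networks and a full training procedure), a minimal sketch rather than the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; all weight matrices are untrained random placeholders.
d_in, d_style, d_content = 16, 4, 4
W_style = rng.normal(size=(d_style, d_in)) * 0.1      # subject (style) encoder
W_content = rng.normal(size=(d_content, d_in)) * 0.1  # task (content) encoder
W_dec = rng.normal(size=(d_in, d_style + d_content)) * 0.1  # shared decoder

def encode(x):
    """Split a signal into style and content latents."""
    return W_style @ x, W_content @ x

def decode(s, c):
    """Reconstruct a signal from a (style, content) latent pair."""
    return W_dec @ np.concatenate([s, c])

def permutation_loss(x_a, x_b):
    """Latent-permutation reconstruction: x_a and x_b share the same task
    (content) but come from different subjects, so swapping their content
    latents should still reconstruct each signal."""
    s_a, c_a = encode(x_a)
    s_b, c_b = encode(x_b)
    x_a_hat = decode(s_a, c_b)  # own style, partner's content
    x_b_hat = decode(s_b, c_a)
    return np.mean((x_a - x_a_hat) ** 2) + np.mean((x_b - x_b_hat) ** 2)

def contrastive_loss(z, labels, temperature=0.5):
    """A minimal NT-Xent-style loss pulling together latents sharing a label
    (same subject for style latents, same task for content latents)."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    loss, n = 0.0, len(z)
    for i in range(n):
        pos = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not pos:
            continue
        neg_and_pos = [j for j in range(n) if j != i]
        denom = np.sum(np.exp(sim[i, neg_and_pos]))
        loss += -np.log(np.sum(np.exp(sim[i, pos])) / denom)
    return loss / n

# Two signals assumed to share a task but come from different subjects.
x_a, x_b = rng.normal(size=d_in), rng.normal(size=d_in)
print(permutation_loss(x_a, x_b))  # untrained weights, so just a positive scalar
```

Zero-shot conversion then amounts to decoding an unseen subject's style latent together with another signal's content latent; in the paper both the permutation and contrastive terms are optimized jointly.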
Related papers
- Subject-Adaptive Transfer Learning Using Resting State EEG Signals for Cross-Subject EEG Motor Imagery Classification [10.487161620381785]
We propose a novel subject-adaptive transfer learning strategy that utilizes resting state (RS) EEG signals to adapt models to unseen subject data.
The proposed method achieves state-of-the-art accuracy on three public benchmarks, demonstrating the effectiveness of our method in cross-subject EEG MI classification.
arXiv Detail & Related papers (2024-05-17T20:36:04Z)
- Two in One Go: Single-stage Emotion Recognition with Decoupled Subject-context Transformer [78.35816158511523]
We present a single-stage emotion recognition approach, employing a Decoupled Subject-Context Transformer (DSCT) for simultaneous subject localization and emotion classification.
We evaluate our single-stage framework on two widely used context-aware emotion recognition datasets, CAER-S and EMOTIC.
arXiv Detail & Related papers (2024-04-26T07:30:32Z)
- Agent-driven Generative Semantic Communication with Cross-Modality and Prediction [57.335922373309074]
We propose a novel agent-driven generative semantic communication framework based on reinforcement learning.
In this work, we develop an agent-assisted semantic encoder with cross-modality capability, which can track semantic changes and channel conditions to perform adaptive semantic extraction and sampling.
The effectiveness of the designed models has been verified using the UA-DETRAC dataset, demonstrating the performance gains of the overall A-GSC framework.
arXiv Detail & Related papers (2024-04-10T13:24:27Z)
- EEG decoding with conditional identification information [7.873458431535408]
Decoding EEG signals is crucial for unraveling the human brain and advancing brain-computer interfaces.
Traditional machine learning algorithms have been hindered by the high noise levels and inherent inter-person variations in EEG signals.
Recent advances in deep neural networks (DNNs) have shown promise, owing to their advanced nonlinear modeling capabilities.
arXiv Detail & Related papers (2024-03-21T13:38:59Z)
- S^2Former-OR: Single-Stage Bi-Modal Transformer for Scene Graph Generation in OR [50.435592120607815]
Scene graph generation (SGG) of surgical procedures is crucial in enhancing holistically cognitive intelligence in the operating room (OR).
Previous works have primarily relied on multi-stage learning, where the generated semantic scene graphs depend on intermediate processes with pose estimation and object detection.
In this study, we introduce a novel single-stage bi-modal transformer framework for SGG in the OR, termed S2Former-OR.
arXiv Detail & Related papers (2024-02-22T11:40:49Z)
- Multi-Source Domain Adaptation with Transformer-based Feature Generation for Subject-Independent EEG-based Emotion Recognition [0.5439020425819]
We propose a multi-source domain adaptation approach with a transformer-based feature generator (MSDA-TF) designed to leverage information from multiple sources.
During the adaptation process, we group the source subjects based on correlation values and aim to align the moments of the target subject with each source as well as within the sources.
MSDA-TF is validated on the SEED dataset and is shown to yield promising results.
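The moment-alignment step described above can be illustrated with first- and second-moment matching in numpy. The per-feature shift-and-scale form below is an assumption for illustration, not MSDA-TF's exact procedure:

```python
import numpy as np

def align_moments(source, target):
    """Shift and scale source features so their first two moments
    (mean, std) match the target's: a common building block in
    multi-source domain adaptation (illustrative, not MSDA-TF's exact
    formulation)."""
    mu_s, sd_s = source.mean(axis=0), source.std(axis=0) + 1e-8
    mu_t, sd_t = target.mean(axis=0), target.std(axis=0) + 1e-8
    return (source - mu_s) / sd_s * sd_t + mu_t

rng = np.random.default_rng(0)
src = rng.normal(loc=2.0, scale=3.0, size=(100, 8))  # one source subject
tgt = rng.normal(loc=0.0, scale=1.0, size=(50, 8))   # target subject
aligned = align_moments(src, tgt)
```

After alignment, `aligned` has (to numerical precision) the target subject's per-feature mean and standard deviation.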
arXiv Detail & Related papers (2024-01-04T16:38:47Z)
- DGSD: Dynamical Graph Self-Distillation for EEG-Based Auditory Spatial Attention Detection [49.196182908826565]
Auditory Attention Detection (AAD) aims to detect the target speaker from brain signals in a multi-speaker environment.
Current approaches primarily rely on traditional convolutional neural networks designed for processing Euclidean data such as images.
This paper proposes a dynamical graph self-distillation (DGSD) approach for AAD, which does not require speech stimuli as input.
arXiv Detail & Related papers (2023-09-07T13:43:46Z)
- End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z)
- EEG-ConvTransformer for Single-Trial EEG based Visual Stimuli Classification [5.076419064097734]
This work introduces an EEG-ConvTransformer network based on multi-headed self-attention.
It achieves improved classification accuracy over the state-of-the-art techniques across five different visual stimuli classification tasks.
arXiv Detail & Related papers (2021-07-08T17:22:04Z)
- Subject Independent Emotion Recognition using EEG Signals Employing Attention Driven Neural Networks [2.76240219662896]
A novel deep learning framework capable of subject-independent emotion recognition is presented.
A convolutional neural network (CNN) with an attention mechanism performs the task.
The proposed approach has been validated using publicly available datasets.
arXiv Detail & Related papers (2021-06-07T09:41:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences.