Latent Alignment with Deep Set EEG Decoders
- URL: http://arxiv.org/abs/2311.17968v1
- Date: Wed, 29 Nov 2023 12:40:45 GMT
- Title: Latent Alignment with Deep Set EEG Decoders
- Authors: Stylianos Bakas, Siegfried Ludwig, Dimitrios A. Adamos, Nikolaos
Laskaris, Yannis Panagakis and Stefanos Zafeiriou
- Abstract summary: We introduce the Latent Alignment method that won the Benchmarks for EEG Transfer Learning competition.
We present its formulation as a deep set applied on the set of trials from a given subject.
Our experimental results show that performing statistical distribution alignment at later stages in a deep learning model is beneficial to the classification accuracy.
- Score: 44.128689862889715
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The variability in EEG signals between different individuals poses a
significant challenge when implementing brain-computer interfaces (BCI).
Commonly proposed solutions to this problem include deep learning models, due
to their increased capacity and generalization, as well as explicit domain
adaptation techniques. Here, we introduce the Latent Alignment method that won
the Benchmarks for EEG Transfer Learning (BEETL) competition and present its
formulation as a deep set applied on the set of trials from a given subject.
Its performance is compared to recent statistical domain adaptation techniques
under various conditions. The experimental paradigms include motor imagery
(MI), oddball event-related potentials (ERP) and sleep stage classification,
where different well-established deep learning models are applied on each task.
Our experimental results show that performing statistical distribution
alignment at later stages in a deep learning model is beneficial to the
classification accuracy, yielding the highest performance for our proposed
method. We further investigate practical considerations that arise in the
context of using deep learning and statistical alignment for EEG decoding. In
this regard, we study class-discriminative artifacts that can spuriously
improve results for deep learning models, as well as the impact of
class-imbalance on alignment. We delineate a trade-off relationship between
increased classification accuracy when alignment is performed at later modeling
stages, and susceptibility to class-imbalance in the set of trials that the
statistics are computed on.
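The core operation described in the abstract, aligning the statistical distribution of latent features over a subject's set of trials at a chosen depth of the network, can be illustrated with a short sketch. The following PyTorch snippet is only an illustration under our own assumptions: the module name, the choice of zero-mean/unit-variance standardization, the placeholder encoder sizes, and the placement just before the classifier head are ours, not taken from the authors' code.

```python
import torch
import torch.nn as nn

class LatentAlignment(nn.Module):
    """Standardize latent features using statistics computed over a subject's
    whole set of trials (a deep-set style operation over the trial set).
    Illustrative sketch only, not the authors' implementation."""
    def __init__(self, eps: float = 1e-6):
        super().__init__()
        self.eps = eps

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (n_trials, n_features) latent features for one subject
        mu = z.mean(dim=0, keepdim=True)       # per-feature mean over the trial set
        sigma = z.std(dim=0, keepdim=True)     # per-feature std over the trial set
        return (z - mu) / (sigma + self.eps)   # aligned latent representation

# Hypothetical usage: align late-stage features of a generic EEG encoder
# before the classification head (encoder and classifier are placeholders).
encoder = nn.Sequential(nn.Flatten(), nn.Linear(22 * 256, 64), nn.ELU())
classifier = nn.Linear(64, 4)
align = LatentAlignment()

trials = torch.randn(32, 22, 256)            # 32 trials, 22 channels, 256 samples
logits = classifier(align(encoder(trials)))  # statistics come from this trial set
```

Because the alignment statistics are computed over the subject's trial set, a class-imbalanced set of trials can skew them, which is the trade-off the abstract describes for alignment at later modeling stages.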
Related papers
- Contrastive-Adversarial and Diffusion: Exploring pre-training and fine-tuning strategies for sulcal identification [3.0398616939692777]
Techniques like adversarial learning, contrastive learning, diffusion denoising learning, and ordinary reconstruction learning have become standard.
The study aims to elucidate the advantages of pre-training techniques and fine-tuning strategies to enhance the learning process of neural networks.
arXiv Detail & Related papers (2024-05-29T15:44:51Z)
- On Task Performance and Model Calibration with Supervised and Self-Ensembled In-Context Learning [71.44986275228747]
In-context learning (ICL) has become an efficient approach propelled by the recent advancements in large language models (LLMs).
However, both paradigms are prone to the critical problem of overconfidence (i.e., miscalibration).
arXiv Detail & Related papers (2023-12-21T11:55:10Z)
- SleepEGAN: A GAN-enhanced Ensemble Deep Learning Model for Imbalanced Classification of Sleep Stages [4.649202082648198]
This paper develops a generative adversarial network (GAN)-powered ensemble deep learning model, named SleepEGAN, for the imbalanced classification of sleep stages.
We show that the proposed method can improve classification accuracy compared to several existing state-of-the-art methods using three public sleep datasets.
arXiv Detail & Related papers (2023-07-04T01:56:00Z)
- On the Trade-off of Intra-/Inter-class Diversity for Supervised Pre-training [72.8087629914444]
We study the impact of the trade-off between the intra-class diversity (the number of samples per class) and the inter-class diversity (the number of classes) of a supervised pre-training dataset.
With the size of the pre-training dataset fixed, the best downstream performance comes with a balance on the intra-/inter-class diversity.
arXiv Detail & Related papers (2023-05-20T16:23:50Z)
- FairIF: Boosting Fairness in Deep Learning via Influence Functions with Validation Set Sensitive Attributes [51.02407217197623]
We propose a two-stage training algorithm named FAIRIF.
It minimizes the loss over a reweighted training set, where the sample weights are computed with the help of a validation set (a sketch of such a reweighted loss is given below).
We show that FAIRIF yields models with better fairness-utility trade-offs against various types of bias.
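The second training stage described above amounts to minimizing a per-sample weighted loss. The sketch below assumes the weights have already been computed beforehand (in FAIRIF they are derived using a validation set with sensitive attributes); the function name and the normalization choice are our assumptions.

```python
import torch
import torch.nn.functional as F

def reweighted_loss(logits: torch.Tensor,
                    targets: torch.Tensor,
                    sample_weights: torch.Tensor) -> torch.Tensor:
    """Per-sample weighted cross-entropy for the second training stage.
    sample_weights are assumed to be given (computed in a prior stage)."""
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    return (sample_weights * per_sample).sum() / sample_weights.sum()
```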
arXiv Detail & Related papers (2022-01-15T05:14:48Z)
- BALanCe: Deep Bayesian Active Learning via Equivalence Class Annealing [7.9107076476763885]
BALanCe is a deep active learning framework that mitigates the effect of miscalibrated uncertainty estimates.
Batch-BALanCe is a generalization of the sequential algorithm to the batched setting.
We show that Batch-BALanCe achieves state-of-the-art performance on several benchmark datasets for active learning.
arXiv Detail & Related papers (2021-12-27T15:38:27Z)
- Boosting the Generalization Capability in Cross-Domain Few-shot Learning via Noise-enhanced Supervised Autoencoder [23.860842627883187]
We teach the model to capture broader variations of the feature distributions with a novel noise-enhanced supervised autoencoder (NSAE).
NSAE trains the model by jointly reconstructing inputs and predicting the labels of inputs as well as their reconstructed pairs.
We also take advantage of NSAE structure and propose a two-step fine-tuning procedure that achieves better adaption and improves classification performance in the target domain.
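The joint objective described above can be sketched as an autoencoder whose latent code feeds a classifier, with the classification loss applied both to the input and to its reconstruction. All architecture sizes and layer choices are placeholders, and the noise-injection details are omitted; this is not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NSAESketch(nn.Module):
    """Sketch of the joint objective: reconstruct the input and classify
    both the input and its reconstruction (sizes are placeholders)."""
    def __init__(self, in_dim: int = 784, hidden: int = 128, n_classes: int = 10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, in_dim)
        self.classifier = nn.Linear(hidden, n_classes)

    def loss(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)
        x_hat = self.decoder(z)                             # reconstruction
        logits = self.classifier(z)                         # classify the input
        logits_rec = self.classifier(self.encoder(x_hat))   # classify its reconstruction
        return (F.mse_loss(x_hat, x)
                + F.cross_entropy(logits, y)
                + F.cross_entropy(logits_rec, y))
```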
arXiv Detail & Related papers (2021-08-11T04:45:56Z)
- No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data [78.69828864672978]
A central challenge in training classification models in the real-world federated system is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model (a simplified sketch follows below).
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
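A much-simplified, centralized sketch of the calibration step described above: fit a per-class Gaussian to pooled features, sample virtual representations, and fine-tune only the classifier head on them. The function name, hyperparameters, and the per-class Gaussian (in place of the paper's approximated Gaussian mixture built from federated client statistics) are our assumptions.

```python
import torch
import torch.nn.functional as F

def calibrate_classifier(features, labels, classifier, n_classes,
                         n_virtual=100, steps=50, lr=0.01):
    """Fit a per-class Gaussian to features, sample virtual representations,
    and fine-tune the classifier head on them (illustrative sketch only)."""
    virt_x, virt_y = [], []
    for c in range(n_classes):
        fc = features[labels == c]
        mu, std = fc.mean(dim=0), fc.std(dim=0) + 1e-6
        virt_x.append(mu + std * torch.randn(n_virtual, fc.shape[1]))
        virt_y.append(torch.full((n_virtual,), c, dtype=torch.long))
    virt_x, virt_y = torch.cat(virt_x), torch.cat(virt_y)

    opt = torch.optim.SGD(classifier.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(classifier(virt_x), virt_y).backward()
        opt.step()
    return classifier
```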
arXiv Detail & Related papers (2021-06-09T12:02:29Z)
- Deep Stable Learning for Out-Of-Distribution Generalization [27.437046504902938]
Approaches based on deep neural networks have achieved striking performance when testing data and training data share a similar distribution.
Eliminating the impact of distribution shifts between training and testing data is crucial for building performance-promising deep models.
We propose to address this problem by removing the dependencies between features via learning weights for training samples.
arXiv Detail & Related papers (2021-04-16T03:54:21Z)
- Unsupervised neural adaptation model based on optimal transport for spoken language identification [54.96267179988487]
Due to the mismatch of statistical distributions of acoustic speech between training and testing sets, the performance of spoken language identification (SLID) could be drastically degraded.
We propose an unsupervised neural adaptation model to deal with the distribution mismatch problem for SLID.
arXiv Detail & Related papers (2020-12-24T07:37:19Z)