Distributionally Robust Cross Subject EEG Decoding
- URL: http://arxiv.org/abs/2308.11651v1
- Date: Sat, 19 Aug 2023 11:31:33 GMT
- Title: Distributionally Robust Cross Subject EEG Decoding
- Authors: Tiehang Duan, Zhenyi Wang, Gianfranco Doretto, Fang Li, Cui Tao,
Donald Adjeroh
- Abstract summary: We propose a principled approach to perform dynamic evolution on the data for improvement of decoding robustness.
We derive a general data evolution framework based on Wasserstein gradient flow (WGF) and provide two different forms of evolution within the framework.
The proposed approach can be readily integrated with other data augmentation approaches for further improvements.
- Score: 15.211091130230589
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recently, deep learning has been shown to be effective for
Electroencephalography (EEG) decoding tasks. Yet, its performance can be
negatively influenced by two key factors: 1) the high variance and the
different types of corruption inherent in the signal, and 2) the relatively
small size of EEG datasets, given the acquisition cost, annotation cost, and
amount of effort needed. Data augmentation approaches for alleviating this
problem have been studied empirically, with augmentation operations in the
spatial, time, or frequency domain handcrafted based on domain expertise. In
this work, we propose a principled approach that dynamically evolves the data
to improve decoding robustness. The approach is based on distributionally
robust optimization and achieves robustness by optimizing over a family of
evolved data distributions instead of the single training data distribution.
We derive a general data evolution framework based on Wasserstein gradient
flow (WGF) and provide two different forms of evolution within the framework.
Intuitively, the evolution process helps the EEG decoder learn more robust and
diverse features. Notably, the proposed approach can be readily integrated
with other data augmentation approaches for further improvements. We performed
extensive experiments on the proposed approach and tested its performance on
different types of corrupted EEG signals. The model significantly outperforms
competitive baselines in challenging decoding scenarios.
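The abstract's recipe can be illustrated with a minimal sketch. This is not the authors' implementation: the decoder below is a toy logistic model, and the WGF-based evolution is stood in for by its simplest discretization, a few gradient-ascent steps on the loss with an L2 transport penalty tying each evolved sample to its original. All function names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def evolve_batch(X, y, w, steps=5, lr=0.1, gamma=1.0):
    """Inner step: evolve each sample toward higher decoder loss while
    an L2 transport penalty (weight gamma) keeps it near its original,
    a crude discretization of a Wasserstein gradient-flow step."""
    X0, Xe = X, X.copy()
    for _ in range(steps):
        p = sigmoid(Xe @ w)
        grad_x = (p - y)[:, None] * w            # d(loss)/dx per sample
        Xe += lr * (grad_x - 2.0 * gamma * (Xe - X0))
    return Xe

def robust_train(X, y, epochs=50, gamma=1.0, lr_w=0.5):
    """Outer step: fit the decoder on the evolved (worst-case within
    transport budget) data rather than the raw training distribution."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        Xe = evolve_batch(X, y, w, gamma=gamma)
        p = sigmoid(Xe @ w)
        w -= lr_w * Xe.T @ (p - y) / n           # gradient step on evolved data
    return w

# Toy binary "decoding" problem standing in for EEG features.
X = rng.normal(size=(200, 8))
w_true = rng.normal(size=8)
y = (X @ w_true > 0).astype(float)
w = robust_train(X, y)
accuracy = np.mean((sigmoid(X @ w) > 0.5) == y)
```

Because the decoder only ever sees samples pushed toward the loss's ascent direction, it is trained against a family of nearby distributions rather than the single empirical one, which is the source of the claimed robustness.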
Related papers
- Theoretically Guaranteed Distribution Adaptable Learning [23.121014921407898]
We propose a novel framework called Distribution Adaptable Learning (DAL).
DAL enables the model to effectively track evolving data distributions.
It enhances DAL's reusability and evolvability in accommodating evolving distributions.
arXiv Detail & Related papers (2024-11-05T09:10:39Z) - Towards Robust Out-of-Distribution Generalization: Data Augmentation and Neural Architecture Search Approaches [4.577842191730992]
We study ways toward robust OoD generalization for deep learning.
We first propose a novel and effective approach to disentangle the spurious correlation between features that are not essential for recognition.
We then study the problem of strengthening neural architecture search in OoD scenarios.
arXiv Detail & Related papers (2024-10-25T20:50:32Z) - EvoFA: Evolvable Fast Adaptation for EEG Emotion Recognition [17.29489055612668]
This paper proposes Evolvable Fast Adaptation (EvoFA), an online adaptive framework tailored for EEG data.
EvoFA integrates the rapid adaptation of Few-Shot Learning (FSL) and the distribution matching of Domain Adaptation (DA) through a two-stage generalization process.
In the testing phase, a designed evolvable meta-adaptation module iteratively aligns the marginal distribution of target (testing) data with the evolving source (training) data.
arXiv Detail & Related papers (2024-09-24T04:35:10Z) - Data-Centric Long-Tailed Image Recognition [49.90107582624604]
Long-tail models exhibit a strong demand for high-quality data.
Data-centric approaches aim to enhance both the quantity and quality of data to improve model performance.
There is currently a lack of research into the underlying mechanisms explaining the effectiveness of information augmentation.
arXiv Detail & Related papers (2023-11-03T06:34:37Z) - Adversarial and Random Transformations for Robust Domain Adaptation and
Generalization [9.995765847080596]
We show that simply applying consistency training with random data augmentation yields state-of-the-art results on domain adaptation (DA) and generalization (DG).
The combined adversarial and random transformation-based method outperforms the state of the art on multiple DA and DG benchmark datasets.
arXiv Detail & Related papers (2022-11-13T02:10:13Z) - Improving GANs with A Dynamic Discriminator [106.54552336711997]
We argue that a discriminator with an on-the-fly adjustment on its capacity can better accommodate such a time-varying task.
A comprehensive empirical study confirms that the proposed training strategy, termed DynamicD, improves the synthesis performance without incurring any additional cost or training objectives.
arXiv Detail & Related papers (2022-09-20T17:57:33Z) - An Empirical Study on Distribution Shift Robustness From the Perspective
of Pre-Training and Data Augmentation [91.62129090006745]
This paper studies the distribution shift problem from the perspective of pre-training and data augmentation.
We provide the first comprehensive empirical study focusing on pre-training and data augmentation.
arXiv Detail & Related papers (2022-05-25T13:04:53Z) - DRFLM: Distributionally Robust Federated Learning with Inter-client
Noise via Local Mixup [58.894901088797376]
Federated learning has emerged as a promising approach for training a global model using data from multiple organizations without leaking their raw data.
We propose a general framework to solve the above two challenges simultaneously.
We provide comprehensive theoretical analysis including robustness analysis, convergence analysis, and generalization ability.
arXiv Detail & Related papers (2022-04-16T08:08:29Z) - Handling Distribution Shifts on Graphs: An Invariance Perspective [78.31180235269035]
We formulate the OOD problem on graphs and develop a new invariant learning approach, Explore-to-Extrapolate Risk Minimization (EERM).
EERM resorts to multiple context explorers that are adversarially trained to maximize the variance of risks from multiple virtual environments.
We prove the validity of our method by theoretically showing its guarantee of a valid OOD solution.
arXiv Detail & Related papers (2022-02-05T02:31:01Z) - Boosting the Generalization Capability in Cross-Domain Few-shot Learning
via Noise-enhanced Supervised Autoencoder [23.860842627883187]
We teach the model to capture broader variations of the feature distributions with a novel noise-enhanced supervised autoencoder (NSAE).
NSAE trains the model by jointly reconstructing inputs and predicting the labels of inputs as well as their reconstructed pairs.
We also take advantage of the NSAE structure and propose a two-step fine-tuning procedure that achieves better adaptation and improves classification performance in the target domain.
arXiv Detail & Related papers (2021-08-11T04:45:56Z) - CoDA: Contrast-enhanced and Diversity-promoting Data Augmentation for
Natural Language Understanding [67.61357003974153]
We propose a novel data augmentation framework dubbed CoDA.
CoDA synthesizes diverse and informative augmented examples by integrating multiple transformations organically.
A contrastive regularization objective is introduced to capture the global relationship among all the data samples.
arXiv Detail & Related papers (2020-10-16T23:57:03Z)
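The contrastive regularization objective mentioned in the CoDA entry above can be sketched with a standard NT-Xent-style loss over augmented pairs. This is a generic illustration of contrastive regularization, not CoDA's specific objective; batch size, temperature, and the embedding source are illustrative assumptions.

```python
import numpy as np

def contrastive_regularizer(Z1, Z2, tau=0.5):
    """NT-Xent-style contrastive loss over a batch of embeddings.
    Z1[i] and Z2[i] are two augmented views of example i; every other
    embedding in the concatenated batch serves as a negative."""
    Z = np.concatenate([Z1, Z2], axis=0)
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)   # cosine similarity space
    sim = Z @ Z.T / tau
    n = len(Z1)
    # The positive partner of row i is row i + n (and vice versa).
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    logits = sim - sim.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

The loss is low when the two views of each example agree with each other more than with the rest of the batch, which is how such a term captures the global relationship among all the data samples.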
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.