Source-free Subject Adaptation for EEG-based Visual Recognition
- URL: http://arxiv.org/abs/2301.08448v1
- Date: Fri, 20 Jan 2023 07:01:01 GMT
- Title: Source-free Subject Adaptation for EEG-based Visual Recognition
- Authors: Pilhyeon Lee, Seogkyu Jeon, Sunhee Hwang, Minjung Shin, Hyeran Byun
- Abstract summary: This paper focuses on subject adaptation for EEG-based visual recognition.
It aims at building a visual stimuli recognition system customized for the target subject whose EEG samples are limited.
- Score: 21.02197151821699
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper focuses on subject adaptation for EEG-based visual recognition. It
aims at building a visual stimuli recognition system customized for the target
subject whose EEG samples are limited, by transferring knowledge from abundant
data of source subjects. Existing approaches consider the scenario that samples
of source subjects are accessible during training. However, it is often
infeasible and problematic to access personal biological data like EEG signals
due to privacy issues. In this paper, we introduce a novel and practical
problem setup, namely source-free subject adaptation, where the source subject
data are unavailable and only the pre-trained model parameters are provided for
subject adaptation. To tackle this challenging problem, we propose
classifier-based data generation to simulate EEG samples from source subjects
using classifier responses. Using the generated samples and target subject
data, we perform subject-independent feature learning to exploit the common
knowledge shared across different subjects. Notably, our framework is
generalizable and can adopt any subject-independent learning method. In the
experiments on the EEG-ImageNet40 benchmark, our model brings consistent
improvements regardless of the choice of subject-independent learning. Also,
our method shows promising performance, recording top-1 test accuracy of 74.6%
under the 5-shot setting even without relying on source data. Our code can be
found at
https://github.com/DeepBCI/Deep-BCI/tree/master/1_Intelligent_BCI/Source_Free_Subject_Adaptation_for_EEG.
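The classifier-based data generation idea can be illustrated with a toy sketch: given only a frozen pre-trained classifier, inputs are optimized by gradient ascent so the classifier assigns them confidently to each class, producing pseudo source samples without touching any source data. The linear softmax model, dimensions, and hyperparameters below are illustrative stand-ins, not the paper's actual architecture.

```python
import numpy as np

# Toy frozen "pre-trained" linear classifier: logits = x @ W + b
rng = np.random.default_rng(0)
n_classes, dim = 4, 16
W = rng.normal(size=(dim, n_classes))
b = np.zeros(n_classes)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def generate_for_class(c, steps=200, lr=0.5):
    """Gradient ascent on log p(c|x) w.r.t. the input, classifier frozen."""
    x = rng.normal(scale=0.1, size=dim)
    for _ in range(steps):
        p = softmax(x @ W + b)
        # d log p(c|x) / dx for a linear softmax classifier
        grad = W[:, c] - W @ p
        x += lr * grad
    return x

# One pseudo "source" sample per class; each should be confidently
# classified as its target class by the frozen model.
pseudo = np.stack([generate_for_class(c) for c in range(n_classes)])
preds = softmax(pseudo @ W + b).argmax(axis=1)
print(preds)
```

In the paper this generation replaces the inaccessible source-subject EEG; the generated samples are then mixed with the few target-subject samples for subject-independent feature learning.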
Related papers
- KBAlign: Efficient Self Adaptation on Specific Knowledge Bases [75.78948575957081]
Large language models (LLMs) usually rely on retrieval-augmented generation to exploit knowledge materials in an instant manner.
We propose KBAlign, an approach designed for efficient adaptation to downstream tasks involving knowledge bases.
Our method utilizes iterative training with self-annotated data such as Q&A pairs and revision suggestions, enabling the model to grasp the knowledge content efficiently.
arXiv Detail & Related papers (2024-11-22T08:21:03Z) - EEGFormer: Towards Transferable and Interpretable Large-Scale EEG
Foundation Model [39.363511340878624]
We present a novel EEG foundation model, namely EEGFormer, pretrained on large-scale compound EEG data.
To validate the effectiveness of our model, we extensively evaluate it on various downstream tasks and assess the performance under different transfer settings.
arXiv Detail & Related papers (2024-01-11T17:36:24Z) - Multi-Source Domain Adaptation with Transformer-based Feature Generation
for Subject-Independent EEG-based Emotion Recognition [0.5439020425819]
We propose a multi-source domain adaptation approach with a transformer-based feature generator (MSDA-TF) designed to leverage information from multiple sources.
During the adaptation process, we group the source subjects based on correlation values and aim to align the moments of the target subject with each source as well as within the sources.
MSDA-TF is validated on the SEED dataset and is shown to yield promising results.
arXiv Detail & Related papers (2024-01-04T16:38:47Z) - Physics Inspired Hybrid Attention for SAR Target Recognition [61.01086031364307]
We propose a physics inspired hybrid attention (PIHA) mechanism and the once-for-all (OFA) evaluation protocol to address the issues.
PIHA leverages the high-level semantics of physical information to activate and guide the feature group aware of local semantics of target.
Our method outperforms other state-of-the-art approaches in 12 test scenarios with the same ASC parameters.
arXiv Detail & Related papers (2023-09-27T14:39:41Z) - Inter-subject Contrastive Learning for Subject Adaptive EEG-based Visual
Recognition [20.866855009168606]
This paper tackles the problem of subject adaptive EEG-based visual recognition.
Its goal is to accurately predict the categories of visual stimuli based on EEG signals with only a handful of samples for the target subject during training.
We introduce a novel method that allows for learning subject-independent representation by increasing the similarity of features sharing the same class but coming from different subjects.
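The cross-subject objective described above can be sketched as an InfoNCE-style loss in which, for each anchor feature, the positives are samples of the same class recorded from a *different* subject and all remaining samples act as negatives. The tiny 2-D features, temperature, and loss form below are hypothetical stand-ins for illustration, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def inter_subject_contrastive_loss(feats, labels, subjects, temp=0.5):
    """Toy InfoNCE-style loss: for each anchor, positives share its class
    but come from a different subject; everything else is a negative."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f.T / temp
    total, count = 0.0, 0
    n = len(labels)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        logits = sim[i, others]
        log_den = np.log(np.exp(logits).sum())
        pos = [k for k, j in enumerate(others)
               if labels[j] == labels[i] and subjects[j] != subjects[i]]
        if not pos:
            continue  # anchor has no cross-subject positive
        total += -np.mean([logits[k] - log_den for k in pos])
        count += 1
    return total / count

# Two subjects, two classes: features cluster by class across subjects
centers = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}
labels = np.array([0, 1, 0, 1])
subjects = np.array([0, 0, 1, 1])
feats = np.stack([centers[c] + rng.normal(scale=0.05, size=2) for c in labels])

aligned = inter_subject_contrastive_loss(feats, labels, subjects)
# Deliberately mispaired labels give dissimilar cross-subject "positives"
mismatched = inter_subject_contrastive_loss(feats, np.array([0, 1, 1, 0]), subjects)
print(aligned, mismatched)
```

When same-class features agree across subjects the loss is low; mispairing the cross-subject positives raises it, which is the signal that drives features toward subject independence.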
arXiv Detail & Related papers (2022-02-07T01:34:57Z) - Subject Adaptive EEG-based Visual Recognition [14.466626957417864]
This paper focuses on EEG-based visual recognition, aiming to predict the visual object class observed by a subject based on his/her EEG signals.
One of the main challenges is the large variation between signals from different subjects.
We introduce a novel problem setting, namely subject adaptive EEG-based visual recognition.
arXiv Detail & Related papers (2021-10-26T08:06:55Z) - No Fear of Heterogeneity: Classifier Calibration for Federated Learning
with Non-IID Data [78.69828864672978]
A central challenge in training classification models in the real-world federated system is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
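The virtual-representation idea can be sketched as follows: fit per-class Gaussian statistics (mean and covariance) to feature vectors, sample "virtual" features from the resulting class-conditional Gaussians, and refit a lightweight classifier head on them. The toy dimensions and the least-squares one-vs-all calibration below are stand-ins for illustration; CCVR's actual procedure aggregates these statistics across federated clients and retrains a softmax classifier.

```python
import numpy as np

rng = np.random.default_rng(2)
dim, n_classes = 8, 3

# Toy per-class feature sets (stand-in for client feature statistics);
# class c is centered on a scaled basis vector so classes are separable
feats = {c: rng.normal(size=(50, dim)) + 3.0 * np.eye(dim)[c]
         for c in range(n_classes)}

# Per-class Gaussian statistics: one mixture component per class
stats = {c: (x.mean(axis=0), np.cov(x, rowvar=False)) for c, x in feats.items()}

# Sample virtual representations with labels
X, y = [], []
for c, (mu, cov) in stats.items():
    X.append(rng.multivariate_normal(mu, cov, size=100))
    y.append(np.full(100, c))
X, y = np.vstack(X), np.concatenate(y)

# Calibrate a linear head on the virtual features (least-squares
# one-vs-all as a simple stand-in for softmax retraining)
A = np.c_[X, np.ones(len(X))]
Y = np.eye(n_classes)[y]
W, *_ = np.linalg.lstsq(A, Y, rcond=None)
acc = ((A @ W).argmax(axis=1) == y).mean()
print(f"virtual-feature accuracy: {acc:.2f}")
```

Because only class-wise means and covariances are shared, no raw client data ever leaves the device, which is what makes the calibration compatible with the federated setting.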
arXiv Detail & Related papers (2021-06-09T12:02:29Z) - Negative Data Augmentation [127.28042046152954]
We show that negative data augmentation samples provide information on the support of the data distribution.
We introduce a new GAN training objective where we use NDA as an additional source of synthetic data for the discriminator.
Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities.
arXiv Detail & Related papers (2021-02-09T20:28:35Z) - BENDR: using transformers and a contrastive self-supervised learning
task to learn from massive amounts of EEG data [15.71234837305808]
We consider how to adapt techniques and architectures used for language modelling (LM) to encephalography modelling (EM).
We find that a single pre-trained model is capable of modelling completely novel raw EEG sequences recorded with differing hardware.
Both the internal representations of this model and the entire architecture can be fine-tuned to a variety of downstream BCI and EEG classification tasks.
arXiv Detail & Related papers (2021-01-28T14:54:01Z) - Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
To tackle the resulting data volume, we apply a dataset distillation strategy to compress the created dataset into several informative class-wise images.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
arXiv Detail & Related papers (2020-05-18T09:36:51Z) - Do We Really Need to Access the Source Data? Source Hypothesis Transfer
for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require access to the source data when learning to adapt the model.
This work tackles a practical setting where only a trained source model is available and how we can effectively utilize such a model without source data to solve UDA problems.
arXiv Detail & Related papers (2020-02-20T03:13:58Z)