Source-Free Domain Adaptation for SSVEP-based Brain-Computer Interfaces
- URL: http://arxiv.org/abs/2305.17403v2
- Date: Sun, 19 Nov 2023 11:07:59 GMT
- Title: Source-Free Domain Adaptation for SSVEP-based Brain-Computer Interfaces
- Authors: Osman Berke Guney, Deniz Kucukahmetler and Huseyin Ozkan
- Abstract summary: This paper presents a source-free domain adaptation method for steady-state visually evoked potential (SSVEP) based brain-computer interface (BCI) spellers.
Achieving a high information transfer rate (ITR) with the most prominent methods requires an extensive calibration period before using the system.
We propose a novel method that adapts a powerful deep neural network (DNN), pre-trained on data from source domains, to the new user.
- Score: 1.4364491422470593
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a source free domain adaptation method for steady-state
visually evoked potentials (SSVEP) based brain-computer interface (BCI)
spellers. SSVEP-based BCI spellers assist individuals experiencing speech
difficulties by enabling them to communicate at a fast rate. However, achieving
a high information transfer rate (ITR) in most prominent methods requires an
extensive calibration period before using the system, leading to discomfort for
new users. We address this issue by proposing a novel method that adapts a
powerful deep neural network (DNN) pre-trained on data from source domains
(data from former users or participants of previous experiments) to the new
user (target domain), based only on the unlabeled target data. This adaptation
is achieved by minimizing our proposed custom loss function composed of
self-adaptation and local-regularity terms. The self-adaptation term uses the
pseudo-label strategy, while the novel local-regularity term exploits the data
structure and forces the DNN to assign similar labels to adjacent instances.
The proposed method prioritizes user comfort by removing the burden of
calibration while maintaining excellent character identification accuracy
and ITR. In particular, our method achieves striking ITRs of 201.15 bits/min
and 145.02 bits/min on the benchmark and BETA datasets, respectively, and outperforms
the state-of-the-art alternatives. Our code is available at
https://github.com/osmanberke/SFDA-SSVEP-BCI
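The adaptation loss described above, a pseudo-label self-adaptation term plus a local-regularity term that pushes neighbouring trials toward the same label, can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation; the function name, the Euclidean neighbour search, and the weight `lam` are assumptions for exposition.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def adaptation_loss(logits, features, k=2, lam=0.5):
    """Sketch of a self-adaptation + local-regularity objective.

    logits:   (N, C) model outputs on unlabeled target trials
    features: (N, D) representations used to find neighbouring trials
    """
    p = softmax(logits)
    pseudo = p.argmax(axis=1)  # pseudo-labels from the model's own predictions
    # self-adaptation: cross-entropy against the pseudo-labels
    self_adapt = -np.log(p[np.arange(len(p)), pseudo] + 1e-12).mean()
    # local-regularity: penalize prediction differences between each
    # trial and its k nearest neighbours in feature space
    d = np.linalg.norm(features[:, None] - features[None], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]
    local_reg = ((p[:, None, :] - p[nn]) ** 2).sum(axis=-1).mean()
    return self_adapt + lam * local_reg
```

In practice such a loss would be minimized with gradient descent over the DNN's parameters; the sketch only shows how the two terms are composed.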
Related papers
- Informative Data Mining for One-Shot Cross-Domain Semantic Segmentation [84.82153655786183]
We propose a novel framework called Informative Data Mining (IDM) to enable efficient one-shot domain adaptation for semantic segmentation.
IDM provides an uncertainty-based selection criterion to identify the most informative samples, which facilitates quick adaptation and reduces redundant training.
Our approach outperforms existing methods and achieves a new state-of-the-art one-shot performance of 56.7%/55.4% on the GTA5/SYNTHIA to Cityscapes adaptation tasks.
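An uncertainty-based selection criterion of the kind IDM describes is commonly implemented via predictive entropy. The sketch below is an illustrative NumPy version, not the paper's actual criterion; the function name and thresholding scheme are assumptions.

```python
import numpy as np

def select_informative(probs, n_select):
    """Pick the n_select samples with the highest predictive entropy,
    i.e. those the model is least certain about.

    probs: (N, C) per-sample class probabilities
    """
    ent = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(ent)[::-1][:n_select]  # indices, most uncertain first
```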
arXiv Detail & Related papers (2023-09-25T15:56:01Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
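A Maximum Mean Discrepancy (MMD) loss of the kind DaC uses for distribution alignment can be sketched with an RBF kernel. This is a generic NumPy illustration of MMD, not DaC's memory-bank variant; the function name and fixed `gamma` are assumptions.

```python
import numpy as np

def rbf_mmd2(x, y, gamma=1.0):
    """Squared MMD between sample sets x (n, d) and y (m, d),
    estimated with an RBF kernel exp(-gamma * ||a - b||^2)."""
    def k(a, b):
        d2 = ((a[:, None] - b[None]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    # biased estimator: E[k(x,x)] + E[k(y,y)] - 2 E[k(x,y)]
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()
```

Minimizing this quantity pulls the two sample sets toward the same distribution in the kernel's feature space.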
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
- Memory Consistent Unsupervised Off-the-Shelf Model Adaptation for Source-Relaxed Medical Image Segmentation [13.260109561599904]
Unsupervised domain adaptation (UDA) has been a vital protocol for migrating information learned from a labeled source domain to an unlabeled heterogeneous target domain.
We propose "off-the-shelf (OS)" UDA (OSUDA), aimed at image segmentation, by adapting an OS segmentor trained in a source domain to a target domain, in the absence of source domain data in adaptation.
arXiv Detail & Related papers (2022-09-16T13:13:50Z)
- Transfer Learning of an Ensemble of DNNs for SSVEP BCI Spellers without User-Specific Training [3.6144103736375857]
Current high performing SSVEP BCI spellers require an initial lengthy and tiring user-specific training for each new user for system adaptation.
To ensure practicality, we propose a highly novel target identification method based on an ensemble of deep neural networks (DNNs).
We exploit already-existing literature datasets from participants of previously conducted EEG experiments to train a global target identifier DNN first.
We transfer this ensemble of fine-tuned DNNs to the new user instance, determine the k most representative DNNs according to the participants' statistical similarities to the new user, and predict the target character through a weighted combination of their predictions.
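The weighted combination over the k most representative DNNs could look roughly like the NumPy sketch below. It is an illustrative assumption, not the paper's implementation: the function name, the normalization of the similarity weights, and the use of precomputed per-DNN probabilities are all choices made for exposition.

```python
import numpy as np

def ensemble_predict(all_probs, similarities, k=3):
    """Combine the predictions of the k DNNs whose source participants
    are statistically most similar to the new user.

    all_probs:    (n_dnns, n_classes) per-DNN class probabilities for one trial
    similarities: (n_dnns,) similarity score of each source participant
    """
    top = np.argsort(similarities)[::-1][:k]         # k most representative DNNs
    w = similarities[top] / similarities[top].sum()  # normalized weights
    return int(np.argmax(w @ all_probs[top]))        # weighted-vote character index
```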
arXiv Detail & Related papers (2022-09-03T23:24:47Z)
- Contextual Squeeze-and-Excitation for Efficient Few-Shot Image Classification [57.36281142038042]
We present a new adaptive block called Contextual Squeeze-and-Excitation (CaSE) that adjusts a pretrained neural network on a new task to significantly improve performance.
We also present a new training protocol based on Coordinate-Descent called UpperCaSE that exploits meta-trained CaSE blocks and fine-tuning routines for efficient adaptation.
arXiv Detail & Related papers (2022-06-20T15:25:08Z)
- Continual Test-Time Domain Adaptation [94.51284735268597]
Test-time domain adaptation aims to adapt a source pre-trained model to a target domain without using any source data.
CoTTA is easy to implement and can be readily incorporated in off-the-shelf pre-trained models.
arXiv Detail & Related papers (2022-03-25T11:42:02Z)
- Generalizable Person Re-Identification via Self-Supervised Batch Norm Test-Time Adaption [63.7424680360004]
Batch Norm Test-time Adaption (BNTA) is a novel re-id framework that applies the self-supervised strategy to update BN parameters adaptively.
BNTA explores the domain-aware information within unlabeled target data before inference, and accordingly modulates the feature distribution normalized by BN to adapt to the target domain.
arXiv Detail & Related papers (2022-03-01T18:46:32Z)
- Test-time Batch Statistics Calibration for Covariate Shift [66.7044675981449]
We propose to adapt the deep models to the novel environment during inference.
We present a general formulation, α-BN, to calibrate the batch statistics.
We also present a novel loss function to form a unified test time adaptation framework Core.
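Calibrating batch statistics along the lines of α-BN typically means interpolating between the test batch's statistics and the stored source statistics. The sketch below is a hedged NumPy illustration of that idea, not the paper's exact formulation; the function name and default `alpha` are assumptions.

```python
import numpy as np

def alpha_bn(x, source_mean, source_var, alpha=0.9, eps=1e-5):
    """Normalize a test batch with statistics mixed between the current
    batch and those stored from the source domain.

    x: (N, C) features of the current test batch
    """
    mu_t, var_t = x.mean(0), x.var(0)                 # test-batch statistics
    mu = alpha * mu_t + (1 - alpha) * source_mean     # interpolated mean
    var = alpha * var_t + (1 - alpha) * source_var    # interpolated variance
    return (x - mu) / np.sqrt(var + eps)
```

Setting `alpha=0` recovers standard inference with frozen source statistics, while `alpha=1` normalizes purely with the test batch.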
arXiv Detail & Related papers (2021-10-06T08:45:03Z)
- Adaptive Pseudo-Label Refinement by Negative Ensemble Learning for Source-Free Unsupervised Domain Adaptation [35.728603077621564]
Existing Unsupervised Domain Adaptation (UDA) methods presume source and target domain data to be simultaneously available during training.
A pre-trained source model is always considered to be available, even though it may perform poorly on the target domain due to the well-known domain shift problem.
We propose a unified method to tackle adaptive noise filtering and pseudo-label refinement.
arXiv Detail & Related papers (2021-03-29T22:18:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.