A Novel Semi-supervised Meta Learning Method for Subject-transfer
Brain-computer Interface
- URL: http://arxiv.org/abs/2209.03785v1
- Date: Wed, 7 Sep 2022 15:38:57 GMT
- Title: A Novel Semi-supervised Meta Learning Method for Subject-transfer
Brain-computer Interface
- Authors: Jingcong Li, Fei Wang, Haiyun Huang, Feifei Qi, Jiahui Pan
- Abstract summary: We propose a semi-supervised meta learning (SSML) method for subject-transfer learning in BCIs.
The proposed SSML learns a meta model with the existing subjects first, then fine-tunes the model in a semi-supervised learning manner.
It is significant for BCI applications where the labeled data are scarce or expensive while unlabeled data are readily available.
- Score: 7.372748737217638
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A brain-computer interface (BCI) provides a direct communication pathway
between the human brain and external devices. Before a new subject can use a BCI, a
calibration procedure is usually required, because the inter- and intra-subject
variances are so large that models trained on existing subjects perform
poorly on new subjects. An effective subject-transfer and calibration
method is therefore essential. In this paper, we propose a semi-supervised meta learning
(SSML) method for subject-transfer learning in BCIs. The proposed SSML learns a
meta model with the existing subjects first, then fine-tunes the model in a
semi-supervised learning manner, i.e., using a few labeled and many unlabeled
samples of the target subject for calibration. This is significant for BCI
applications where labeled data are scarce or expensive while unlabeled
data are readily available. To verify the SSML method, three different BCI
paradigms are tested: 1) event-related potential detection; 2) emotion
recognition; and 3) sleep staging. The SSML achieved significant improvements
of over 15% on the first two paradigms and 4.9% on the third. The experimental
results demonstrated the effectiveness and potential of the SSML method in BCI
applications.
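The abstract specifies the two stages (meta-training on existing subjects, then semi-supervised calibration on the target subject) but not the exact algorithm. As a rough illustrative sketch only, assuming a Reptile-style meta update and self-training with pseudo-labels on synthetic 2-D "features" (these choices are assumptions, not the paper's actual SSML method):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_subject(shift):
    # Synthetic two-class "EEG features" with a subject-specific distribution shift.
    X0 = rng.normal(-1 + shift, 1.0, (40, 2))
    X1 = rng.normal(1 + shift, 1.0, (40, 2))
    return np.vstack([X0, X1]), np.array([0] * 40 + [1] * 40)

def predict(w, X):
    return (X @ w[:2] + w[2] > 0).astype(int)

def sgd_steps(w, X, y, lr=0.1, steps=20):
    # A few logistic-regression gradient steps (the "fine-tuning").
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w[:2] + w[2])))
        g = p - y
        w = w - lr * np.array([g @ X[:, 0], g @ X[:, 1], g.sum()]) / len(y)
    return w

# Stage 1: meta-train on existing subjects (Reptile-style interpolation update).
w = np.zeros(3)
for _ in range(50):
    Xs, ys = make_subject(rng.normal(0, 0.3))
    w += 0.5 * (sgd_steps(w.copy(), Xs, ys) - w)

# Stage 2: semi-supervised calibration on a new target subject.
Xt, yt = make_subject(0.8)
labeled = rng.choice(len(yt), 6, replace=False)   # few labeled samples
w_cal = sgd_steps(w.copy(), Xt[labeled], yt[labeled])
pseudo = predict(w_cal, Xt)                       # pseudo-label the unlabeled pool
w_cal = sgd_steps(w_cal, Xt, pseudo)              # fine-tune on pseudo-labels

acc = (predict(w_cal, Xt) == yt).mean()
```

Here the unlabeled target data enter only through the pseudo-labels; the paper's actual semi-supervised objective and network architecture may differ.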
Related papers
- Deep Unlearn: Benchmarking Machine Unlearning [7.450700594277741]
Machine unlearning (MU) aims to remove the influence of particular data points from the learnable parameters of a trained machine learning model.
This paper investigates 18 state-of-the-art MU methods across various benchmark datasets and models.
arXiv Detail & Related papers (2024-10-02T06:41:58Z)
- Enhancing Visual Continual Learning with Language-Guided Supervision [76.38481740848434]
Continual learning aims to empower models to learn new tasks without forgetting previously acquired knowledge.
We argue that the scarce semantic information conveyed by the one-hot labels hampers the effective knowledge transfer across tasks.
Specifically, we use pre-trained language models (PLMs) to generate semantic targets for each class, which are frozen and serve as supervision signals.
arXiv Detail & Related papers (2024-03-24T12:41:58Z)
- SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language Models [71.78800549517298]
Continual learning (CL) ability is vital for deploying large language models (LLMs) in the dynamic world.
Existing methods devise the learning module to acquire task-specific knowledge with parameter-efficient tuning (PET) block and the selection module to pick out the corresponding one for the testing input.
We propose a novel Shared Attention Framework (SAPT) to align the PET learning and selection via the Shared Attentive Learning & Selection module.
arXiv Detail & Related papers (2024-01-16T11:45:03Z)
- Learning to Learn with Indispensable Connections [6.040904021861969]
We propose a novel meta-learning method called Meta-LTH that includes indispensable (necessary) connections.
Our method improves classification accuracy by approximately 2% (20-way 1-shot task setting) on the Omniglot dataset.
arXiv Detail & Related papers (2023-04-06T04:53:13Z)
- MetaVA: Curriculum Meta-learning and Pre-fine-tuning of Deep Neural Networks for Detecting Ventricular Arrhythmias based on ECGs [9.600976281032862]
Ventricular arrhythmias (VA) are the main causes of sudden cardiac death.
We propose a novel model-agnostic meta-learning (MAML) method with curriculum learning (CL) to handle group-level diversity.
We conduct experiments using a combination of three publicly available ECG datasets.
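The summary names MAML combined with curriculum learning but gives no implementation details. A minimal first-order sketch, assuming a toy 1-D regression task family and an easy-to-hard ordering by noise level (both assumptions for illustration, not MetaVA's ECG model):

```python
import numpy as np

rng = np.random.default_rng(4)

def make_task(noise):
    # Toy regression family y = a*x; `noise` controls task difficulty.
    a = rng.normal(1.0, 0.5)
    x = rng.normal(0, 1, 20)
    return x, a * x + rng.normal(0, noise, 20)

def grad(w, x, y):
    # Gradient of mean squared error for the scalar model y_hat = w*x.
    return 2 * np.mean((w * x - y) * x)

# Curriculum: present tasks from easy (low noise) to hard (high noise).
tasks = [make_task(n) for n in np.linspace(0.05, 1.0, 30)]

w, inner_lr, outer_lr = 0.0, 0.05, 0.1
for x, y in tasks:
    w_task = w - inner_lr * grad(w, x, y)   # inner adaptation step
    w = w - outer_lr * grad(w_task, x, y)   # first-order MAML outer update

# One adaptation step on a held-out easy task should reduce its error.
x_new, y_new = make_task(0.05)
w_adapted = w - inner_lr * grad(w, x_new, y_new)
err_before = np.mean((w * x_new - y_new) ** 2)
err_after = np.mean((w_adapted * x_new - y_new) ** 2)
```

This only shows the shape of a curriculum-ordered meta-update; the paper's pre-fine-tuning stage and deep ECG networks are not reproduced here.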
arXiv Detail & Related papers (2022-02-25T01:26:19Z)
- 2021 BEETL Competition: Advancing Transfer Learning for Subject Independence & Heterogenous EEG Data Sets [89.84774119537087]
We design two transfer learning challenges around diagnostics and Brain-Computer Interfacing (BCI).
Task 1 is centred on medical diagnostics, addressing automatic sleep stage annotation across subjects.
Task 2 is centred on Brain-Computer Interfacing (BCI), addressing motor imagery decoding across both subjects and data sets.
arXiv Detail & Related papers (2022-02-14T12:12:20Z)
- Confidence-Aware Subject-to-Subject Transfer Learning for Brain-Computer Interface [3.2550305883611244]
The inter/intra-subject variability of electroencephalography (EEG) makes the practical use of the brain-computer interface (BCI) difficult.
We propose a BCI framework using only high-confidence subjects for TL training.
In our framework, a deep neural network selects useful subjects for the TL process and excludes noisy subjects, using a co-teaching algorithm based on the small-loss trick.
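The co-teaching idea named here, two models exchanging their small-loss (i.e., likely-clean) samples, can be sketched on toy noisy-label data; the linear models and noise rate below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary task with 20% of the labels flipped (standing in for noisy subjects).
X = rng.normal(0, 1, (200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
noisy = y.copy()
flip = rng.choice(200, 40, replace=False)
noisy[flip] = 1 - noisy[flip]

def losses(w, X, y):
    # Per-sample logistic loss: a small loss suggests a likely-clean label.
    p = 1 / (1 + np.exp(-(X @ w)))
    return -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

def step(w, X, y, lr=0.5):
    p = 1 / (1 + np.exp(-(X @ w)))
    return w - lr * X.T @ (p - y) / len(y)

wA, wB = rng.normal(0, 0.1, 2), rng.normal(0, 0.1, 2)
keep = int(0.8 * len(y))   # keep the 80% smallest-loss samples (matches noise rate)
for _ in range(100):
    selA = np.argsort(losses(wA, X, noisy))[:keep]
    selB = np.argsort(losses(wB, X, noisy))[:keep]
    wA = step(wA, X[selB], noisy[selB])   # each model trains on the
    wB = step(wB, X[selA], noisy[selA])   # other's selection (co-teaching)

acc = ((X @ wA > 0).astype(float) == y).mean()
```

In the paper the selection operates over subjects with deep networks rather than over individual samples with linear models, but the small-loss exchange is the same mechanism.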
arXiv Detail & Related papers (2021-12-15T15:23:23Z)
- Modality-Aware Triplet Hard Mining for Zero-shot Sketch-Based Image Retrieval [51.42470171051007]
This paper tackles the Zero-Shot Sketch-Based Image Retrieval (ZS-SBIR) problem from the viewpoint of cross-modality metric learning.
By combining two fundamental learning approaches in deep metric learning (DML), i.e., classification training and pairwise training, we set up a strong baseline for ZS-SBIR.
We show that Modality-Aware Triplet Hard Mining (MATHM) enhances the baseline with three types of pairwise learning.
arXiv Detail & Related papers (2021-12-15T08:36:44Z)
- Minimizing subject-dependent calibration for BCI with Riemannian transfer learning [0.8399688944263843]
We present a scheme to train a classifier on data recorded from different subjects, to reduce the calibration while preserving good performances.
To demonstrate the robustness of this approach, we conducted a meta-analysis on multiple datasets for three BCI paradigms.
arXiv Detail & Related papers (2021-11-23T18:37:58Z)
- Towards Accurate Knowledge Transfer via Target-awareness Representation Disentanglement [56.40587594647692]
We propose a novel transfer learning algorithm introducing the idea of Target-awareness REpresentation Disentanglement (TRED).
TRED disentangles the knowledge relevant to the target task from the original source model and uses it as a regularizer when fine-tuning the target model.
Experiments on various real-world datasets show that our method stably improves standard fine-tuning by more than 2% on average.
arXiv Detail & Related papers (2020-10-16T17:45:08Z)
- Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
arXiv Detail & Related papers (2020-07-17T01:52:34Z)
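As an illustration of the zeroth-order idea only (not BAR's actual multi-label mapping or target models), a two-point gradient estimate can tune a universal input perturbation using nothing but the black box's input-output responses; the toy "black box" and task below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

W_BOX = np.array([2.0, -1.0])

def black_box(x):
    # Frozen pretrained classifier: we may only query its outputs, never gradients.
    return 1 / (1 + np.exp(-(x @ W_BOX)))

# Target task: labels follow the box's decision boundary shifted by one logit unit.
X = rng.normal(0, 1, (50, 2))
y = ((X @ W_BOX) > -1).astype(float)

def loss(delta):
    p = black_box(X + delta)   # universal input perturbation = the "program"
    return float(np.mean((p - y) ** 2))

loss_start = loss(np.zeros(2))
delta, mu, lr = np.zeros(2), 0.01, 0.5
for _ in range(300):
    u = rng.normal(0, 1, 2)
    # Two-point zeroth-order gradient estimate from output queries alone.
    g = (loss(delta + mu * u) - loss(delta - mu * u)) / (2 * mu) * u
    delta = delta - lr * g

acc = ((black_box(X + delta) > 0.5).astype(float) == y).mean()
```

The estimator never touches the box's parameters, which is what makes the reprogramming "black-box"; BAR additionally maps source labels to target labels, which this sketch omits.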
This list is automatically generated from the titles and abstracts of the papers on this site.