Confidence-Aware Subject-to-Subject Transfer Learning for Brain-Computer
Interface
- URL: http://arxiv.org/abs/2112.09243v1
- Date: Wed, 15 Dec 2021 15:23:23 GMT
- Title: Confidence-Aware Subject-to-Subject Transfer Learning for Brain-Computer
Interface
- Authors: Dong-Kyun Han, Serkan Musellim, Dong-Young Kim
- Abstract summary: The inter/intra-subject variability of electroencephalography (EEG) makes the practical use of the brain-computer interface (BCI) difficult.
We propose a BCI framework using only high-confidence subjects for TL training.
In our framework, a deep neural network selects useful subjects for the TL process and excludes noisy subjects, using a co-teaching algorithm based on the small-loss trick.
- Score: 3.2550305883611244
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The inter/intra-subject variability of electroencephalography (EEG) makes the
practical use of the brain-computer interface (BCI) difficult. In general, the
BCI system requires a calibration procedure to tune the model every time the
system is used. This problem is recognized as a major obstacle to BCI, and to
overcome it, approaches based on transfer learning (TL) have recently emerged.
However, many BCI paradigms follow a protocol in which the cue (label) is presented
first and the "imagery" is measured afterwards, so some trials of a source subject may
contain no actual control signal; the negative effect of such source subjects has been
ignored in many cases of the subject-to-subject TL process. The main purpose of
this paper is to propose a method of excluding subjects that are expected to
have a negative impact on subject-to-subject TL training, which generally uses
data from as many subjects as possible. In this paper, we propose a BCI
framework using only high-confidence subjects for TL training. In our
framework, a deep neural network selects useful subjects for the TL process and
excludes noisy subjects, using a co-teaching algorithm based on the small-loss
trick. We experimented with leave-one-subject-out validation on two public
datasets (2020 international BCI competition track 4 and OpenBMI dataset). Our
experimental results showed that confidence-aware TL, which selects subjects
with small loss instances, improves the generalization performance of BCI.
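To make the selection mechanism concrete, the following is a minimal sketch assuming a PyTorch classifier and per-subject trial tensors; the two-network co-teaching step and the per-subject small-loss ranking follow the idea described in the abstract, but the function names, keep-ratio schedule, and training details are illustrative rather than the authors' implementation.

```python
# Illustrative co-teaching / small-loss subject selection (not the authors' code).
import torch
import torch.nn.functional as F

def co_teaching_step(net_a, net_b, opt_a, opt_b, x, y, keep_ratio):
    """One co-teaching update: each network is trained on the small-loss
    samples selected by its peer (the "small-loss trick")."""
    loss_a = F.cross_entropy(net_a(x), y, reduction="none")
    loss_b = F.cross_entropy(net_b(x), y, reduction="none")
    k = max(1, int(keep_ratio * len(y)))
    idx_a = torch.argsort(loss_a)[:k]   # samples net_a finds easy -> teach net_b
    idx_b = torch.argsort(loss_b)[:k]   # samples net_b finds easy -> teach net_a

    opt_a.zero_grad()
    F.cross_entropy(net_a(x[idx_b]), y[idx_b]).backward()
    opt_a.step()

    opt_b.zero_grad()
    F.cross_entropy(net_b(x[idx_a]), y[idx_a]).backward()
    opt_b.step()

def rank_source_subjects(net_a, net_b, subject_data):
    """Score each source subject by its mean loss under both peers; low-loss
    subjects are treated as high-confidence and kept for TL training."""
    scores = {}
    with torch.no_grad():
        for subject, (x, y) in subject_data.items():
            loss = 0.5 * (F.cross_entropy(net_a(x), y, reduction="none")
                          + F.cross_entropy(net_b(x), y, reduction="none"))
            scores[subject] = loss.mean().item()
    return sorted(scores, key=scores.get)  # most confident subjects first

# Usage sketch: run co_teaching_step over mini-batches pooled from all source
# subjects, then keep only the top-ranked subjects from rank_source_subjects
# as the training set for the target subject's model.
```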
Related papers
- Enhancing Information Maximization with Distance-Aware Contrastive Learning for Source-Free Cross-Domain Few-Shot Learning [55.715623885418815]
Cross-Domain Few-Shot Learning methods require access to source domain data to train a model in the pre-training phase.
Due to increasing concerns about data privacy and the desire to reduce data transmission and training costs, it is necessary to develop a CDFSL solution without accessing source data.
This paper proposes an Enhanced Information Maximization with Distance-Aware Contrastive Learning method to address these challenges.
arXiv Detail & Related papers (2024-03-04T12:10:24Z) - Semi-Federated Learning: Convergence Analysis and Optimization of A
- Semi-Federated Learning: Convergence Analysis and Optimization of A Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve this intractable problem, in which we provide the closed-form solutions to the beamformers.
arXiv Detail & Related papers (2023-10-04T03:32:39Z) - A Novel Semi-supervised Meta Learning Method for Subject-transfer
- A Novel Semi-supervised Meta Learning Method for Subject-transfer Brain-computer Interface [7.372748737217638]
We propose a semi-supervised meta learning method for subject-transfer learning in BCIs.
The proposed method learns a meta model on the existing subjects first, then fine-tunes the model in a semi-supervised learning manner.
It is significant for BCI applications where the labeled data are scarce or expensive while unlabeled data are readily available.
arXiv Detail & Related papers (2022-09-07T15:38:57Z) - Repairing Brain-Computer Interfaces with Fault-Based Data Acquisition [0.9697877942346906]
- Repairing Brain-Computer Interfaces with Fault-Based Data Acquisition [0.9697877942346906]
Brain-computer interfaces (BCIs) decode recorded neural signals from the brain and/or stimulate the brain with encoded neural signals.
BCIs have not yet been adopted for long-term, day-to-day use because of challenges related to reliability and robustness.
This paper presents a new methodology for characterizing, detecting, and localizing faults in BCIs.
arXiv Detail & Related papers (2022-03-20T23:49:50Z) - 2021 BEETL Competition: Advancing Transfer Learning for Subject
Independence & Heterogenous EEG Data Sets [89.84774119537087]
We design two transfer learning challenges around diagnostics and Brain-Computer-Interfacing (BCI).
Task 1 is centred on medical diagnostics, addressing automatic sleep stage annotation across subjects.
Task 2 is centred on Brain-Computer Interfacing (BCI), addressing motor imagery decoding across both subjects and data sets.
arXiv Detail & Related papers (2022-02-14T12:12:20Z) - Minimizing subject-dependent calibration for BCI with Riemannian
transfer learning [0.8399688944263843]
We present a scheme to train a classifier on data recorded from different subjects, in order to reduce calibration while preserving good performance.
To demonstrate the robustness of this approach, we conducted a meta-analysis on multiple datasets for three BCI paradigms.
arXiv Detail & Related papers (2021-11-23T18:37:58Z) - Toward Real-World BCI: CCSPNet, A Compact Subject-Independent Motor
- Toward Real-World BCI: CCSPNet, A Compact Subject-Independent Motor Imagery Framework [2.0741711594051377]
A conventional brain-computer interface (BCI) requires a complete data gathering, training, and calibration phase for each user before it can be used.
We propose a novel subject-independent BCI framework named CCSPNet that is trained on the motor imagery (MI) paradigm of a large-scale EEG signals database.
The proposed framework applies a wavelet kernel convolutional neural network (WKCNN) and a temporal convolutional neural network (TCNN) in order to represent and extract the diverse spectral features of EEG signals.
arXiv Detail & Related papers (2020-12-25T12:00:47Z) - Towards Accurate Knowledge Transfer via Target-awareness Representation
- Towards Accurate Knowledge Transfer via Target-awareness Representation Disentanglement [56.40587594647692]
We propose a novel transfer learning algorithm, introducing the idea of Target-awareness REpresentation Disentanglement (TRED)
TRED disentangles the knowledge relevant to the target task from the original source model and uses it as a regularizer during fine-tuning of the target model.
Experiments on various real-world datasets show that our method stably improves standard fine-tuning by more than 2% on average.
arXiv Detail & Related papers (2020-10-16T17:45:08Z) - A Simple but Tough-to-Beat Data Augmentation Approach for Natural
- A Simple but Tough-to-Beat Data Augmentation Approach for Natural Language Understanding and Generation [53.8171136907856]
We introduce a set of simple yet effective data augmentation strategies dubbed cutoff.
cutoff relies on sampling consistency and thus adds little computational overhead.
cutoff consistently outperforms adversarial training and achieves state-of-the-art results on the IWSLT2014 German-English dataset.
arXiv Detail & Related papers (2020-09-29T07:08:35Z) - Transfer Learning for Motor Imagery Based Brain-Computer Interfaces: A
- Transfer Learning for Motor Imagery Based Brain-Computer Interfaces: A Complete Pipeline [54.73337667795997]
Transfer learning (TL) has been widely used in motor imagery (MI) based brain-computer interfaces (BCIs) to reduce the calibration effort for a new subject.
This paper proposes that TL could be considered in all three components (spatial filtering, feature engineering, and classification) of MI-based BCIs.
arXiv Detail & Related papers (2020-07-03T23:44:21Z) - Transfer Learning for EEG-Based Brain-Computer Interfaces: A Review of
- Transfer Learning for EEG-Based Brain-Computer Interfaces: A Review of Progress Made Since 2016 [35.68916211292525]
A brain-computer interface (BCI) enables a user to communicate with a computer directly using brain signals.
EEG is sensitive to noise/artifacts and suffers from between-subject/within-subject non-stationarity.
It is difficult to build a generic pattern recognition model in an EEG-based BCI system that is optimal for different subjects.
arXiv Detail & Related papers (2020-04-13T16:44:55Z)