Minimizing subject-dependent calibration for BCI with Riemannian
transfer learning
- URL: http://arxiv.org/abs/2111.12071v1
- Date: Tue, 23 Nov 2021 18:37:58 GMT
- Title: Minimizing subject-dependent calibration for BCI with Riemannian
transfer learning
- Authors: Salim Khazem and Sylvain Chevallier and Quentin Barthélemy and Karim
Haroun and Camille Noûs
- Abstract summary: We present a scheme to train a classifier on data recorded from different subjects, to reduce calibration while preserving good performance.
To demonstrate the robustness of this approach, we conducted a meta-analysis on multiple datasets for three BCI paradigms.
- Score: 0.8399688944263843
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Calibration is still an important issue for user experience in
Brain-Computer Interfaces (BCI). Common experimental designs often involve a
lengthy training period that increases cognitive fatigue before the BCI can
even be used. Reducing or suppressing this subject-dependent calibration is
possible by relying on advanced machine learning techniques, such as transfer
learning. Building on Riemannian BCI, we present a simple and effective scheme
to train a classifier on data recorded from different subjects, reducing
calibration while preserving good performance. The main novelty of this paper
is to propose a unique approach that can be applied to very different
paradigms. To demonstrate the robustness of this approach, we conducted a
meta-analysis on multiple datasets for three BCI paradigms: event-related
potentials (P300), motor imagery and SSVEP. Relying on the MOABB open-source
framework to ensure the reproducibility of the experiments and the statistical
analysis, the results clearly show that the proposed approach can be applied
to any kind of BCI paradigm and, in most cases, significantly improves
classifier reliability. We point out some key features to further improve
transfer learning methods.
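Riemannian BCI pipelines like the one summarized above classify each trial by its spatial covariance matrix and measure distances between covariances on the manifold of symmetric positive-definite matrices. As an illustrative sketch only — not the paper's exact pipeline, which typically uses the affine-invariant metric (e.g. via pyriemann) — here is a minimal numpy-only minimum-distance-to-mean (MDM) classifier under the simpler log-Euclidean metric; all function names are hypothetical:

```python
import numpy as np

def logm_spd(C):
    # Matrix logarithm of a symmetric positive-definite matrix
    # via eigendecomposition: log(C) = V diag(log w) V^T.
    w, V = np.linalg.eigh(C)
    return (V * np.log(w)) @ V.T

def fit_mdm(covs, labels):
    # Log-Euclidean class means: Euclidean average of the matrix logarithms
    # of the trial covariances belonging to each class.
    return {c: np.mean([logm_spd(C) for C, y in zip(covs, labels) if y == c],
                       axis=0)
            for c in np.unique(labels)}

def predict_mdm(means, covs):
    # Assign each trial covariance to the class whose mean is nearest
    # in log-Euclidean (Frobenius) distance.
    preds = []
    for C in covs:
        L = logm_spd(C)
        preds.append(min(means, key=lambda c: np.linalg.norm(L - means[c])))
    return np.array(preds)
```

In a transfer-learning setting of the kind the abstract describes, the class means would be fitted on covariances pooled from several source subjects, then used to score a new target subject's trials with little or no subject-specific calibration.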
Related papers
- A Bayesian Approach to Data Point Selection [24.98069363998565]
Data point selection (DPS) is becoming a critical topic in deep learning.
Existing approaches to DPS are predominantly based on a bi-level optimisation (BLO) formulation.
We propose a novel Bayesian approach to DPS.
arXiv Detail & Related papers (2024-11-06T09:04:13Z)
- Exploring new territory: Calibration-free decoding for c-VEP BCI [0.0]
This study explores two zero-training methods aimed at enhancing the usability of brain-computer interfaces (BCIs)
We introduce a novel method rooted in the event-related potential (ERP) domain, unsupervised mean (UMM)
We compare UMM to the state-of-the-art c-VEP zero-training method that uses canonical correlation analysis (CCA)
arXiv Detail & Related papers (2024-03-22T13:20:33Z)
- On Task Performance and Model Calibration with Supervised and Self-Ensembled In-Context Learning [71.44986275228747]
In-context learning (ICL) has become an efficient approach propelled by the recent advancements in large language models (LLMs)
However, both paradigms are prone to suffer from the critical problem of overconfidence (i.e., miscalibration)
arXiv Detail & Related papers (2023-12-21T11:55:10Z)
- Majorization-Minimization for sparse SVMs [46.99165837639182]
Support Vector Machines (SVMs) were introduced several decades ago for binary classification in a supervised framework.
They often outperform other supervised methods and remain one of the most popular approaches in the machine learning arena.
In this work, we investigate the training of SVMs through a smooth sparse-promoting-regularized squared hinge loss minimization.
arXiv Detail & Related papers (2023-08-31T17:03:16Z)
- End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z)
- A Novel Semi-supervised Meta Learning Method for Subject-transfer Brain-computer Interface [7.372748737217638]
We propose a semi-supervised meta learning method for subject-transfer learning in BCIs.
The proposed method first learns a meta model from the existing subjects, then fine-tunes the model in a semi-supervised manner.
It is significant for BCI applications where the labeled data are scarce or expensive while unlabeled data are readily available.
arXiv Detail & Related papers (2022-09-07T15:38:57Z)
- Confidence-Aware Subject-to-Subject Transfer Learning for Brain-Computer Interface [3.2550305883611244]
The inter/intra-subject variability of electroencephalography (EEG) makes the practical use of the brain-computer interface (BCI) difficult.
We propose a BCI framework using only high-confidence subjects for TL training.
In our framework, a deep neural network selects useful subjects for the TL process and excludes noisy subjects, using a co-teaching algorithm based on the small-loss trick.
arXiv Detail & Related papers (2021-12-15T15:23:23Z)
- Learning with Multiclass AUC: Theory and Algorithms [141.63211412386283]
Area under the ROC curve (AUC) is a well-known ranking metric for problems such as imbalanced learning and recommender systems.
In this paper, we start an early trial to consider the problem of learning multiclass scoring functions via optimizing multiclass AUC metrics.
arXiv Detail & Related papers (2021-07-28T05:18:10Z)
- Accurate and Robust Feature Importance Estimation under Distribution Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z)
- Learning Diverse Representations for Fast Adaptation to Distribution Shift [78.83747601814669]
We present a method for learning multiple models, incorporating an objective that pressures each to learn a distinct way to solve the task.
We demonstrate our framework's ability to facilitate rapid adaptation to distribution shift.
arXiv Detail & Related papers (2020-06-12T12:23:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.