Boosting Template-based SSVEP Decoding by Cross-domain Transfer Learning
- URL: http://arxiv.org/abs/2102.05194v1
- Date: Wed, 10 Feb 2021 00:14:06 GMT
- Title: Boosting Template-based SSVEP Decoding by Cross-domain Transfer Learning
- Authors: Kuan-Jung Chiang, Chun-Shu Wei, Masaki Nakanishi and Tzyy-Ping Jung
- Abstract summary: We enhance the state-of-the-art template-based SSVEP decoding by incorporating least-squares transformation (LST)-based transfer learning.
Study results verified the efficacy of LST in mitigating the variability of SSVEPs when transferring existing data across domains.
- Score: 2.454595178503407
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Objective: This study aims to establish a generalized transfer-learning
framework for boosting the performance of steady-state visual evoked potential
(SSVEP)-based brain-computer interfaces (BCIs) by leveraging cross-domain data
transferring. Approach: We enhanced the state-of-the-art template-based SSVEP
decoding by incorporating a least-squares transformation (LST)-based
transfer-learning scheme to leverage calibration data across multiple domains
(sessions, subjects, and EEG montages). Main results: Study results verified
the efficacy of LST in mitigating the variability of SSVEPs when transferring
existing data across domains. Furthermore, the LST-based method achieved
significantly higher SSVEP-decoding accuracy than the standard task-related
component analysis (TRCA)-based method and the non-LST naive transfer-learning
method. Significance: This study demonstrated the capability of the LST-based
transfer learning to leverage existing data across subjects and/or devices with
an in-depth investigation of its rationale and behavior in various
circumstances. The proposed framework significantly improved the SSVEP decoding
accuracy over the standard TRCA approach when calibration data are limited. Its
performance in calibration reduction could facilitate plug-and-play SSVEP-based
BCIs and further practical applications.
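For illustration, the least-squares transformation at the core of this framework has a simple closed form: given a source-domain trial X and a target-domain template Y, find the channel-mixing matrix P minimizing ||Y - PX||_F^2. The sketch below (plain NumPy with toy shapes; not the authors' implementation) shows this alignment step:

```python
import numpy as np

def lst_align(source_trial, target_template):
    """Least-squares transformation (LST): find P minimizing
    || Y - P X ||_F^2, whose closed form is P = Y X^T (X X^T)^{-1}.
    source_trial X:      (source_channels, samples)
    target_template Y:   (target_channels, samples)
    Returns P with shape (target_channels, source_channels)."""
    X, Y = source_trial, target_template
    # lstsq solves X^T P^T ~= Y^T, the transposed normal equations
    P_T, *_ = np.linalg.lstsq(X.T, Y.T, rcond=None)
    return P_T.T

# Hypothetical toy example: a known linear channel mixing is recovered.
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 250))   # 8-channel source trial, 250 samples
A = rng.standard_normal((8, 8))     # ground-truth mixing matrix
Y = A @ X                           # target-domain template
P = lst_align(X, Y)
aligned = P @ X                     # source trial mapped into target domain
```

Aligned trials can then be pooled with the target subject's own calibration data before building decoding templates, which is the sense in which LST "transfers" existing data across sessions, subjects, or montages.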
Related papers
- iFuzzyTL: Interpretable Fuzzy Transfer Learning for SSVEP BCI System [24.898026682692688]
This study explores advanced classification techniques leveraging interpretable fuzzy transfer learning (iFuzzyTL).
iFuzzyTL refines input signal processing and classification in a human-interpretable format by integrating fuzzy inference systems and attention mechanisms.
The model's efficacy is demonstrated across three datasets.
arXiv Detail & Related papers (2024-10-16T06:07:23Z)
- Learn from the Learnt: Source-Free Active Domain Adaptation via Contrastive Sampling and Visual Persistence [60.37934652213881]
Domain Adaptation (DA) facilitates knowledge transfer from a source domain to a related target domain.
This paper investigates a practical DA paradigm, namely Source data-Free Active Domain Adaptation (SFADA), where source data becomes inaccessible during adaptation.
We present learn from the learnt (LFTL), a novel paradigm for SFADA to leverage the learnt knowledge from the source pretrained model and actively iterated models without extra overhead.
arXiv Detail & Related papers (2024-07-26T17:51:58Z)
- Physics-informed and Unsupervised Riemannian Domain Adaptation for Machine Learning on Heterogeneous EEG Datasets [53.367212596352324]
We propose an unsupervised approach leveraging EEG signal physics.
We map EEG channels to fixed positions using field interpolation, framed as source-free domain adaptation.
Our method demonstrates robust performance in brain-computer interface (BCI) tasks and potential biomarker applications.
arXiv Detail & Related papers (2024-03-07T16:17:33Z)
- Boosting Transformer's Robustness and Efficacy in PPG Signal Artifact Detection with Self-Supervised Learning [0.0]
This study addresses the underutilization of abundant unlabeled data by employing self-supervised learning (SSL) to extract latent features from this data.
Our experiments demonstrate that SSL significantly enhances the Transformer model's ability to learn representations.
This approach holds promise for broader applications in PICU environments, where annotated data is often limited.
arXiv Detail & Related papers (2024-01-02T04:00:48Z)
- SSVEP-DAN: A Data Alignment Network for SSVEP-based Brain Computer Interfaces [2.1192321523349404]
Steady-state visual-evoked potential (SSVEP)-based brain-computer interfaces (BCIs) offer a non-invasive means of communication through high-speed speller systems.
We present SSVEP-DAN, the first dedicated neural network model designed for aligning SSVEP data across different domains.
arXiv Detail & Related papers (2023-11-21T15:18:29Z)
- PMU measurements based short-term voltage stability assessment of power systems via deep transfer learning [2.1303885995425635]
This paper proposes a novel phasor measurement unit (PMU) measurements-based STVSA method by using deep transfer learning.
It employs temporal ensembling for sample labeling and utilizes least squares generative adversarial networks (LSGAN) for data augmentation, enabling effective deep learning on small-scale datasets.
Experimental results on the IEEE 39-bus test system demonstrate that the proposed method improves model evaluation accuracy by approximately 20% through transfer learning.
arXiv Detail & Related papers (2023-08-07T23:44:35Z)
- An Information-Theoretic Perspective on Variance-Invariance-Covariance Regularization [52.44068740462729]
We present an information-theoretic perspective on the VICReg objective.
We derive a generalization bound for VICReg, revealing its inherent advantages for downstream tasks.
We introduce a family of SSL methods derived from information-theoretic principles that outperform existing SSL techniques.
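For context, the VICReg objective analyzed in this paper combines an invariance term with variance and covariance regularizers. A sketch of the standard formulation (λ, μ, ν denote trade-off weights; Z, Z' are the two embedding batches):

```latex
% VICReg objective (standard formulation)
\mathcal{L}(Z, Z') = \lambda\, s(Z, Z')
  + \mu\,\big[v(Z) + v(Z')\big]
  + \nu\,\big[c(Z) + c(Z')\big]
% s(Z, Z'): mean-squared distance between paired embeddings (invariance)
% v(Z) = \frac{1}{d}\sum_{j=1}^{d} \max\!\big(0,\ \gamma - \sqrt{\operatorname{Var}(z_j) + \epsilon}\big)
%        (variance hinge, prevents collapse)
% c(Z) = \frac{1}{d}\sum_{i \neq j} [C(Z)]_{ij}^{2}
%        (off-diagonal covariance penalty, decorrelates dimensions)
```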
arXiv Detail & Related papers (2023-03-01T16:36:25Z)
- An Adaptive Task-Related Component Analysis Method for SSVEP recognition [0.913755431537592]
Steady-state visual evoked potential (SSVEP) recognition methods rely on learning from the subject's calibration data.
This study develops a new method to learn from limited calibration data.
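As background, the standard (non-adaptive) TRCA baseline that such methods build on computes a spatial filter maximizing inter-trial covariance, via the leading eigenvector of Q^{-1}S. A minimal NumPy sketch (toy shapes assumed; not the paper's adaptive variant):

```python
import numpy as np

def trca_filter(trials):
    """Task-related component analysis (TRCA) spatial filter.
    trials: (n_trials, channels, samples). Returns w of shape (channels,),
    the leading eigenvector of Q^{-1} S, where S sums covariances between
    distinct trials and Q is the covariance of the concatenated data."""
    n_trials, n_ch, _ = trials.shape
    centered = trials - trials.mean(axis=2, keepdims=True)
    # S via the identity: sum_{i != j} X_i X_j^T
    #                     = (sum_i X_i)(sum_i X_i)^T - sum_i X_i X_i^T
    total = centered.sum(axis=0)
    S = total @ total.T - sum(t @ t.T for t in centered)
    # Q: covariance of the concatenated centered trials
    concat = centered.transpose(1, 0, 2).reshape(n_ch, -1)
    Q = concat @ concat.T
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Q, S))
    return np.real(eigvecs[:, np.argmax(np.real(eigvals))])

# Toy check: six trials sharing one task-related component plus noise.
rng = np.random.default_rng(1)
s = np.sin(2 * np.pi * 8 * np.arange(250) / 250)  # common 8 Hz component
a = rng.standard_normal(8)                        # fixed channel mixing
trials = np.stack([np.outer(a, s) + 0.1 * rng.standard_normal((8, 250))
                   for _ in range(6)])
w = trca_filter(trials)
# filtered single trials should be nearly identical up to sign
corr = np.corrcoef(w @ trials[0], w @ trials[1])[0, 1]
```

The filtered trials are highly correlated because the filter suppresses trial-specific noise while preserving the shared task-related component, which is what makes TRCA templates effective for SSVEP decoding.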
arXiv Detail & Related papers (2022-04-17T15:12:40Z) - A Variational Bayesian Approach to Learning Latent Variables for
Acoustic Knowledge Transfer [55.20627066525205]
We propose a variational Bayesian (VB) approach to learning distributions of latent variables in deep neural network (DNN) models.
Our proposed VB approach can obtain good improvements on target devices, and consistently outperforms 13 state-of-the-art knowledge transfer algorithms.
arXiv Detail & Related papers (2021-10-16T15:54:01Z) - Unsupervised Domain Adaptation for Speech Recognition via Uncertainty
Driven Self-Training [55.824641135682725]
Domain adaptation experiments using WSJ as a source domain and TED-LIUM 3 as well as SWITCHBOARD show that up to 80% of the performance of a system trained on ground-truth data can be recovered.
arXiv Detail & Related papers (2020-11-26T18:51:26Z) - Towards Accurate Knowledge Transfer via Target-awareness Representation
Disentanglement [56.40587594647692]
We propose a novel transfer learning algorithm introducing the idea of Target-awareness REpresentation Disentanglement (TRED).
TRED disentangles the knowledge relevant to the target task from the original source model and uses it as a regularizer while fine-tuning the target model.
Experiments on various real-world datasets show that our method stably improves standard fine-tuning by more than 2% on average.
arXiv Detail & Related papers (2020-10-16T17:45:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.