Towards a Fast Steady-State Visual Evoked Potentials (SSVEP)
Brain-Computer Interface (BCI)
- URL: http://arxiv.org/abs/2002.01171v2
- Date: Tue, 12 May 2020 05:40:04 GMT
- Title: Towards a Fast Steady-State Visual Evoked Potentials (SSVEP)
Brain-Computer Interface (BCI)
- Authors: Aung Aung Phyo Wai, Yangsong Zhang, Heng Guo, Ying Chi, Lei Zhang,
Xian-Sheng Hua, Seong Whan Lee and Cuntai Guan
- Abstract summary: We propose a training-free method combining spatial filtering and temporal alignment (CSTA) to recognize SSVEP responses in sub-second response time.
CSTA exploits linear correlation and non-linear similarity between steady-state responses and stimulus templates with complementary fusion to achieve desirable performance improvements.
- Score: 46.83815094477545
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A steady-state visual evoked potentials (SSVEP) brain-computer interface (BCI)
provides reliable responses, leading to high accuracy and information
throughput. However, achieving high accuracy typically requires a relatively
long time window of one second or more. Various methods have been proposed to
improve sub-second response accuracy through subject-specific training and
calibration; these achieve substantial performance improvements, but the
tedious calibration and subject-specific training cause user discomfort. We
therefore propose a training-free method that combines spatial filtering and
temporal alignment (CSTA) to recognize SSVEP responses in sub-second response
time. CSTA exploits
linear correlation and non-linear similarity between steady-state responses and
stimulus templates with complementary fusion to achieve desirable performance
improvements. We evaluated the performance of CSTA in terms of accuracy and
Information Transfer Rate (ITR) against both training-based and training-free
methods on two SSVEP datasets. In offline analysis, CSTA achieves maximum mean
accuracies of 97.43$\pm$2.26 % and 85.71$\pm$13.41 % on four-class and
forty-class SSVEP datasets, respectively, in sub-second response time. CSTA
yields significantly higher mean performance (p<0.001) than the training-free
method on both datasets. Compared with training-based methods, CSTA shows
29.33$\pm$19.65 % higher mean accuracy, with statistically significant
differences, in time windows shorter than 0.5 s. In longer time windows, CSTA
exhibits better or comparable, though not statistically significantly better,
performance than training-based methods. The proposed method thus offers
subject-independent SSVEP classification without training while achieving high
target-recognition performance in sub-second response time.
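To make the pipeline concrete, the following Python sketch shows a training-free SSVEP classifier in the spirit of CSTA: canonical-correlation-based spatial filtering against sine-cosine stimulus templates supplies a linear correlation score, a rank (Spearman) correlation stands in for the non-linear similarity, and the two are fused by a weighted sum. This is a minimal illustration, not the authors' implementation: the template construction, the choice of non-linear similarity, the fusion weight `alpha`, and the omission of the temporal-alignment step are all simplifying assumptions. The standard Wolpaw ITR formula is included since the abstract reports ITR.

```python
# Minimal sketch of training-free SSVEP recognition in the spirit of CSTA.
# NOT the authors' implementation: the templates, the non-linear similarity,
# the fusion weight, and the omission of temporal alignment are assumptions.
import numpy as np
from scipy.stats import spearmanr

def make_reference(freq, fs, n_samples, n_harmonics=3):
    """Sine-cosine reference template for one stimulus frequency."""
    t = np.arange(n_samples) / fs
    rows = []
    for h in range(1, n_harmonics + 1):
        rows.append(np.sin(2 * np.pi * h * freq * t))
        rows.append(np.cos(2 * np.pi * h * freq * t))
    return np.vstack(rows)                      # (2*n_harmonics, n_samples)

def cca_corr(X, Y):
    """Largest canonical correlation between X (channels x T) and Y (refs x T)."""
    X = X - X.mean(axis=1, keepdims=True)
    Y = Y - Y.mean(axis=1, keepdims=True)
    Qx, _ = np.linalg.qr(X.T)                   # orthonormal basis of each view
    Qy, _ = np.linalg.qr(Y.T)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def classify(eeg, freqs, fs, alpha=0.5):
    """Return the stimulus frequency with the highest fused score."""
    avg = eeg.mean(axis=0)                      # channel-averaged signal
    scores = []
    for f in freqs:
        Y = make_reference(f, fs, eeg.shape[1])
        linear = cca_corr(eeg, Y)               # linear correlation score
        # Stand-in "non-linear similarity": best rank (Spearman) correlation
        # between the averaged signal and any template row (an assumption).
        nonlin = max(abs(spearmanr(avg, row)[0]) for row in Y)
        scores.append(alpha * linear + (1 - alpha) * nonlin)
    return freqs[int(np.argmax(scores))]

def itr_bits_per_min(n, p, window_s):
    """Standard Wolpaw ITR for an n-class selection with accuracy p."""
    if p >= 1.0:
        bits = np.log2(n)
    elif p <= 1.0 / n:
        bits = 0.0
    else:
        bits = (np.log2(n) + p * np.log2(p)
                + (1 - p) * np.log2((1 - p) / (n - 1)))
    return bits * 60.0 / window_s

# Example: a four-class setup at 250 Hz with a 0.5 s window (synthetic data).
fs, freqs = 250, [8.0, 10.0, 12.0, 15.0]
eeg = np.random.randn(8, int(0.5 * fs))         # 8 channels, noise for shape only
print(classify(eeg, freqs, fs))
print(round(itr_bits_per_min(4, 0.9743, 0.5), 1), "bits/min")
```

With the paper's reported four-class figures (97.43 % accuracy in a 0.5 s window), the Wolpaw formula above gives roughly 214 bits/min, which illustrates why sub-second windows matter for ITR even at slightly lower accuracy.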
Related papers
- Impact of Hyperparameter Optimization on the Accuracy of Lightweight Deep Learning Models for Real-Time Image Classification [0.0]
This work analyzes the influence of hyperparameter adjustment on the accuracy and convergence behavior of seven efficient deep learning architectures. All models are trained on the ImageNet-1K dataset under consistent training settings. Results demonstrate that cosine learning rate decay and adjustable batch size may greatly boost both accuracy and convergence speed.
arXiv Detail & Related papers (2025-07-31T07:47:30Z) - SST: Self-training with Self-adaptive Thresholding for Semi-supervised Learning [42.764994681999774]
Self-training with Self-adaptive Thresholding (SST) is a novel, effective, and efficient SSL framework. SST adjusts class-specific thresholds based on the model's learning progress. Semi-SST-ViT-Huge achieves the best results on competitive ImageNet-1K SSL benchmarks.
arXiv Detail & Related papers (2025-05-31T08:34:04Z) - FIESTA: Fisher Information-based Efficient Selective Test-time Adaptation [2.876586838098149]
This paper introduces a novel Fisher-driven selective adaptation framework that dynamically identifies and updates only the most critical model parameters.
Experiments on the challenging AffWild2 benchmark demonstrate that our approach significantly outperforms existing TTA methods.
The proposed approach not only enhances recognition accuracy but also dramatically reduces computational overhead, making test-time adaptation more practical for real-world affective computing applications.
arXiv Detail & Related papers (2025-03-29T23:56:32Z) - SPEQ: Offline Stabilization Phases for Efficient Q-Learning in High Update-To-Data Ratio Reinforcement Learning [51.10866035483686]
High update-to-data (UTD) ratio algorithms in reinforcement learning (RL) improve sample efficiency but incur high computational costs, limiting real-world scalability.
We propose Offline Stabilization Phases for Efficient Q-Learning (SPEQ), an RL algorithm that combines low-UTD online training with periodic offline stabilization phases.
During these phases, Q-functions are fine-tuned with high UTD ratios on a fixed replay buffer, reducing redundant updates on suboptimal data.
arXiv Detail & Related papers (2025-01-15T09:04:19Z) - Optimizing Data Curation through Spectral Analysis and Joint Batch Selection (SALN) [0.0]
This paper introduces SALN, a method designed to prioritize and select samples within each batch rather than from the entire dataset.
The proposed method applies spectral analysis to identify the most informative data points within each batch, improving both training speed and accuracy.
It demonstrates up to an 8x reduction in training time and up to a 5% increase in accuracy over standard training methods.
arXiv Detail & Related papers (2024-12-22T15:38:36Z) - Incremental Self-training for Semi-supervised Learning [56.57057576885672]
IST is simple yet effective and fits existing self-training-based semi-supervised learning methods.
We verify the proposed IST on five datasets and two types of backbone, effectively improving the recognition accuracy and learning speed.
arXiv Detail & Related papers (2024-04-14T05:02:00Z) - Fast Machine Unlearning Without Retraining Through Selective Synaptic
Dampening [51.34904967046097]
Selective Synaptic Dampening (SSD) is a novel two-step, post hoc, retrain-free approach to machine unlearning that is fast, performant, and does not require long-term storage of the training data.
arXiv Detail & Related papers (2023-08-15T11:30:45Z) - Robust Learning with Progressive Data Expansion Against Spurious
Correlation [65.83104529677234]
We study the learning process of a two-layer nonlinear convolutional neural network in the presence of spurious features.
Our analysis suggests that imbalanced data groups and easily learnable spurious features can lead to the dominance of spurious features during the learning process.
We propose a new training algorithm called PDE that efficiently enhances the model's robustness for a better worst-group performance.
arXiv Detail & Related papers (2023-06-08T05:44:06Z) - Efficient yet Competitive Speech Translation: FBK@IWSLT2022 [16.863166577241152]
We show that a simple method that looks at the ratio between source and target characters yields a quality improvement of 1 BLEU.
Towards the same goal of training cost reduction, we participate in the simultaneous task with the same model trained for offline ST.
The effectiveness of our lightweight training strategy is shown by the high score obtained on the MuST-C en-de corpus (26.7 BLEU).
arXiv Detail & Related papers (2022-05-05T13:13:48Z) - An Adaptive Task-Related Component Analysis Method for SSVEP recognition [0.913755431537592]
Steady-state visual evoked potential (SSVEP) recognition methods typically rely on learning from the subject's calibration data.
This study develops a new method to learn from limited calibration data.
arXiv Detail & Related papers (2022-04-17T15:12:40Z) - Self-critical Sequence Training for Automatic Speech Recognition [25.06635361326706]
We propose an optimization method called self-critical sequence training (SCST) to make the training procedure much closer to the testing phase.
As a reinforcement learning (RL) based method, SCST utilizes a customized reward function to associate the training criterion and WER.
We conducted experiments on both clean and noisy speech datasets, and the results show that the proposed SCST respectively achieves 8.7% and 7.8% relative improvements over the baseline in terms of WER.
arXiv Detail & Related papers (2022-04-13T09:13:32Z) - Online Convolutional Re-parameterization [51.97831675242173]
We present online convolutional re-parameterization (OREPA), a two-stage pipeline, aiming to reduce the huge training overhead by squeezing the complex training-time block into a single convolution.
Compared with the state-of-the-art re-param models, OREPA is able to save the training-time memory cost by about 70% and accelerate the training speed by around 2x.
We also conduct experiments on object detection and semantic segmentation and show consistent improvements on the downstream tasks.
arXiv Detail & Related papers (2022-04-02T09:50:19Z) - Unsupervised Domain Adaptation for Speech Recognition via Uncertainty
Driven Self-Training [55.824641135682725]
Domain adaptation experiments using WSJ as a source domain and TED-LIUM 3 as well as SWITCHBOARD show that up to 80% of the performance of a system trained on ground-truth data can be recovered.
arXiv Detail & Related papers (2020-11-26T18:51:26Z) - SASL: Saliency-Adaptive Sparsity Learning for Neural Network
Acceleration [20.92912642901645]
We propose a Saliency-Adaptive Sparsity Learning (SASL) approach for further optimization.
Our method can reduce the FLOPs of ResNet-50 by 49.7% with negligible 0.39% top-1 and 0.05% top-5 accuracy degradation.
arXiv Detail & Related papers (2020-03-12T16:49:37Z)