Improving SSVEP BCI Spellers With Data Augmentation and Language Models
- URL: http://arxiv.org/abs/2412.20052v1
- Date: Sat, 28 Dec 2024 06:52:11 GMT
- Title: Improving SSVEP BCI Spellers With Data Augmentation and Language Models
- Authors: Joseph Zhang, Ruiming Zhang, Kipngeno Koech, David Hill, Kateryna Shapovalenko
- Abstract summary: SSVEP spellers are a promising communication tool for individuals with disabilities.
Deep neural networks for SSVEP spellers often suffer from low accuracy and poor generalizability to unseen subjects.
We propose a hybrid approach combining data augmentation and language modeling to enhance the performance of SSVEP spellers.
- Score: 0.0
- License:
- Abstract: Steady-State Visual Evoked Potential (SSVEP) spellers are a promising communication tool for individuals with disabilities. This Brain-Computer Interface utilizes scalp potential data from electroencephalography (EEG) electrodes on a subject's head to decode the specific letters or arbitrary targets the subject is looking at on a screen. However, deep neural networks for SSVEP spellers often suffer from low accuracy and poor generalizability to unseen subjects, largely due to the high variability in EEG data. In this study, we propose a hybrid approach combining data augmentation and language modeling to enhance the performance of SSVEP spellers. Using the Benchmark dataset from Tsinghua University, we explore various data augmentation techniques, including frequency masking, time masking, and noise injection, to improve the robustness of deep learning models. Additionally, we integrate a language model (CharRNN) with EEGNet to incorporate linguistic context, significantly enhancing word-level decoding accuracy. Our results demonstrate accuracy improvements of up to 2.9 percent over the baseline, with time masking and language modeling showing the most promise. This work paves the way for more accurate and generalizable SSVEP speller systems, offering improved communication solutions for individuals with disabilities.
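As a concrete illustration of the augmentations named in the abstract, the sketch below applies time masking, noise injection, and an FFT-based frequency mask to a single multi-channel EEG epoch. The function names, the 250 Hz sampling rate, the channel count, and the masked band are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def time_mask(epoch, max_len=50):
    # Zero out a random contiguous span of time samples.
    # epoch: array of shape (channels, samples).
    out = epoch.copy()
    n_samples = epoch.shape[1]
    span = int(rng.integers(1, max_len + 1))
    start = int(rng.integers(0, n_samples - span + 1))
    out[:, start:start + span] = 0.0
    return out

def noise_injection(epoch, sigma=0.1):
    # Add zero-mean Gaussian noise scaled to the epoch's standard deviation.
    return epoch + rng.normal(0.0, sigma * epoch.std(), size=epoch.shape)

def frequency_mask(epoch, fs=250.0, band=(8.0, 12.0)):
    # Suppress one frequency band in the FFT domain (one plausible reading
    # of "frequency masking"; the paper may implement it differently).
    spec = np.fft.rfft(epoch, axis=1)
    freqs = np.fft.rfftfreq(epoch.shape[1], d=1.0 / fs)
    spec[:, (freqs >= band[0]) & (freqs <= band[1])] = 0.0
    return np.fft.irfft(spec, n=epoch.shape[1], axis=1)

# Example: augment a single hypothetical 9-channel, 1-second epoch at 250 Hz.
epoch = rng.standard_normal((9, 250))
augmented = [time_mask(epoch), noise_injection(epoch), frequency_mask(epoch)]
```

In the paper's pipeline, epochs augmented this way would feed EEGNet, whose per-character outputs are then combined with a CharRNN language model; that fusion step is not reproduced here.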
Related papers
- Decoding EEG Speech Perception with Transformers and VAE-based Data Augmentation [6.405846203953988]
Decoding speech from electroencephalography (EEG) has the potential to advance brain-computer interfaces (BCIs).
EEG-based speech decoding faces major challenges, such as noisy data, limited datasets, and poor performance on complex tasks like speech perception.
This study attempts to address these challenges by employing variational autoencoders (VAEs) for EEG data augmentation to improve data quality.
arXiv Detail & Related papers (2025-01-08T08:55:10Z)
- Evaluation Of P300 Speller Performance Using Large Language Models Along With Cross-Subject Training [2.231337550816627]
The P300 speller brain-computer interface (BCI) offers an alternative communication medium by leveraging a subject's EEG response to characters.
This study builds on that theme by addressing key limitations, particularly in training across multiple subjects.
arXiv Detail & Related papers (2024-10-19T17:23:16Z)
- Towards Linguistic Neural Representation Learning and Sentence Retrieval from Electroencephalogram Recordings [27.418738450536047]
We propose a two-step pipeline for converting EEG signals into sentences.
We first confirm that word-level semantic information can be learned from EEG data recorded during natural reading.
We then employ a training-free method to retrieve sentences based on the predictions from the EEG encoder.
arXiv Detail & Related papers (2024-08-08T03:40:25Z)
- XDBERT: Distilling Visual Information to BERT from Cross-Modal Systems to Improve Language Understanding [73.24847320536813]
This study explores distilling visual information from pretrained multimodal transformers to pretrained language encoders.
Our framework is inspired by the success of cross-modal encoders on visual-language tasks, but alters the learning objective to suit the language-heavy characteristics of NLU.
arXiv Detail & Related papers (2022-04-15T03:44:00Z) - Open Vocabulary Electroencephalography-To-Text Decoding and Zero-shot
Sentiment Classification [78.120927891455]
State-of-the-art brain-to-text systems have achieved great success in decoding language directly from brain signals using neural networks.
In this paper, we extend the problem to open vocabulary Electroencephalography (EEG)-To-Text Sequence-To-Sequence decoding and zero-shot sentence sentiment classification on natural reading tasks.
Our model achieves a 40.1% BLEU-1 score on EEG-To-Text decoding and a 55.6% F1 score on zero-shot EEG-based ternary sentiment classification, which significantly outperforms supervised baselines.
arXiv Detail & Related papers (2021-12-05T21:57:22Z) - Improving Classifier Training Efficiency for Automatic Cyberbullying
Detection with Feature Density [58.64907136562178]
We study the effectiveness of Feature Density (FD) using different linguistically-backed feature preprocessing methods.
We hypothesise that estimating dataset complexity allows for the reduction of the number of required experiments.
The difference in linguistic complexity of datasets allows us to additionally discuss the efficacy of linguistically-backed word preprocessing.
arXiv Detail & Related papers (2021-11-02T15:48:28Z) - Factorized Neural Transducer for Efficient Language Model Adaptation [51.81097243306204]
We propose a novel model, the factorized neural Transducer, which factorizes the blank and vocabulary prediction.
It is expected that this factorization can transfer the improvement of the standalone language model to the Transducer for speech recognition.
We demonstrate that the proposed factorized neural Transducer yields 15% to 20% WER improvements when out-of-domain text data is used for language model adaptation.
arXiv Detail & Related papers (2021-09-27T15:04:00Z) - Decoding EEG Brain Activity for Multi-Modal Natural Language Processing [9.35961671939495]
We present the first large-scale study to systematically analyze the potential of EEG brain activity data for improving natural language processing tasks.
We find that filtering the EEG signals into frequency bands is more beneficial than using the broadband signal.
For a range of word embedding types, EEG data improves binary and ternary sentiment classification and outperforms multiple baselines.
arXiv Detail & Related papers (2021-02-17T09:44:21Z) - SDA: Improving Text Generation with Self Data Augmentation [88.24594090105899]
We propose to improve the standard maximum likelihood estimation (MLE) paradigm by incorporating a self-imitation-learning phase for automatic data augmentation.
Unlike most existing sentence-level augmentation strategies, our method is more general and could be easily adapted to any MLE-based training procedure.
arXiv Detail & Related papers (2021-01-02T01:15:57Z) - Data Augmentation for Spoken Language Understanding via Pretrained
Language Models [113.56329266325902]
Training of spoken language understanding (SLU) models often faces the problem of data scarcity.
We put forward a data augmentation method using pretrained language models to boost the variability and accuracy of generated utterances.
arXiv Detail & Related papers (2020-04-29T04:07:12Z)