Transfer Learning of an Ensemble of DNNs for SSVEP BCI Spellers without
User-Specific Training
- URL: http://arxiv.org/abs/2209.01511v1
- Date: Sat, 3 Sep 2022 23:24:47 GMT
- Title: Transfer Learning of an Ensemble of DNNs for SSVEP BCI Spellers without
User-Specific Training
- Authors: Osman Berke Guney, Huseyin Ozkan
- Abstract summary: Current high-performing SSVEP BCI spellers require a lengthy and tiring user-specific training stage for each new user before the system can be used.
To ensure practicality, we propose a novel target identification method based on an ensemble of deep neural networks (DNNs).
We exploit existing literature datasets from participants of previously conducted EEG experiments to first train a global target identifier DNN, which is then fine-tuned to each participant.
We transfer this ensemble of fine-tuned DNNs to the new user instance, determine the k most representative DNNs according to the participants' statistical similarities to the new user, and predict the target character through a weighted combination of the ensemble predictions.
- Score: 3.6144103736375857
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Objective: Steady-state visually evoked potentials (SSVEPs), measured with
EEG (electroencephalogram), yield decent information transfer rates (ITRs) in
brain-computer interface (BCI) spellers. However, the current high-performing
SSVEP BCI spellers in the literature require a lengthy and tiring
user-specific training stage for each new user to adapt the system, including data
collection with EEG experiments, algorithm training, and calibration (all
before actual use of the system). This impedes the widespread use of BCIs.
To ensure practicality, we propose a novel target identification method
based on an ensemble of deep neural networks (DNNs), which does not require any
sort of user-specific training. Method: We exploit existing datasets
from the literature, collected from participants of previously conducted EEG experiments, to first train a
global target identifier DNN, which is then fine-tuned to each
participant. We transfer this ensemble of fine-tuned DNNs to the new user
instance, determine the k most representative DNNs according to the
participants' statistical similarities to the new user, and predict the target
character through a weighted combination of the ensemble predictions. Results:
On the two large-scale datasets, benchmark and BETA, our method achieves
ITRs of 155.51 bits/min and 114.64 bits/min, respectively. Code is available for
reproducibility: https://github.com/osmanberke/Ensemble-of-DNNs
Conclusion: The proposed method significantly outperforms all state-of-the-art
alternatives for all stimulation durations in [0.2-1.0] seconds on both datasets.
Significance: Our Ensemble-DNN method has the potential to promote the
practical widespread deployment of BCI spellers in daily life, as we provide
the highest performance while enabling immediate system use without any
user-specific training.
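
For reference, the ITR figures above are conventionally computed with the standard Wolpaw formula used throughout the SSVEP speller literature (this is the usual definition, not a formula quoted from the paper itself):

```latex
\mathrm{ITR} = \frac{60}{T}\left[\log_2 M + P \log_2 P + (1 - P)\log_2\frac{1 - P}{M - 1}\right] \ \text{bits/min}
```

where M is the number of selectable characters, P is the classification accuracy, and T is the time in seconds needed for one selection (stimulation plus gaze-shift time).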
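The ensemble step described in the Method can be summarized with a minimal sketch. All names here (`dnns`, `similarities`, `select` logic) are hypothetical illustrations, not the authors' released code; see their GitHub repository for the actual implementation.

```python
import numpy as np

def predict_with_ensemble(x_new, dnns, similarities, k=7):
    """Weighted ensemble prediction as sketched in the abstract.

    x_new        : multi-channel EEG segment from the new user
    dnns         : list of per-participant fine-tuned DNNs (callables
                   returning class scores)
    similarities : statistical similarity of each source participant
                   to the new user (higher = more representative)
    k            : number of most representative DNNs to keep
    """
    # Pick the k source participants most similar to the new user.
    top = np.argsort(similarities)[-k:]
    # Normalize their similarities into combination weights.
    w = np.asarray(similarities)[top]
    w = w / w.sum()
    # Weighted combination of the selected DNNs' predictions.
    scores = sum(wi * dnns[i](x_new) for wi, i in zip(w, top))
    return int(np.argmax(scores))  # index of the predicted character
```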
Related papers
- BiDense: Binarization for Dense Prediction [62.70804353158387]
BiDense is a generalized binary neural network (BNN) designed for efficient and accurate dense prediction tasks.
BiDense incorporates two key techniques: the Distribution-adaptive Binarizer (DAB) and the Channel-adaptive Full-precision Bypass (CFB).
arXiv Detail & Related papers (2024-11-15T16:46:04Z)
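A rough, hypothetical sketch of what a distribution-adaptive binarizer can look like; this is an illustrative assumption, not BiDense's actual DAB formulation:

```python
import torch

def distribution_adaptive_binarize(x: torch.Tensor) -> torch.Tensor:
    # Shift by the mean and scale by the mean absolute deviation so the
    # binarization adapts to the input distribution (inference-time view;
    # training would additionally need a straight-through estimator).
    mu = x.mean()
    centered = x - mu
    alpha = centered.abs().mean()     # distribution-dependent scale
    binary = torch.sign(centered)     # values in {-1, 0, +1}
    binary[binary == 0] = 1.0         # map exact zeros to +1
    return alpha * binary
```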
- Source-Free Domain Adaptation for SSVEP-based Brain-Computer Interfaces [1.4364491422470593]
This paper presents a source-free domain adaptation method for steady-state visually evoked potential (SSVEP) based brain-computer interface (BCI) spellers.
Achieving a high information transfer rate (ITR) with the most prominent methods requires an extensive calibration period before the system can be used.
We propose a novel method that adapts a powerful deep neural network (DNN) pre-trained on data from source domains to the new user.
arXiv Detail & Related papers (2023-05-27T08:02:46Z)
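A generic sketch of source-free adaptation by entropy minimization, one common approach when source data is unavailable; the paper's exact adaptation objective may differ:

```python
import torch
import torch.nn.functional as F

def adapt_to_new_user(model, unlabeled_eeg, steps=100, lr=1e-4):
    """Fine-tune a pre-trained DNN on the new user's unlabeled EEG
    by minimizing prediction entropy (no source data required)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(steps):
        for x in unlabeled_eeg:  # batches of EEG segments
            p = F.softmax(model(x), dim=-1)
            entropy = -(p * p.clamp_min(1e-8).log()).sum(dim=-1).mean()
            opt.zero_grad()
            entropy.backward()
            opt.step()
    return model
```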
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
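snnTorch is a real, publicly available package; a minimal leaky integrate-and-fire example in standard snnTorch usage follows (unrelated to the IPU-specific optimizations the paper adds):

```python
import torch
import snntorch as snn

lif = snn.Leaky(beta=0.9)      # leaky integrate-and-fire neuron
mem = lif.init_leaky()         # initialize membrane potential

spk_rec = []
for step in range(10):         # unroll over 10 time steps
    cur = torch.rand(1, 100)   # synthetic input current
    spk, mem = lif(cur, mem)   # spike output and updated membrane
    spk_rec.append(spk)
```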
- Rethinking Nearest Neighbors for Visual Classification [56.00783095670361]
k-NN is a lazy learning method that aggregates the distances between the test image and its top-k neighbors in a training set.
We adopt k-NN with pre-trained visual representations produced by either supervised or self-supervised methods in two steps.
Via extensive experiments on a wide range of classification tasks, our study reveals the generality and flexibility of k-NN integration.
arXiv Detail & Related papers (2021-12-15T20:15:01Z)
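A minimal sketch of the two-step pipeline described above: extract features with a pre-trained backbone, then classify by nearest neighbors. The choice of backbone and k are illustrative assumptions:

```python
import torch
import torchvision.models as models
from sklearn.neighbors import KNeighborsClassifier

# Step 1: extract features with a pre-trained (supervised or
# self-supervised) backbone, its classification head removed.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def features(images):  # images: (N, 3, 224, 224) tensor
    return backbone(images).numpy()

# Step 2: classify test images by their top-k training neighbors.
knn = KNeighborsClassifier(n_neighbors=20, metric="cosine")
# knn.fit(features(train_images), train_labels)
# preds = knn.predict(features(test_images))
```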
- Training ELECTRA Augmented with Multi-word Selection [53.77046731238381]
We present a new text encoder pre-training method that improves ELECTRA based on multi-task learning.
Specifically, we train the discriminator to simultaneously detect replaced tokens and select original tokens from candidate sets.
arXiv Detail & Related papers (2021-05-31T23:19:00Z)
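A schematic of the two-task discriminator loss (replaced-token detection plus original-token selection). Tensor shapes and the equal weighting are illustrative assumptions, not the paper's exact formulation:

```python
import torch.nn.functional as F

def discriminator_loss(rtd_logits, replaced, sel_logits, original_idx):
    """rtd_logits   : (batch, seq) replaced-token detection scores
    replaced     : (batch, seq) 1 if a token was replaced, else 0
    sel_logits   : (n_replaced, n_candidates) scores over candidate sets
    original_idx : (n_replaced,) index of the original token per set
    """
    detection = F.binary_cross_entropy_with_logits(rtd_logits, replaced.float())
    selection = F.cross_entropy(sel_logits, original_idx)
    return detection + selection  # equal weighting, illustrative
```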
- Evaluating Deep Neural Network Ensembles by Majority Voting cum Meta-Learning scheme [3.351714665243138]
We propose an ensemble of seven independent Deep Neural Networks (DNNs) for classifying each new data instance.
One-seventh of the data is deleted and replenished by bootstrap sampling from the remaining samples.
All the algorithms in this paper have been tested on five benchmark datasets.
arXiv Detail & Related papers (2021-05-09T03:10:56Z)
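A compact sketch of the data-resampling and majority-voting steps described above; function names and the voting rule are illustrative:

```python
import numpy as np

def bootstrap_views(X, y, n_models=7, seed=0):
    """For each of the seven DNNs, delete one-seventh of the data and
    replenish it by bootstrap sampling from the remaining samples."""
    rng = np.random.default_rng(seed)
    n = len(X)
    for _ in range(n_models):
        drop = rng.choice(n, size=n // 7, replace=False)
        keep = np.setdiff1d(np.arange(n), drop)
        refill = rng.choice(keep, size=len(drop), replace=True)
        idx = np.concatenate([keep, refill])
        yield X[idx], y[idx]

def majority_vote(models, x):
    votes = [int(m(x).argmax()) for m in models]  # one vote per DNN
    return np.bincount(votes).argmax()            # most common class
```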
- S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration [74.5509794733707]
We present a novel guided learning paradigm that distills knowledge from real-valued networks into binary networks through the final prediction distribution.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods.
arXiv Detail & Related papers (2021-02-17T18:59:28Z)
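Distilling on the final prediction distribution is typically done with a temperature-scaled KL divergence; a standard sketch of such a loss (the paper's exact calibration objective may differ):

```python
import torch.nn.functional as F

def distribution_calibration_loss(student_logits, teacher_logits, tau=1.0):
    """Guide the 1-bit (binary) student with the real-valued teacher's
    final prediction distribution via KL divergence."""
    t = F.softmax(teacher_logits / tau, dim=-1)
    s = F.log_softmax(student_logits / tau, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * tau**2
```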
- Multi-Sample Online Learning for Spiking Neural Networks based on Generalized Expectation Maximization [42.125394498649015]
Spiking Neural Networks (SNNs) capture some of the efficiency of biological brains by processing information through binary neural dynamic activations.
This paper proposes to leverage multiple compartments that sample independent spiking signals while sharing synaptic weights.
The key idea is to use these signals to obtain more accurate statistical estimates of the log-likelihood training criterion, as well as of its gradient.
arXiv Detail & Related papers (2021-02-05T16:39:42Z)
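As a loose illustration of the multi-sample idea (averaging independent stochastic estimates drawn under shared weights to reduce variance); this is not the paper's GEM-based derivation:

```python
import torch

def multi_sample_log_likelihood(log_prob_fn, x, y, n_compartments=4):
    """Average per-compartment log-likelihood estimates; each compartment
    draws independent spiking signals while the synaptic weights
    (inside log_prob_fn) are shared."""
    estimates = [log_prob_fn(x, y) for _ in range(n_compartments)]
    return torch.stack(estimates).mean()  # lower-variance estimate
```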
- Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity [57.275753974812666]
Federated learning involves learning from data samples distributed across a network of clients while the data remains local.
In this paper, we propose a novel straggler-resilient federated learning method that incorporates statistical characteristics of the clients' data to adaptively select the clients in order to speed up the learning procedure.
arXiv Detail & Related papers (2020-12-28T19:21:14Z)
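An illustrative client-selection rule in the spirit of the summary above: start with the fastest clients and progressively admit slower ones. The schedule here is a simple assumption; the paper derives its schedule from the statistical accuracy of the clients' data:

```python
import numpy as np

def select_clients(speeds, round_idx, total_rounds):
    """Return indices of active clients: fastest first, with the active
    set growing as training needs more data for statistical accuracy."""
    order = np.argsort(speeds)[::-1]               # fastest first
    frac = min(1.0, (round_idx + 1) / total_rounds)
    n_active = max(1, int(frac * len(order)))
    return order[:n_active]
```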
- A Deep Neural Network for SSVEP-based Brain-Computer Interfaces [3.0595138995552746]
Target identification in brain-computer interface (BCI) spellers refers to the electroencephalogram (EEG) classification for predicting the target character that the subject intends to spell.
In this setting, we address the target identification and propose a novel deep neural network (DNN) architecture.
The proposed DNN processes the multi-channel SSVEP with convolutions across the sub-bands of harmonics, channels, and time, and classifies at the fully connected layer.
arXiv Detail & Related papers (2020-11-17T11:11:19Z)
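A PyTorch sketch of the described pipeline (sub-band combination, then channel combination, then temporal convolution, then a fully connected classifier). Layer widths and input dimensions are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class SSVEPNet(nn.Module):
    """Convolve across harmonic sub-bands, then channels, then time,
    and classify with a fully connected layer."""
    def __init__(self, n_subbands=3, n_channels=9, n_samples=125, n_classes=40):
        super().__init__()
        self.subband = nn.Conv2d(n_subbands, 1, kernel_size=1)
        self.channel = nn.Conv2d(1, 120, kernel_size=(n_channels, 1))
        self.time = nn.Conv2d(120, 120, kernel_size=(1, 2), stride=(1, 2))
        self.fc = nn.Linear(120 * (n_samples // 2), n_classes)

    def forward(self, x):          # x: (batch, subbands, channels, time)
        x = self.subband(x)        # -> (batch, 1, channels, time)
        x = self.channel(x)        # -> (batch, 120, 1, time)
        x = torch.relu(self.time(x))
        return self.fc(x.flatten(1))
```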
- Transfer Learning and SpecAugment applied to SSVEP Based BCI Classification [1.9336815376402716]
We use deep convolutional neural networks (DCNNs) to classify EEG signals in a single-channel brain-computer interface (BCI).
EEG signals were converted to spectrograms and served as input to train DCNNs using the transfer learning technique.
arXiv Detail & Related papers (2020-10-08T00:30:12Z)
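A minimal sketch of the two preprocessing steps named above: converting an EEG trial to a spectrogram and applying SpecAugment-style masking. Window sizes and mask widths are illustrative assumptions:

```python
import numpy as np
from scipy.signal import spectrogram

def eeg_to_spectrogram(signal, fs=250):
    """Convert a single-channel EEG trial into a spectrogram image."""
    _, _, sxx = spectrogram(signal, fs=fs, nperseg=64, noverlap=48)
    return np.log(sxx + 1e-10)

def spec_augment(spec, f_width=8, t_width=10, seed=None):
    """SpecAugment-style masking: zero out one random frequency band
    and one random time span."""
    rng = np.random.default_rng(seed)
    spec = spec.copy()
    f0 = rng.integers(0, max(1, spec.shape[0] - f_width))
    t0 = rng.integers(0, max(1, spec.shape[1] - t_width))
    spec[f0:f0 + f_width, :] = 0.0  # frequency mask
    spec[:, t0:t0 + t_width] = 0.0  # time mask
    return spec
```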