End-to-End Deep Transfer Learning for Calibration-free Motor Imagery
Brain Computer Interfaces
- URL: http://arxiv.org/abs/2307.12827v1
- Date: Mon, 24 Jul 2023 14:24:17 GMT
- Title: End-to-End Deep Transfer Learning for Calibration-free Motor Imagery
Brain Computer Interfaces
- Authors: Maryam Alimardani and Steven Kocken and Nikki Leeuwis
- Abstract summary: A major issue in Motor Imagery Brain-Computer Interfaces (MI-BCIs) is their poor classification accuracy and the large amount of data required for subject-specific calibration.
This study employed deep transfer learning for the development of calibration-free, subject-independent BCIs.
Three deep learning models (MIN2Net, EEGNet and DeepConvNet) were trained and compared using an openly available dataset.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A major issue in Motor Imagery Brain-Computer Interfaces (MI-BCIs) is their
poor classification accuracy and the large amount of data that is required for
subject-specific calibration. This makes BCIs less accessible to general users
in out-of-the-lab applications. This study employed deep transfer learning for
the development of calibration-free, subject-independent MI-BCI classifiers. Unlike
earlier works that applied signal preprocessing and feature engineering steps
in transfer learning, this study adopted an end-to-end deep learning approach
on raw EEG signals. Three deep learning models (MIN2Net, EEGNet and
DeepConvNet) were trained and compared using an openly available dataset. The
dataset contained EEG signals from 55 subjects who conducted a left- vs.
right-hand motor imagery task. To evaluate the performance of each model, a
leave-one-subject-out cross validation was used. The results of the models
differed significantly. MIN2Net was not able to differentiate right- vs.
left-hand motor imagery of new users, with a median accuracy of 51.7%. The
other two models performed better, with median accuracies of 62.5% for EEGNet
and 59.2% for DeepConvNet. These accuracies fall short of the 70% threshold
required for significant control; however, they are comparable to the
accuracies these models achieve when tested on other datasets without transfer
learning.
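
The evaluation protocol described above lends itself to a short sketch. Below is a minimal, hypothetical PyTorch illustration of leave-one-subject-out (LOSO) cross-validation on raw EEG: the compact CNN is only a stand-in for MIN2Net/EEGNet/DeepConvNet, and the synthetic `load_subject` loader, channel/sample counts, and hyperparameters are illustrative assumptions, not the authors' exact setup.

```python
# Hypothetical sketch of leave-one-subject-out (LOSO) cross-validation on
# raw EEG. All shapes and the synthetic loader are assumptions.
import numpy as np
import torch
import torch.nn as nn

N_SUBJECTS, TRIALS, CHANNELS, SAMPLES = 55, 40, 8, 500  # assumed dimensions

def load_subject(s):
    # Placeholder loader: a real run would return that subject's raw EEG
    # trials, shaped (trials, 1, channels, samples), plus binary MI labels.
    rng = np.random.default_rng(s)
    X = rng.standard_normal((TRIALS, 1, CHANNELS, SAMPLES)).astype("float32")
    y = rng.integers(0, 2, TRIALS)  # 0 = left hand, 1 = right hand
    return torch.from_numpy(X), torch.from_numpy(y)

class CompactEEGCNN(nn.Module):
    # EEGNet-style stand-in: temporal filtering, then a spatial filter
    # across channels, then pooling and a linear classifier.
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, (1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(8),
            nn.Conv2d(8, 16, (CHANNELS, 1), bias=False),
            nn.BatchNorm2d(16), nn.ELU(),
            nn.AvgPool2d((1, 8)), nn.Dropout(0.5),
        )
        self.classify = nn.LazyLinear(2)

    def forward(self, x):
        return self.classify(self.features(x).flatten(1))

accs = []
for test_subj in range(N_SUBJECTS):
    # Train on all subjects except the held-out one.
    train = [load_subject(s) for s in range(N_SUBJECTS) if s != test_subj]
    Xtr = torch.cat([X for X, _ in train])
    ytr = torch.cat([y for _, y in train])
    Xte, yte = load_subject(test_subj)

    model = CompactEEGCNN()
    model(Xtr[:1])  # dry run to materialize the lazy classifier layer
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(5):  # a real run would train much longer, with shuffling
        for i in range(0, len(Xtr), 32):
            opt.zero_grad()
            loss_fn(model(Xtr[i:i + 32]), ytr[i:i + 32]).backward()
            opt.step()

    model.eval()
    with torch.no_grad():
        accs.append((model(Xte).argmax(1) == yte).float().mean().item())

print(f"median LOSO accuracy: {np.median(accs):.3f}")
```

In each fold the model is trained from scratch on the 54 remaining subjects and evaluated once on the held-out subject, which is what makes the resulting accuracy an estimate of calibration-free performance on an entirely new user.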
Related papers
- Building Math Agents with Multi-Turn Iterative Preference Learning [56.71330214021884]
This paper studies the complementary direct preference learning approach to further improve model performance.
Existing direct preference learning algorithms are originally designed for the single-turn chat task.
We introduce a multi-turn direct preference learning framework, tailored for this context.
arXiv Detail & Related papers (2024-09-04T02:41:04Z)
- DataComp-LM: In search of the next generation of training sets for language models [200.5293181577585]
DataComp for Language Models (DCLM) is a testbed for controlled dataset experiments with the goal of improving language models.
We provide a standardized corpus of 240T tokens extracted from Common Crawl, effective pretraining recipes based on the OpenLM framework, and a broad suite of 53 downstream evaluations.
Participants in the DCLM benchmark can experiment with data curation strategies such as deduplication, filtering, and data mixing at model scales ranging from 412M to 7B parameters.
arXiv Detail & Related papers (2024-06-17T17:42:57Z)
- Transfer Learning between Motor Imagery Datasets using Deep Learning -- Validation of Framework and Comparison of Datasets [0.0]
We present a simple deep learning-based framework commonly used in computer vision.
We demonstrate its effectiveness for cross-dataset transfer learning in mental imagery decoding tasks.
arXiv Detail & Related papers (2023-09-04T20:58:57Z)
- Deep comparisons of Neural Networks from the EEGNet family [0.0]
We compared five well-known neural networks (Shallow ConvNet, Deep ConvNet, EEGNet, EEGNet Fusion, MI-EEGNet) on open-access databases with many subjects, in addition to the BCI Competition 4 2a dataset.
Our metrics showed that researchers should not overlook Shallow ConvNet and Deep ConvNet, as they can perform better than the later-published members of the EEGNet family.
arXiv Detail & Related papers (2023-02-17T10:39:09Z)
- A Hybrid Brain-Computer Interface Using Motor Imagery and SSVEP Based on Convolutional Neural Network [0.9176056742068814]
We propose a two-stream convolutional neural network (TSCNN) based hybrid brain-computer interface.
It combines steady-state visual evoked potential (SSVEP) and motor imagery (MI) paradigms.
TSCNN automatically learns to extract EEG features in the two paradigms in the training process.
arXiv Detail & Related papers (2022-12-10T12:34:36Z)
- Adapting the Mean Teacher for keypoint-based lung registration under geometric domain shifts [75.51482952586773]
Deep neural networks generally require plenty of labeled training data and are vulnerable to domain shifts between training and test data.
We present a novel approach to geometric domain adaptation for image registration, adapting a model from a labeled source to an unlabeled target domain.
Our method consistently improves on the baseline model by 50%/47%, even matching the accuracy of models trained on target data.
arXiv Detail & Related papers (2022-07-01T12:16:42Z)
- LGD: Label-guided Self-distillation for Object Detection [59.9972914042281]
We propose the first self-distillation framework for general object detection, termed LGD (Label-Guided self-Distillation).
Our framework involves sparse label-appearance encoding, inter-object relation adaptation and intra-object knowledge mapping to obtain the instructive knowledge.
Compared with FGFI, a classical teacher-based method, LGD not only performs better without requiring a pretrained teacher but also incurs 51% lower training cost beyond inherent student learning.
arXiv Detail & Related papers (2021-09-23T16:55:01Z)
- EEG-Inception: An Accurate and Robust End-to-End Neural Network for EEG-based Motor Imagery Classification [123.93460670568554]
This paper proposes a novel convolutional neural network (CNN) architecture for accurate and robust EEG-based motor imagery (MI) classification.
The proposed CNN model, namely EEG-Inception, is built on the backbone of the Inception-Time network.
The proposed network is an end-to-end classifier, as it takes the raw EEG signals as input and does not require complex EEG signal preprocessing.
arXiv Detail & Related papers (2021-01-24T19:03:10Z)
- EqCo: Equivalent Rules for Self-supervised Contrastive Learning [81.45848885547754]
We propose a method to make self-supervised learning insensitive to the number of negative samples in InfoNCE-based contrastive learning frameworks.
Inspired by the InfoMax principle, we point out that the margin term in the contrastive loss needs to be adaptively scaled according to the number of negative pairs.
arXiv Detail & Related papers (2020-10-05T11:39:04Z)
- EEG-TCNet: An Accurate Temporal Convolutional Network for Embedded Motor-Imagery Brain-Machine Interfaces [15.07343602952606]
We propose EEG-TCNet, a novel temporal convolutional network (TCN) that achieves outstanding accuracy while requiring few trainable parameters.
Its low memory footprint and low computational complexity for inference make it suitable for embedded classification on resource-limited devices at the edge.
arXiv Detail & Related papers (2020-05-31T21:45:45Z)
- An Accurate EEGNet-based Motor-Imagery Brain-Computer Interface for Low-Power Edge Computing [13.266626571886354]
This paper presents an accurate and robust embedded motor-imagery brain-computer interface (MI-BCI).
The proposed novel model, based on EEGNet, matches the memory-footprint and computational-resource requirements of low-power microcontroller units (MCUs).
The scaled models are deployed on a commercial Cortex-M4F MCU, taking 101 ms and consuming 4.28 mJ per inference for the smallest model, and on a Cortex-M7, taking 44 ms and 18.1 mJ per inference for the medium-sized model.
arXiv Detail & Related papers (2020-03-31T19:52:05Z)
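
As a quick sanity check on the latency and energy figures in the entry above, average power follows directly from energy per inference divided by latency (1 mJ / 1 ms = 1 W). The input numbers below are taken from that summary; the resulting wattages are derived, approximate values, not figures reported by the paper.

```python
# Average power implied by the reported per-inference energy and latency.
small_energy_mj, small_latency_ms = 4.28, 101    # Cortex-M4F, smallest model
medium_energy_mj, medium_latency_ms = 18.1, 44   # Cortex-M7, medium model

# P = E / t; mJ divided by ms gives W, so scale by 1000 to report milliwatts.
print(f"Cortex-M4F: {small_energy_mj / small_latency_ms * 1e3:.1f} mW")   # ~42.4 mW
print(f"Cortex-M7:  {medium_energy_mj / medium_latency_ms * 1e3:.1f} mW") # ~411.4 mW
```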
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.