A Hybrid Brain-Computer Interface Using Motor Imagery and SSVEP Based on
Convolutional Neural Network
- URL: http://arxiv.org/abs/2212.05289v1
- Date: Sat, 10 Dec 2022 12:34:36 GMT
- Title: A Hybrid Brain-Computer Interface Using Motor Imagery and SSVEP Based on
Convolutional Neural Network
- Authors: Wenwei Luo and Wanguang Yin and Quanying Liu and Youzhi Qu
- Abstract summary: We propose a two-stream convolutional neural network (TSCNN) based hybrid brain-computer interface.
It combines steady-state visual evoked potential (SSVEP) and motor imagery (MI) paradigms.
TSCNN automatically learns to extract EEG features in the two paradigms in the training process.
- Score: 0.9176056742068814
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The key to electroencephalography (EEG)-based brain-computer interface (BCI)
lies in neural decoding, and its accuracy can be improved by using hybrid BCI
paradigms, that is, fusing multiple paradigms. However, hybrid BCIs usually
require a separate processing pipeline for the EEG signals of each paradigm,
which greatly reduces the efficiency of EEG feature extraction and the
generalizability of the model. Here, we propose a two-stream convolutional
neural network (TSCNN) based hybrid brain-computer interface. It combines
steady-state visual evoked potential (SSVEP) and motor imagery (MI) paradigms.
TSCNN automatically learns to extract EEG features in the two paradigms in the
training process, and improves the decoding accuracy by 25.4% compared with the
MI mode and by 2.6% compared with the SSVEP mode on the test data. Moreover, the
versatility of TSCNN is verified as it provides considerable performance in
both single-mode (70.2% for MI, 93.0% for SSVEP) and hybrid-mode scenarios
(95.6% for MI-SSVEP hybrid). Our work will facilitate the real-world
applications of EEG-based BCI systems.
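To make the two-stream idea concrete, here is a minimal NumPy sketch. It is not the authors' TSCNN: the function names, kernel length, ReLU activation, and global-average-pooling fusion are all illustrative assumptions. One temporal-filter stream stands in for MI feature extraction, the other for SSVEP, and the two feature vectors are fused by concatenation before classification:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_feature(x, kernel):
    """Temporal convolution over each EEG channel, then ReLU and
    global average pooling, yielding one feature per channel.

    x: (channels, samples) EEG segment; kernel: (k,) temporal filter.
    """
    out = np.stack([np.convolve(ch, kernel, mode="valid") for ch in x])
    return np.maximum(out, 0).mean(axis=1)

def two_stream_features(x, mi_kernel, ssvep_kernel):
    """Run the MI stream and the SSVEP stream on the same segment and
    fuse their features by concatenation (the hybrid-mode input to a
    downstream classifier)."""
    return np.concatenate([conv1d_feature(x, mi_kernel),
                           conv1d_feature(x, ssvep_kernel)])

# Toy example: an 8-channel, 250-sample EEG segment.
x = rng.standard_normal((8, 250))
mi_kernel = rng.standard_normal(25)     # learned end-to-end in practice; random here
ssvep_kernel = rng.standard_normal(25)
feat = two_stream_features(x, mi_kernel, ssvep_kernel)
print(feat.shape)  # (16,): 8 MI-stream features + 8 SSVEP-stream features
```

In single-mode operation one stream's features could be used alone; in hybrid mode the concatenated vector feeds the shared classifier, which is the point of learning both paradigms' features in one training process.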
Related papers
- Dual-TSST: A Dual-Branch Temporal-Spectral-Spatial Transformer Model for EEG Decoding [2.0721229324537833]
We propose a novel decoding architecture with a dual-branch temporal-spectral-spatial transformer (Dual-TSST).
Our proposed Dual-TSST performs superiorly in various tasks, which achieves the promising EEG classification performance of average accuracy of 80.67%.
This study provides a new approach to high-performance EEG decoding, and has great potential for future CNN-Transformer based applications.
arXiv Detail & Related papers (2024-09-05T05:08:43Z)
- Prototype Learning Guided Hybrid Network for Breast Tumor Segmentation in DCE-MRI [58.809276442508256]
We propose a hybrid network that combines convolutional neural network (CNN) and transformer layers.
The experimental results on private and public DCE-MRI datasets demonstrate that the proposed hybrid network achieves superior performance to the state-of-the-art methods.
arXiv Detail & Related papers (2024-08-11T15:46:00Z)
- 3D-CLMI: A Motor Imagery EEG Classification Model via Fusion of 3D-CNN and LSTM with Attention [0.174048653626208]
This paper proposed a model that combined a three-dimensional convolutional neural network (CNN) with a long short-term memory (LSTM) network to classify motor imagery (MI) signals.
Experimental results showed that this model achieved a classification accuracy of 92.7% and an F1-score of 0.91 on the public dataset BCI Competition IV dataset 2a.
The model greatly improved the classification accuracy of users' motor imagery intentions, giving brain-computer interfaces better application prospects in emerging fields such as autonomous vehicles and medical rehabilitation.
arXiv Detail & Related papers (2023-12-20T03:38:24Z)
- EKGNet: A 10.96 µW Fully Analog Neural Network for Intra-Patient Arrhythmia Classification [79.7946379395238]
We present an integrated approach by combining analog computing and deep learning for electrocardiogram (ECG) arrhythmia classification.
We propose EKGNet, a hardware-efficient and fully analog arrhythmia classification architecture that achieves high accuracy with low power consumption.
arXiv Detail & Related papers (2023-10-24T02:37:49Z)
- Bayesian Neural Network Language Modeling for Speech Recognition [59.681758762712754]
State-of-the-art neural network language models (NNLMs) represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming highly complex.
In this paper, an overarching full Bayesian learning framework is proposed to account for the underlying uncertainty in LSTM-RNN and Transformer LMs.
arXiv Detail & Related papers (2022-08-28T17:50:19Z)
- EEG-BBNet: A Hybrid Framework for Brain Biometric using Graph Connectivity [1.1498015270151059]
We present EEG-BBNet, a hybrid network which integrates convolutional neural networks (CNN) with graph convolutional neural networks (GCNN)
Our models outperform all baselines in the event-related potential (ERP) task with average correct recognition rates of up to 99.26% using intra-session data.
arXiv Detail & Related papers (2022-08-17T10:18:22Z)
- Multiple Time Series Fusion Based on LSTM: An Application to CAP A Phase Classification Using EEG [56.155331323304]
Deep-learning-based feature-level fusion of electroencephalogram channels is carried out in this work.
Channel selection, fusion, and classification procedures were optimized by two optimization algorithms.
arXiv Detail & Related papers (2021-12-18T14:17:49Z)
- EEG-Inception: An Accurate and Robust End-to-End Neural Network for EEG-based Motor Imagery Classification [123.93460670568554]
This paper proposes a novel convolutional neural network (CNN) architecture for accurate and robust EEG-based motor imagery (MI) classification.
The proposed CNN model, namely EEG-Inception, is built on the backbone of the Inception-Time network.
The proposed network performs end-to-end classification, as it takes the raw EEG signals as input and does not require complex EEG signal preprocessing.
arXiv Detail & Related papers (2021-01-24T19:03:10Z)
- EEG-TCNet: An Accurate Temporal Convolutional Network for Embedded Motor-Imagery Brain-Machine Interfaces [15.07343602952606]
We propose EEG-TCNet, a novel temporal convolutional network (TCN) that achieves outstanding accuracy while requiring few trainable parameters.
Its low memory footprint and low computational complexity for inference make it suitable for embedded classification on resource-limited devices at the edge.
arXiv Detail & Related papers (2020-05-31T21:45:45Z)
- Towards a Competitive End-to-End Speech Recognition for CHiME-6 Dinner Party Transcription [73.66530509749305]
In this paper, we argue that, even in difficult cases, some end-to-end approaches show performance close to the hybrid baseline.
We experimentally compare and analyze CTC-Attention versus RNN-Transducer approaches along with RNN versus Transformer architectures.
Our best end-to-end model, based on RNN-Transducer together with improved beam search, achieves quality only 3.8% absolute WER worse than the LF-MMI TDNN-F CHiME-6 Challenge baseline.
arXiv Detail & Related papers (2020-04-22T19:08:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.