EEG Right & Left Voluntary Hand Movement-based Virtual Brain-Computer Interfacing Keyboard with Machine Learning and a Hybrid Bi-Directional LSTM-GRU Model
- URL: http://arxiv.org/abs/2409.00035v1
- Date: Sun, 18 Aug 2024 02:10:29 GMT
- Title: EEG Right & Left Voluntary Hand Movement-based Virtual Brain-Computer Interfacing Keyboard with Machine Learning and a Hybrid Bi-Directional LSTM-GRU Model
- Authors: Biplov Paneru, Bishwash Paneru, Sanjog Chhetri Sapkota
- Abstract summary: This study focuses on EEG-based BMI for detecting keystrokes.
It aims to develop a reliable brain-computer interface (BCI) to simulate and anticipate keystrokes.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study focuses on an EEG-based BMI for detecting voluntary keystrokes, aiming to develop a reliable brain-computer interface (BCI) to simulate and anticipate keystrokes, especially for individuals with motor impairments. The methodology includes extensive segmentation, event alignment, ERP plot analysis, and signal analysis. Models are trained to classify EEG data into three categories: `resting state' (0), `d' key press (1), and `l' key press (2). Real-time keypress simulation based on neural activity is enabled through integration with a tkinter-based graphical user interface. Feature engineering used ERP windows, and an SVC model achieved 90.42% accuracy in event classification. Further models were developed for BCI keyboard simulation: an MLP (89% accuracy), CatBoost (87.39%), KNN (72.59%), Gaussian Naive Bayes (79.21%), logistic regression (90.81%), and a novel bi-directional LSTM-GRU hybrid model (89%). Finally, a GUI was created to predict and simulate keystrokes using the trained MLP model.
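The three-class pipeline sketched in the abstract (resting state → 0, `d` → 1, `l` → 2, followed by keypress simulation) can be illustrated roughly as below. This is a minimal sketch, not the paper's code: the summary features, the nearest-centroid stand-in for the SVC/MLP classifiers, and the synthetic `make_window` generator are all illustrative assumptions, and the tkinter GUI is omitted.

```python
import random
import statistics

# Class labels used in the paper: 0 = resting state, 1 = 'd', 2 = 'l'.
KEY_MAP = {0: None, 1: "d", 2: "l"}

def erp_window_features(window):
    """Reduce an ERP window (a list of samples) to simple summary features."""
    return [statistics.mean(window), statistics.pstdev(window), max(window) - min(window)]

class NearestCentroid:
    """Illustrative stand-in for the SVC/MLP classifiers used in the paper."""
    def fit(self, X, y):
        self.centroids = {}
        for label in set(y):
            rows = [x for x, lab in zip(X, y) if lab == label]
            self.centroids[label] = [statistics.mean(col) for col in zip(*rows)]
        return self

    def predict(self, x):
        # Assign the label whose centroid is closest in squared distance.
        def dist(c):
            return sum((a - b) ** 2 for a, b in zip(x, c))
        return min(self.centroids, key=lambda lab: dist(self.centroids[lab]))

def simulate_keystroke(label):
    """Map a predicted class to a simulated keypress (GUI integration omitted)."""
    return KEY_MAP[label]

# Synthetic ERP windows: each class gets a distinct amplitude offset.
random.seed(0)
def make_window(offset):
    return [offset + random.gauss(0, 0.1) for _ in range(64)]

X, y = [], []
for label, offset in [(0, 0.0), (1, 1.0), (2, 2.0)]:
    for _ in range(20):
        X.append(erp_window_features(make_window(offset)))
        y.append(label)

clf = NearestCentroid().fit(X, y)
pred = clf.predict(erp_window_features(make_window(1.0)))
print(simulate_keystroke(pred))  # 'd' for a class-1-like window
```

In the paper the predicted label drives a tkinter GUI in real time; here the mapping is reduced to a plain function so the classification step stands alone.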
Related papers
- CEReBrO: Compact Encoder for Representations of Brain Oscillations Using Efficient Alternating Attention
We introduce a Compact Encoder for Representations of Brain Oscillations using alternating attention (CEReBrO).
Our tokenization scheme represents EEG signals as per-channel patches.
We propose an alternating attention mechanism that jointly models intra-channel temporal dynamics and inter-channel spatial correlations, achieving 2x speed improvement with 6x less memory required compared to standard self-attention.
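The alternating-attention idea summarized above (intra-channel temporal attention, then inter-channel spatial attention) can be sketched with a minimal single-head example. The identity query/key/value projections and the toy tensor shapes are assumptions made for brevity, not CEReBrO's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # x: (tokens, dim); single head, identity Q/K/V projections for simplicity.
    scores = softmax(x @ x.T / np.sqrt(x.shape[-1]))
    return scores @ x

def alternating_attention(eeg):
    # eeg: (channels, time, dim)
    # 1) intra-channel temporal attention: attend over time within each channel.
    temporal = np.stack([self_attention(ch) for ch in eeg])
    # 2) inter-channel spatial attention: attend over channels at each time step.
    spatial = np.stack(
        [self_attention(temporal[:, t]) for t in range(eeg.shape[1])], axis=1
    )
    return spatial

rng = np.random.default_rng(0)
out = alternating_attention(rng.normal(size=(4, 8, 16)))
print(out.shape)  # (4, 8, 16)
```

Attending over two short axes in turn, rather than over all channel-time tokens at once, is what yields the speed and memory savings the summary reports relative to full self-attention.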
arXiv Detail & Related papers (2025-01-18T21:44:38Z)
- Hybrid Quantum Deep Learning Model for Emotion Detection using raw EEG Signal Analysis
This work presents a hybrid quantum deep learning technique for emotion recognition.
Conventional EEG-based emotion recognition techniques are limited by noise and high-dimensional data complexity.
The model will be extended to real-time applications and multi-class categorization in future work.
arXiv Detail & Related papers (2024-11-19T17:44:04Z)
- emg2qwerty: A Large Dataset with Baselines for Touch Typing using Surface Electromyography
emg2qwerty is a large-scale dataset of non-invasive electromyographic signals recorded at the wrists while touch typing on a QWERTY keyboard.
With 1,135 sessions spanning 108 users and 346 hours of recording, this is the largest such public dataset to date.
We show strong baseline performance on predicting key-presses using sEMG signals alone.
arXiv Detail & Related papers (2024-10-26T05:18:48Z)
- EEG Emotion Copilot: Optimizing Lightweight LLMs for Emotional EEG Interpretation with Assisted Medical Record Generation
This paper presents the EEG Emotion Copilot, which first recognizes emotional states directly from EEG signals.
It then generates personalized diagnostic and treatment suggestions, and finally supports the automation of assisted electronic medical records.
The proposed copilot is expected to advance the application of affective computing in the medical domain.
arXiv Detail & Related papers (2024-09-30T19:15:05Z)
- On-device Learning of EEGNet-based Network For Wearable Motor Imagery Brain-Computer Interface
This paper implements a lightweight and efficient on-device learning engine for wearable motor imagery recognition.
We demonstrate a remarkable accuracy gain of up to 7.31% with respect to the baseline with a memory footprint of 15.6 KByte.
Our tailored approach achieves an inference time of 14.9 ms at 0.76 mJ per inference, and 20 us at 0.83 uJ per update during online training.
arXiv Detail & Related papers (2024-08-25T08:23:51Z)
- Enhancing EEG-to-Text Decoding through Transferable Representations from Pre-trained Contrastive EEG-Text Masked Autoencoder
We propose Contrastive EEG-Text Masked Autoencoder (CET-MAE), a novel model that orchestrates compound self-supervised learning across and within EEG and text.
We also develop a framework called E2T-PTR (EEG-to-Text decoding using Pretrained Transferable Representations) to decode text from EEG sequences.
arXiv Detail & Related papers (2024-02-27T11:45:21Z)
- A Convolutional Spiking Network for Gesture Recognition in Brain-Computer Interfaces
We propose a simple yet efficient machine learning-based approach for the exemplary problem of hand gesture classification based on brain signals.
We demonstrate that this approach generalizes to different subjects with both EEG and ECoG data and achieves superior accuracy in the range of 92.74-97.07%.
arXiv Detail & Related papers (2023-04-21T16:23:40Z)
- A Hybrid Brain-Computer Interface Using Motor Imagery and SSVEP Based on Convolutional Neural Network
We propose a two-stream convolutional neural network (TSCNN) based hybrid brain-computer interface.
It combines steady-state visual evoked potential (SSVEP) and motor imagery (MI) paradigms.
TSCNN automatically learns to extract EEG features in the two paradigms in the training process.
arXiv Detail & Related papers (2022-12-10T12:34:36Z)
- 2021 BEETL Competition: Advancing Transfer Learning for Subject Independence & Heterogenous EEG Data Sets
We design two transfer learning challenges around diagnostics and Brain-Computer Interfacing (BCI).
Task 1 is centred on medical diagnostics, addressing automatic sleep stage annotation across subjects.
Task 2 is centred on Brain-Computer Interfacing (BCI), addressing motor imagery decoding across both subjects and data sets.
arXiv Detail & Related papers (2022-02-14T12:12:20Z)
- EEG-Inception: An Accurate and Robust End-to-End Neural Network for EEG-based Motor Imagery Classification
This paper proposes a novel convolutional neural network (CNN) architecture for accurate and robust EEG-based motor imagery (MI) classification.
The proposed CNN model, namely EEG-Inception, is built on the backbone of the Inception-Time network.
The proposed network is an end-to-end classifier: it takes raw EEG signals as input and does not require complex EEG signal preprocessing.
arXiv Detail & Related papers (2021-01-24T19:03:10Z)
- EEG-based Brain-Computer Interfaces (BCIs): A Survey of Recent Studies on Signal Sensing Technologies and Computational Intelligence Approaches and their Applications
Brain-Computer Interface (BCI) is a powerful communication tool between users and systems.
Recent technological advances have increased interest in electroencephalographic (EEG) based BCI for translational and healthcare applications.
arXiv Detail & Related papers (2020-01-28T10:36:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.