A Real-Time BCI for Stroke Hand Rehabilitation Using Latent EEG Features from Healthy Subjects
- URL: http://arxiv.org/abs/2510.15890v1
- Date: Sun, 07 Sep 2025 22:19:03 GMT
- Title: A Real-Time BCI for Stroke Hand Rehabilitation Using Latent EEG Features from Healthy Subjects
- Authors: F. M. Omar, A. M. Omar, K. H. Eyada, M. Rabie, M. A. Kamel, A. M. Azab
- Abstract summary: This study presents a real-time, portable brain-computer interface (BCI) system designed to support hand rehabilitation for stroke patients. The system combines a low-cost 3D-printed robotic exoskeleton with an embedded controller that converts brain signals into physical hand movements.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study presents a real-time, portable brain-computer interface (BCI) system designed to support hand rehabilitation for stroke patients. The system combines a low-cost 3D-printed robotic exoskeleton with an embedded controller that converts brain signals into physical hand movements. EEG signals are recorded using a 14-channel Emotiv EPOC+ headset and processed through a supervised convolutional autoencoder (CAE) to extract meaningful latent features from single-trial data. The model is trained on publicly available EEG data from healthy individuals (WAY-EEG-GAL dataset), with electrode mapping adapted to match the Emotiv headset layout. Among several tested classifiers, AdaBoost achieved the highest accuracy (89.3%) and F1-score (0.89) in offline evaluations. The system was also tested in real time on five healthy subjects, achieving classification accuracies between 60% and 86%. The complete pipeline - EEG acquisition, signal processing, classification, and robotic control - is deployed on an NVIDIA Jetson Nano platform with a real-time graphical interface. These results demonstrate the system's potential as a low-cost, standalone solution for home-based neurorehabilitation.
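The latent-feature-plus-classifier stage described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' code: PCA stands in for the supervised CAE encoder, and synthetic arrays replace the WAY-EEG-GAL recordings; only the 14-channel layout (as on the Emotiv EPOC+) and the choice of AdaBoost are taken from the abstract.

```python
# Hypothetical sketch of the feature-extraction + classification stage.
# PCA is a stand-in for the supervised CAE encoder; data are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 200, 14, 128   # 14 channels, as on the EPOC+

# Flattened single-trial EEG windows with a small class-dependent offset
X = rng.standard_normal((n_trials, n_channels * n_samples))
y = rng.integers(0, 2, size=n_trials)            # 0 = rest, 1 = movement intent
X[y == 1] += 0.5

# Compress each trial into a low-dimensional latent vector (CAE stand-in)
latent = PCA(n_components=32, random_state=0).fit_transform(X)

# AdaBoost was the best-performing offline classifier reported in the abstract
X_tr, X_te, y_tr, y_te = train_test_split(latent, y, test_size=0.25, random_state=0)
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

In the deployed system this classification step would run on the Jetson Nano, with the predicted class driving the exoskeleton controller.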
Related papers
- EEG Emotion Recognition Through Deep Learning [0.0]
The model achieved a testing accuracy of 91%, outperforming traditional models such as SVM, DNN, and Logistic Regression. The model reduces the EEG hardware requirements by leveraging only 5 of the 62 electrodes. This advancement lays the groundwork for future exploration into mood changes induced by media content consumption.
arXiv Detail & Related papers (2025-11-19T22:14:05Z) - Graph Attention Networks for Detecting Epilepsy from EEG Signals Using Accessible Hardware in Low-Resource Settings [45.62331048595689]
Epilepsy remains under-diagnosed in low-income countries due to scarce neurologists and costly diagnostic tools. We propose a graph-based deep learning framework to detect epilepsy from low-cost EEG hardware.
arXiv Detail & Related papers (2025-07-20T20:44:39Z) - BRAVE: Brain-Controlled Prosthetic Arm with Voice Integration and Embodied Learning for Enhanced Mobility [5.528262076322921]
BRAVE is a hybrid EEG- and voice-controlled prosthetic system. It aims to interpret EEG-driven motor intent, enabling movement control without reliance on residual muscle activity. The system operates in real time with a response latency of 150 ms.
arXiv Detail & Related papers (2025-05-23T11:44:33Z) - BrainOmni: A Brain Foundation Model for Unified EEG and MEG Signals [46.121056431476156]
This paper proposes BrainOmni, the first brain foundation model that generalises across heterogeneous EEG and MEG recordings. Existing approaches typically rely on separate, modality- and dataset-specific models, which limits performance and cross-domain scalability. A total of 1,997 hours of EEG and 656 hours of MEG data are curated and standardised from publicly available sources for pretraining.
arXiv Detail & Related papers (2025-05-18T14:07:14Z) - emg2qwerty: A Large Dataset with Baselines for Touch Typing using Surface Electromyography [47.160223334501126]
emg2qwerty is a large-scale dataset of non-invasive electromyographic signals recorded at the wrists while touch typing on a QWERTY keyboard. With 1,135 sessions spanning 108 users and 346 hours of recording, this is the largest such public dataset to date. We show strong baseline performance on predicting key presses using sEMG signals alone.
arXiv Detail & Related papers (2024-10-26T05:18:48Z) - On-device Learning of EEGNet-based Network For Wearable Motor Imagery Brain-Computer Interface [2.1710886744493263]
This paper implements a lightweight and efficient on-device learning engine for wearable motor imagery recognition.
We demonstrate a remarkable accuracy gain of up to 7.31% over the baseline, with a memory footprint of 15.6 kB.
Our tailored approach exhibits an inference time of 14.9 ms at 0.76 mJ per inference, and 20 µs at 0.83 µJ per update during online training.
arXiv Detail & Related papers (2024-08-25T08:23:51Z) - EKGNet: A 10.96 µW Fully Analog Neural Network for Intra-Patient Arrhythmia Classification [79.7946379395238]
We present an integrated approach by combining analog computing and deep learning for electrocardiogram (ECG) arrhythmia classification.
We propose EKGNet, a hardware-efficient, fully analog arrhythmia classification architecture that achieves high accuracy with low power consumption.
arXiv Detail & Related papers (2023-10-24T02:37:49Z) - DGSD: Dynamical Graph Self-Distillation for EEG-Based Auditory Spatial Attention Detection [49.196182908826565]
Auditory Attention Detection (AAD) aims to detect target speaker from brain signals in a multi-speaker environment.
Current approaches primarily rely on traditional convolutional neural networks designed for processing Euclidean data such as images.
This paper proposes a dynamical graph self-distillation (DGSD) approach for AAD, which does not require speech stimuli as input.
arXiv Detail & Related papers (2023-09-07T13:43:46Z) - FingerFlex: Inferring Finger Trajectories from ECoG signals [68.8204255655161]
The FingerFlex model is a convolutional encoder-decoder architecture adapted for finger-movement regression on electrocorticographic (ECoG) brain data.
State-of-the-art performance was achieved on a publicly available BCI competition IV dataset 4 with a correlation coefficient between true and predicted trajectories up to 0.74.
arXiv Detail & Related papers (2022-10-23T16:26:01Z) - Wheelchair automation by a hybrid BCI system using SSVEP and eye blinks [1.1099588962062936]
The prototype is based on a combined mechanism of steady-state visually evoked potential and eye blinks.
The prototype can be used efficiently in a home environment without causing any discomfort to the user.
arXiv Detail & Related papers (2021-06-10T08:02:31Z) - Convolutional Neural Networks for Automatic Detection of Artifacts from Independent Components Represented in Scalp Topographies of EEG Signals [9.088303226909279]
Artifacts due to eye movements and blinks, muscular/cardiac activity, and generic electrical disturbances have to be recognized and eliminated.
ICA effectively splits the signal into independent components (ICs), whose re-projections onto 2D scalp topographies (images) allow artifacts to be recognized and separated, also by UBS.
We present a completely automatic and effective framework for EEG artifact recognition from IC topoplots, based on 2D convolutional neural networks (CNNs).
Experiments have shown an overall accuracy above 98%, taking 1.4 s on a standard PC to classify 32 topoplots.
arXiv Detail & Related papers (2020-09-08T12:40:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.