EEG-based AI-BCI Wheelchair Advancement: A Brain-Computer Interfacing Wheelchair System Using Machine Learning Mechanism with Right and Left Voluntary Hand Movement
- URL: http://arxiv.org/abs/2410.09763v1
- Date: Sun, 13 Oct 2024 07:41:37 GMT
- Title: EEG-based AI-BCI Wheelchair Advancement: A Brain-Computer Interfacing Wheelchair System Using Machine Learning Mechanism with Right and Left Voluntary Hand Movement
- Authors: Biplov Paneru, Bishwash Paneru, Khem Narayan Poudyal
- Abstract summary: The system is designed to simulate wheelchair navigation based on voluntary right and left-hand movements.
Various machine learning models, including Support Vector Machines (SVM), XGBoost, random forest, and a Bi-directional Long Short-Term Memory (Bi-LSTM) attention-based model, were developed.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a novel Artificial Intelligence (AI)-integrated approach to Brain-Computer Interface (BCI)-based wheelchair development, using voluntary right- and left-hand movements for control. The system is designed to simulate wheelchair navigation from these movements using electroencephalogram (EEG) data. A pre-filtered dataset, obtained from an open-source EEG repository, was segmented into 19x200 arrays to capture the onset of hand movements; the data were acquired at a sampling frequency of 200 Hz in a laboratory experiment. The system integrates a Tkinter-based interface for simulating wheelchair movements, offering users a functional and intuitive control system. Various machine learning models, including Support Vector Machines (SVM), XGBoost, random forest, and a Bi-directional Long Short-Term Memory (Bi-LSTM) attention-based model, were developed. The random forest model obtained 79% accuracy, while logistic regression outperformed the other models with 92% accuracy, followed by a Multi-Layer Perceptron (MLP) at 91%. The Bi-LSTM attention-based model achieved a mean cross-validation accuracy of 86%, showcasing the potential of attention mechanisms in BCI applications.
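The pipeline the abstract describes is easy to illustrate. The following is a minimal sketch, not the authors' code: it assumes the recordings arrive as a NumPy array of shape (channels, samples), cuts them into the 19x200 windows described above (one second at 200 Hz), and compares a few of the named classifiers via scikit-learn. The dataset stand-ins, onset annotations, and hyperparameters are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code): segment 19-channel EEG sampled at
# 200 Hz into 19x200 one-second windows and compare classical classifiers.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

FS = 200          # sampling frequency (Hz), per the abstract
N_CHANNELS = 19   # EEG channels
WIN = 200         # samples per window -> 19x200 arrays

def segment(eeg, onsets):
    """Cut one (19, 200) window starting at each movement-onset sample index."""
    return np.stack([eeg[:, t:t + WIN] for t in onsets if t + WIN <= eeg.shape[1]])

# Hypothetical stand-ins for the open-source dataset and its annotations
# (0 = left-hand movement, 1 = right-hand movement).
rng = np.random.default_rng(0)
eeg = rng.standard_normal((N_CHANNELS, 60 * FS))      # one minute of EEG
onsets = np.arange(0, 59 * FS, FS)                    # one onset per second
labels = rng.integers(0, 2, size=len(onsets))

windows = segment(eeg, onsets)                        # (n_windows, 19, 200)
X = windows.reshape(len(windows), -1)                 # flatten each window
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(n_estimators=200)),
                    ("SVM", SVC(kernel="rbf"))]:
    model.fit(X_tr, y_tr)
    print(f"{name}: {accuracy_score(y_te, model.predict(X_te)):.2f}")
```

The Tkinter simulation can be hedged the same way: the abstract only says predictions drive a simulated wheelchair, so the sketch below maps a predicted class to a left or right nudge of a marker on a canvas. The widget layout, polling interval, and `predict_next_window` stub are assumptions.

```python
# Minimal Tkinter sketch (an assumption, not the paper's interface): a
# predicted class (0 = left, 1 = right) nudges a "wheelchair" marker.
import random
import tkinter as tk

def predict_next_window():
    # Stand-in for classifying the latest 19x200 EEG window.
    return random.choice([0, 1])

root = tk.Tk()
root.title("BCI wheelchair simulation (sketch)")
canvas = tk.Canvas(root, width=400, height=100, bg="white")
canvas.pack()
chair = canvas.create_rectangle(190, 40, 210, 60, fill="steelblue")

def step():
    dx = -10 if predict_next_window() == 0 else 10    # left vs. right hand
    canvas.move(chair, dx, 0)
    root.after(500, step)                             # poll every 500 ms

step()
root.mainloop()
```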
Related papers
- Benchmarking Adaptive Intelligence and Computer Vision on Human-Robot Collaboration
Human-Robot Collaboration (HRC) is vital in Industry 4.0, using sensors, digital twins, collaborative robots (cobots), and intention-recognition models to enable efficient manufacturing processes.
We address concept drift by integrating Adaptive Intelligence and self-labeling to improve the resilience of intention-recognition in an HRC system.
arXiv Detail & Related papers (2024-09-30T01:25:48Z)
- EEG Right & Left Voluntary Hand Movement-based Virtual Brain-Computer Interfacing Keyboard with Machine Learning and a Hybrid Bi-Directional LSTM-GRU Model
This study focuses on EEG-based BMI for detecting keystrokes.
It aims to develop a reliable brain-computer interface (BCI) to simulate and anticipate keystrokes.
arXiv Detail & Related papers (2024-08-18T02:10:29Z) - Enhancing Precision in Tactile Internet-Enabled Remote Robotic Surgery: Kalman Filter Approach [0.0]
This paper presents a computationally efficient Kalman Filter (KF)-based position estimation method.
The study also assumes no prior knowledge of the dynamic system model of the robotic arm.
We investigate the effectiveness of the KF in determining the position of the Patient Side Manipulator (PSM) under simulated network conditions.
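As a generic illustration of the filtering step this entry names (not the paper's method), a constant-velocity Kalman filter can smooth noisy position samples; the state model, sample period, and noise covariances below are all assumptions.

```python
# Generic constant-velocity Kalman filter sketch (an illustration of the
# technique named above, not the paper's method); 1-D position + velocity.
import numpy as np

dt = 0.01                                  # assumed sample period (s)
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition (pos, vel)
H = np.array([[1.0, 0.0]])                 # we observe position only
Q = 1e-4 * np.eye(2)                       # assumed process noise
R = np.array([[1e-2]])                     # assumed measurement noise

x = np.zeros((2, 1))                       # initial state estimate
P = np.eye(2)                              # initial covariance

def kf_step(z):
    """One predict/update cycle for a noisy position measurement z."""
    global x, P
    x = F @ x                              # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (np.array([[z]]) - H @ x)  # update with measurement
    P = (np.eye(2) - K @ H) @ P
    return float(x[0, 0])                  # filtered position

# Hypothetical noisy track of a slowly moving target.
for t in range(100):
    true_pos = 0.5 * t * dt
    noisy = true_pos + np.random.normal(scale=0.1)
    print(f"t={t:3d}  measured={noisy:+.3f}  filtered={kf_step(noisy):+.3f}")
```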
arXiv Detail & Related papers (2024-06-06T20:56:53Z)
- Comparison of gait phase detection using traditional machine learning and deep learning techniques
This study proposes several Machine Learning (ML) based models trained on lower-limb EMG data for human walking.
The results show up to 75% average accuracy for the traditional ML models and 79% for the Deep Learning (DL) model.
arXiv Detail & Related papers (2024-03-07T10:05:09Z)
- Robot Learning with Sensorimotor Pre-training
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z)
- Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse Data using a Learning-based Unscented Kalman Filter
We learn the residual errors between a dynamic and/or simulator model and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
arXiv Detail & Related papers (2022-09-07T15:15:12Z)
- ProcTHOR: Large-Scale Embodied AI Using Procedural Generation
ProcTHOR is a framework for procedural generation of Embodied AI environments.
We demonstrate state-of-the-art results across 6 embodied AI benchmarks for navigation, rearrangement, and arm manipulation.
arXiv Detail & Related papers (2022-06-14T17:09:35Z)
- Wheelchair automation by a hybrid BCI system using SSVEP and eye blinks
The prototype is based on a combined mechanism of steady-state visually evoked potential and eye blinks.
The prototype can be used efficiently in a home environment without causing any discomfort to the user.
arXiv Detail & Related papers (2021-06-10T08:02:31Z)
- One to Many: Adaptive Instrument Segmentation via Meta Learning and Dynamic Online Adaptation in Robotic Surgical Video
MDAL is a dynamic online adaptive learning scheme for instrument segmentation in robot-assisted surgery.
It learns general knowledge of instruments and a fast adaptation ability through a video-specific meta-learning paradigm.
It outperforms other state-of-the-art methods on two datasets.
arXiv Detail & Related papers (2021-03-24T05:02:18Z)
- Domain Adaptive Robotic Gesture Recognition with Unsupervised Kinematic-Visual Data Alignment
We propose a novel unsupervised domain adaptation framework which can simultaneously transfer multi-modality knowledge, i.e., both kinematic and visual data, from simulator to real robot.
It remedies the domain gap with enhanced transferable features by using temporal cues in videos and the inherent correlations in multi-modal data for gesture recognition.
Results show that our approach recovers performance with improvement gains of up to 12.91% in accuracy (ACC) and 20.16% in F1 score, without using any annotations on the real robot.
arXiv Detail & Related papers (2021-03-06T09:10:03Z)
- A Driving Behavior Recognition Model with Bi-LSTM and Multi-Scale CNN
We propose a neural network model based on trajectory information for driving behavior recognition.
We evaluate the proposed model on the public BLVD dataset, achieving satisfactory performance.
arXiv Detail & Related papers (2021-03-01T06:47:29Z)