EEG-based AI-BCI Wheelchair Advancement: A Brain-Computer Interfacing Wheelchair System Using Machine Learning Mechanism with Right and Left Voluntary Hand Movement
- URL: http://arxiv.org/abs/2410.09763v1
- Date: Sun, 13 Oct 2024 07:41:37 GMT
- Title: EEG-based AI-BCI Wheelchair Advancement: A Brain-Computer Interfacing Wheelchair System Using Machine Learning Mechanism with Right and Left Voluntary Hand Movement
- Authors: Biplov Paneru, Bishwash Paneru, Khem Narayan Poudyal
- Abstract summary: The system is designed to simulate wheelchair navigation based on voluntary right and left-hand movements.
Various machine learning models, including Support Vector Machines (SVM), XGBoost, random forest, and a Bi-directional Long Short-Term Memory (Bi-LSTM) attention-based model, were developed.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a novel Artificial Intelligence (AI)-integrated approach to Brain-Computer Interface (BCI)-based wheelchair development, utilizing a voluntary right/left hand movement mechanism for control. The system is designed to simulate wheelchair navigation based on voluntary right and left-hand movements using electroencephalogram (EEG) data. A pre-filtered dataset, obtained from an open-source EEG repository, was segmented into 19x200 arrays to capture the onset of hand movements. The data were acquired at a sampling frequency of 200 Hz in the laboratory experiment. The system integrates a Tkinter-based interface for simulating wheelchair movements, offering users a functional and intuitive control system. Various machine learning models, including Support Vector Machines (SVM), XGBoost, random forest, and a Bi-directional Long Short-Term Memory (Bi-LSTM) attention-based model, were developed. The random forest model obtained 79% accuracy. The Logistic Regression model outperformed the other models with 92% accuracy, followed by the Multi-Layer Perceptron (MLP) model at 91%. The Bi-LSTM attention-based model achieved a mean accuracy of 86% through cross-validation, showcasing the potential of attention mechanisms in BCI applications.
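As a rough, non-authoritative illustration of the pipeline described in the abstract, the sketch below segments pre-filtered EEG into 19x200 windows, trains a scikit-learn classifier on right/left hand-movement labels, and maps predictions to simulated wheelchair commands. The helper names (`segment`, `to_command`), data handling, and hyperparameters are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: segment pre-filtered EEG into 19x200 windows (1 s at 200 Hz),
# train a classifier on right/left hand-movement labels, and map predictions
# to simulated wheelchair commands. Names and parameters are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

FS = 200          # sampling frequency (Hz) reported in the abstract
N_CHANNELS = 19   # assumed channel count implied by the 19x200 segments
WIN = 200         # samples per window (1 s at 200 Hz)

def segment(eeg, labels, onsets):
    """Cut 19x200 windows starting at each movement onset (hypothetical helper)."""
    X, y = [], []
    for onset, label in zip(onsets, labels):
        win = eeg[:, onset:onset + WIN]          # shape (19, 200)
        if win.shape == (N_CHANNELS, WIN):
            X.append(win.ravel())                # flatten for classical ML models
            y.append(label)                      # 0 = left hand, 1 = right hand
    return np.array(X), np.array(y)

# eeg: (19, n_samples) pre-filtered recording; onsets/labels from the dataset's markers
# X, y = segment(eeg, labels, onsets)
# X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
# clf.fit(X_tr, y_tr)
# print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))

def to_command(pred):
    """Map a prediction to a simulated wheelchair command (as in the Tkinter GUI)."""
    return "TURN_LEFT" if pred == 0 else "TURN_RIGHT"
```

Flattening each 19x200 window into a single feature vector is the simplest way to feed classical models such as logistic regression or SVM; the Bi-LSTM attention model reported in the abstract would instead consume the windows as (time, channel) sequences.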
Related papers
- Helpful DoggyBot: Open-World Object Fetching using Legged Robots and Vision-Language Models [63.89598561397856]
We present a system for quadrupedal mobile manipulation in indoor environments.
It uses a front-mounted gripper for object manipulation and a low-level controller trained in simulation using egocentric depth for agile skills.
We evaluate our system in two unseen environments without any real-world data collection or training.
arXiv Detail & Related papers (2024-09-30T20:58:38Z) - Benchmarking Adaptive Intelligence and Computer Vision on Human-Robot Collaboration [0.0]
Human-Robot Collaboration (HRC) is vital in Industry 4.0, using sensors, digital twins, collaborative robots (cobots) and intention-recognition models to enable efficient manufacturing processes.
We address concept drift by integrating Adaptive Intelligence and self-labeling to improve the resilience of intention-recognition in an HRC system.
arXiv Detail & Related papers (2024-09-30T01:25:48Z) - EEG Right & Left Voluntary Hand Movement-based Virtual Brain-Computer Interfacing Keyboard with Machine Learning and a Hybrid Bi-Directional LSTM-GRU Model [0.0]
This study focuses on EEG-based BMI for detecting keystrokes.
It aims to develop a reliable brain-computer interface (BCI) to simulate and anticipate keystrokes.
arXiv Detail & Related papers (2024-08-18T02:10:29Z) - Enhancing Precision in Tactile Internet-Enabled Remote Robotic Surgery: Kalman Filter Approach [0.0]
This paper presents a Kalman Filter (KF) based computationally efficient position estimation method.
The study also assumes no prior knowledge of the dynamic model of the robotic arm system.
We investigate the effectiveness of KF to determine the position of the Patient Side Manipulator (PSM) under simulated network conditions.
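As a hedged illustration of the Kalman-filter idea referenced here (not the paper's formulation), the sketch below runs a textbook constant-velocity KF on noisy scalar position measurements; the matrices, noise levels, and sample period are assumptions.

```python
# Minimal constant-velocity Kalman filter sketch for smoothing noisy position
# samples without a detailed robot dynamics model. Values are illustrative.
import numpy as np

dt = 0.01                                  # assumed sample period (s)
F = np.array([[1, dt], [0, 1]])            # state transition for [position, velocity]
H = np.array([[1, 0]])                     # only position is measured
Q = 1e-4 * np.eye(2)                       # process noise (tuning assumption)
R = np.array([[1e-2]])                     # measurement noise (tuning assumption)

x = np.zeros((2, 1))                       # initial state estimate
P = np.eye(2)                              # initial covariance

def kf_step(z):
    """One predict/update cycle for a scalar position measurement z."""
    global x, P
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = np.array([[z]]) - H @ x            # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return float(x[0, 0])                  # filtered position estimate
```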
arXiv Detail & Related papers (2024-06-06T20:56:53Z) - Comparison of gait phase detection using traditional machine learning and deep learning techniques [3.11526333124308]
This study proposes several Machine Learning (ML)-based models for detecting gait phases from lower-limb EMG data during human walking.
The results show up to 75% average accuracy for traditional ML models and 79% for the Deep Learning (DL) model.
arXiv Detail & Related papers (2024-03-07T10:05:09Z) - Battle of the Backbones: A Large-Scale Comparison of Pretrained Models across Computer Vision Tasks [139.3768582233067]
Battle of the Backbones (BoB) is a benchmarking tool for neural network based computer vision systems.
We find that vision transformers (ViTs) and self-supervised learning (SSL) are increasingly popular.
In apples-to-apples comparisons on the same architectures and similarly sized pretraining datasets, we find that SSL backbones are highly competitive.
arXiv Detail & Related papers (2023-10-30T18:23:58Z) - Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z) - Sequential Best-Arm Identification with Application to Brain-Computer Interface [34.87975833920409]
A brain-computer interface (BCI) is a technology that enables direct communication between the brain and an external device or computer system.
An electroencephalogram (EEG) and event-related potential (ERP)-based speller system is a type of BCI that allows users to spell words without using a physical keyboard.
We propose a sequential top-two Thompson sampling (STTS) algorithm under the fixed-confidence setting and the fixed-budget setting.
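For orientation, the sketch below shows a generic top-two Thompson sampling loop for Bernoulli arms with Beta posteriors; the paper's STTS variant and its fixed-confidence/fixed-budget stopping rules are not reproduced here, and all parameter values are assumptions.

```python
# Generic top-two Thompson sampling sketch for Bernoulli arms (Beta posteriors).
import numpy as np

rng = np.random.default_rng(0)
n_arms = 6
alpha = np.ones(n_arms)   # Beta posterior: 1 + successes per arm
beta = np.ones(n_arms)    # Beta posterior: 1 + failures per arm
beta_param = 0.5          # probability of playing the posterior "leader"

def select_arm():
    # Leader: arm with the highest posterior sample.
    theta = rng.beta(alpha, beta)
    leader = int(np.argmax(theta))
    if rng.random() < beta_param:
        return leader
    # Challenger: resample until a different arm tops the draw.
    while True:
        theta = rng.beta(alpha, beta)
        challenger = int(np.argmax(theta))
        if challenger != leader:
            return challenger

def update(arm, reward):
    # Bernoulli reward in {0, 1} updates the Beta posterior of the played arm.
    alpha[arm] += reward
    beta[arm] += 1 - reward
```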
arXiv Detail & Related papers (2023-05-17T18:49:44Z) - FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human interventions, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas which impede the robot's motion, approaching the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z) - Hybrid Paradigm-based Brain-Computer Interface for Robotic Arm Control [0.9176056742068814]
A brain-computer interface (BCI) uses brain signals to communicate with external devices without requiring physical movement.
We propose a knowledge distillation-based framework to manipulate a robotic arm through hybrid-paradigm-induced EEG signals for practical use.
arXiv Detail & Related papers (2022-12-14T08:13:10Z) - FingerFlex: Inferring Finger Trajectories from ECoG signals [68.8204255655161]
The FingerFlex model is a convolutional encoder-decoder architecture adapted for finger-movement regression on electrocorticographic (ECoG) brain data.
State-of-the-art performance was achieved on a publicly available BCI competition IV dataset 4 with a correlation coefficient between true and predicted trajectories up to 0.74.
arXiv Detail & Related papers (2022-10-23T16:26:01Z) - Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamic and/or simulator model and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
arXiv Detail & Related papers (2022-09-07T15:15:12Z) - ProcTHOR: Large-Scale Embodied AI Using Procedural Generation [55.485985317538194]
ProcTHOR is a framework for procedural generation of Embodied AI environments.
We demonstrate state-of-the-art results across 6 embodied AI benchmarks for navigation, rearrangement, and arm manipulation.
arXiv Detail & Related papers (2022-06-14T17:09:35Z) - Toward smart composites: small-scale, untethered prediction and control for soft sensor/actuator systems [0.6465251961564604]
We present a suite of algorithms and tools for model-predictive control of sensor/actuator systems with embedded microcontroller units (MCUs).
These MCUs can be colocated with sensors and actuators, enabling a new class of smart composites capable of autonomous behavior.
Online Newton-Raphson optimization solves for the control input.
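As a hedged sketch of the "online Newton-Raphson solves for the control input" step, the code below applies Newton-Raphson to a simple quadratic tracking cost for a scalar control input; the cost, model, and numbers are illustrative and not taken from the paper.

```python
# Solve for a scalar control input by Newton-Raphson on a quadratic tracking
# cost. The model and cost below are illustrative assumptions only.

def newton_control(u0, cost_grad, cost_hess, iters=5, eps=1e-9):
    """Iterate u <- u - J'(u)/J''(u) starting from u0."""
    u = u0
    for _ in range(iters):
        h = cost_hess(u)
        if abs(h) < eps:           # guard against a flat (singular) Hessian
            break
        u = u - cost_grad(u) / h
    return u

# Example: track a setpoint r with a first-order model x_next = a*x + b*u,
# minimizing J(u) = (r - (a*x + b*u))**2 (illustrative numbers only).
a, b, x, r = 0.9, 0.5, 0.2, 1.0
grad = lambda u: -2 * b * (r - (a * x + b * u))
hess = lambda u: 2 * b * b
u_star = newton_control(0.0, grad, hess)
# With a quadratic cost this converges in one step to u* = (r - a*x) / b.
```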
arXiv Detail & Related papers (2022-05-22T22:19:09Z) - Bayesian Optimization and Deep Learning for steering wheel angle prediction [58.720142291102135]
This work aims to obtain an accurate model for the prediction of the steering angle in an automated driving system.
BO was able to identify, within a limited number of trials, a model (BOST-LSTM) that proved the most accurate when compared to classical end-to-end driving models.
arXiv Detail & Related papers (2021-10-22T15:25:14Z) - Wheelchair automation by a hybrid BCI system using SSVEP and eye blinks [1.1099588962062936]
The prototype is based on a combined mechanism of steady-state visually evoked potential and eye blinks.
The prototype can be used efficiently in a home environment without causing any discomfort to the user.
arXiv Detail & Related papers (2021-06-10T08:02:31Z) - One to Many: Adaptive Instrument Segmentation via Meta Learning and Dynamic Online Adaptation in Robotic Surgical Video [71.43912903508765]
MDAL is a dynamic online adaptive learning scheme for instrument segmentation in robot-assisted surgery.
It learns the general knowledge of instruments and the fast adaptation ability through the video-specific meta-learning paradigm.
It outperforms other state-of-the-art methods on two datasets.
arXiv Detail & Related papers (2021-03-24T05:02:18Z) - Domain Adaptive Robotic Gesture Recognition with Unsupervised Kinematic-Visual Data Alignment [60.31418655784291]
We propose a novel unsupervised domain adaptation framework which can simultaneously transfer multi-modality knowledge, i.e., both kinematic and visual data, from simulator to real robot.
It remedies the domain gap with enhanced transferable features by using temporal cues in videos and inherent correlations across modalities for gesture recognition.
Results show that our approach recovers the performance with great improvement gains, up to 12.91% in accuracy and 20.16% in F1 score, without using any annotations on the real robot.
arXiv Detail & Related papers (2021-03-06T09:10:03Z) - A Driving Behavior Recognition Model with Bi-LSTM and Multi-Scale CNN [59.57221522897815]
We propose a neural network model based on trajectory information for driving behavior recognition.
We evaluate the proposed model on the public BLVD dataset, achieving satisfactory performance.
arXiv Detail & Related papers (2021-03-01T06:47:29Z) - BeCAPTCHA-Mouse: Synthetic Mouse Trajectories and Improved Bot Detection [78.11535724645702]
We present BeCAPTCHA-Mouse, a bot detector based on a neuromotor model of mouse dynamics.
BeCAPTCHA-Mouse is able to detect bot trajectories of high realism with 93% accuracy on average, using only one mouse trajectory.
arXiv Detail & Related papers (2020-05-02T17:40:49Z) - Brain-based control of car infotainment [0.0]
We present a custom portable EEG-based Brain-Computer Interface (BCI) that exploits Event-Related Potentials (ERPs) induced with an oddball experimental paradigm to control the infotainment menu of a car.
Subject-specific models were trained with different machine learning approaches to classify EEG responses to target and non-target stimuli.
No statistical differences were observed between the classification accuracies for the in-lab and in-car training sets, nor between the EEG responses in these conditions.
arXiv Detail & Related papers (2020-04-24T20:32:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.