An Intelligent and Low-cost Eye-tracking System for Motorized Wheelchair
Control
- URL: http://arxiv.org/abs/2005.02118v1
- Date: Sat, 2 May 2020 23:08:33 GMT
- Title: An Intelligent and Low-cost Eye-tracking System for Motorized Wheelchair
Control
- Authors: Mahmoud Dahmani, Muhammad E. H. Chowdhury, Amith Khandakar, Tawsifur
Rahman, Khaled Al-Jayyousi, Abdalla Hefny, and Serkan Kiranyaz
- Abstract summary: The paper proposes a system to aid people with motor disabilities by restoring their ability to move effectively and effortlessly.
The system takes images of the user's eye as input, processes them to estimate the gaze direction, and moves the wheelchair accordingly.
- Score: 3.3003775275716376
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Across the 34 developed and 156 developing countries, about 132 million
disabled people need a wheelchair, constituting 1.86% of the world population.
Moreover, millions of people suffer from diseases related to motor disabilities
that cause an inability to produce controlled movement in any of the limbs or
even the head. The paper proposes a system to aid people with motor disabilities
by restoring their ability to move effectively and effortlessly, without having
to rely on others, using an eye-controlled electric wheelchair. The system takes
images of the user's eye as input, processes them to estimate the gaze direction,
and moves the wheelchair accordingly. To accomplish this, four user-specific
methods were developed, implemented, and tested, all of which were based on a
benchmark database created by the authors. The first three techniques are
automatic, correlation-based variants of template matching, while the fourth
uses convolutional neural networks (CNNs). Metrics quantifying each algorithm's
accuracy and latency were computed, and an overall comparison is presented. The
CNN exhibited the best performance (99.3% classification accuracy) and was
therefore chosen as the gaze estimator that commands the wheelchair motion. The
system was evaluated carefully on 8 subjects, achieving 99% accuracy under
changing illumination conditions both outdoors and indoors. This required
modifying a motorized wheelchair to adapt it to the predictions output by the
gaze estimation algorithm. The wheelchair controller can bypass any decision
made by the gaze estimator and immediately halt motion, with the help of an
array of proximity sensors, if the measured distance falls below a well-defined
safety margin.
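In essence, the correlation-based techniques reduce to picking the gaze label whose stored eye template correlates best with the incoming eye image. The following is a minimal sketch of that idea, not the authors' implementation: it assumes OpenCV's normalized cross-correlation (cv2.matchTemplate), and the template file paths and the set of gaze labels are illustrative placeholders.

```python
# Sketch of correlation-based template matching for gaze classification;
# template paths and gaze labels are assumptions for illustration only.
import cv2
import numpy as np

# Hypothetical per-user templates, e.g. built from a benchmark eye-image set.
TEMPLATES = {
    "forward": cv2.imread("templates/forward.png", cv2.IMREAD_GRAYSCALE),
    "left": cv2.imread("templates/left.png", cv2.IMREAD_GRAYSCALE),
    "right": cv2.imread("templates/right.png", cv2.IMREAD_GRAYSCALE),
    "closed": cv2.imread("templates/closed.png", cv2.IMREAD_GRAYSCALE),
}

def classify_gaze(eye_image: np.ndarray) -> str:
    """Return the gaze label whose template best matches the eye image."""
    scores = {}
    for label, template in TEMPLATES.items():
        # Slide the template over the eye image and keep the peak
        # normalized cross-correlation score for this label.
        result = cv2.matchTemplate(eye_image, template, cv2.TM_CCOEFF_NORMED)
        scores[label] = float(result.max())
    return max(scores, key=scores.get)
```

The paper ultimately replaces this estimator with the CNN (99.3% accuracy), but the decision interface is the same: an eye image in, a direction label out.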
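The safety layer described at the end of the abstract amounts to a hard override in the control loop: if any proximity sensor reports a distance below the safety margin, the wheelchair halts regardless of the estimated gaze. A hedged sketch follows; the margin value and the sensor-reading callable are assumptions, not values from the paper.

```python
# Sketch of the proximity-sensor override; SAFETY_MARGIN_CM and the
# read_proximity_cm() interface are hypothetical, not from the paper.
from typing import Callable, List

SAFETY_MARGIN_CM = 50.0  # the paper only states a "well-defined safety margin"

def control_step(eye_image,
                 read_proximity_cm: Callable[[], List[float]],
                 classify_gaze: Callable[[object], str]) -> str:
    """One control-loop iteration: the safety check bypasses the gaze estimator."""
    distances = read_proximity_cm()          # e.g. [front, left, right] distances in cm
    if min(distances) < SAFETY_MARGIN_CM:
        return "halt"                        # obstacle too close: stop immediately
    return classify_gaze(eye_image)          # otherwise follow the estimated gaze
```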
Related papers
- Estimating Body and Hand Motion in an Ego-sensed World [64.08911275906544]
We present EgoAllo, a system for human motion estimation from a head-mounted device.
Using only egocentric SLAM poses and images, EgoAllo guides sampling from a conditional diffusion model to estimate 3D body pose, height, and hand parameters.
arXiv Detail & Related papers (2024-10-04T17:59:57Z)
- WheelPoser: Sparse-IMU Based Body Pose Estimation for Wheelchair Users [7.5279679789210645]
We present WheelPoser, a real-time pose estimation system specifically designed for wheelchair users.
Our system uses only four strategically placed IMUs on the user's body and wheelchair, making it far more practical than prior systems using cameras and dense IMU arrays.
WheelPoser is able to track a wheelchair user's pose with a mean joint angle error of 14.30 degrees and a mean joint position error of 6.74 cm, more than three times better than similar systems using sparse IMUs.
arXiv Detail & Related papers (2024-09-13T02:41:49Z) - Unsupervised Domain Adaptation for Self-Driving from Past Traversal
Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z) - Deep learning-based approaches for human motion decoding in smart
walkers for rehabilitation [3.8791511769387634]
Smart walkers should be able to decode human motion and needs, as early as possible.
Current walkers decode motion intention using information of wearable or embedded sensors.
A contactless approach is proposed, addressing human motion decoding as an early action recognition/detection problem.
arXiv Detail & Related papers (2023-01-13T14:29:44Z) - Learning Deep Sensorimotor Policies for Vision-based Autonomous Drone
Racing [52.50284630866713]
Existing systems often require hand-engineered components for state estimation, planning, and control.
This paper tackles the vision-based autonomous-drone-racing problem by learning deep sensorimotor policies.
arXiv Detail & Related papers (2022-10-26T19:03:17Z) - Exploring Contextual Representation and Multi-Modality for End-to-End
Autonomous Driving [58.879758550901364]
Recent perception systems enhance spatial understanding with sensor fusion but often lack full environmental context.
We introduce a framework that integrates three cameras to emulate the human field of view, coupled with top-down bird-eye-view semantic data to enhance contextual representation.
Our method achieves a displacement error of 0.67 m in open-loop settings, surpassing current methods by 6.9% on the nuScenes dataset.
arXiv Detail & Related papers (2022-10-13T05:56:20Z) - A cost effective eye movement tracker based wheel chair control
algorithm for people with paraplegia [0.0]
This paper presents an approach that converts signals obtained from the eye into meaningful control signals for a bot that imitates a wheelchair.
The overall system is cost-effective and uses simple image processing and pattern recognition to control the bot.
An Android application is developed, which could be used by the patient's aide for more refined control of the wheelchair in a real-world scenario.
arXiv Detail & Related papers (2022-07-21T14:44:57Z) - Posture Prediction for Healthy Sitting using a Smart Chair [0.0]
Poor sitting habits have been identified as a risk factor for musculoskeletal disorders and lower back pain.
This study builds machine learning models for classifying a person's sitting posture.
arXiv Detail & Related papers (2022-01-05T20:31:28Z) - Driving-Signal Aware Full-Body Avatars [49.89791440532946]
We present a learning-based method for building driving-signal aware full-body avatars.
Our model is a conditional variational autoencoder that can be animated with incomplete driving signals.
We demonstrate the efficacy of our approach on the challenging problem of full-body animation for virtual telepresence.
arXiv Detail & Related papers (2021-05-21T16:22:38Z) - Online Body Schema Adaptation through Cost-Sensitive Active Learning [63.84207660737483]
The work was implemented in a simulation environment, using the 7DoF arm of the iCub robot simulator.
A cost-sensitive active learning approach is used to select optimal joint configurations.
The results show that cost-sensitive active learning achieves accuracy similar to the standard active learning approach while roughly halving the executed movement.
arXiv Detail & Related papers (2021-01-26T16:01:02Z) - Wheelchair Behavior Recognition for Visualizing Sidewalk Accessibility
by Deep Neural Networks [19.671946716832203]
This paper introduces our methodology to estimate sidewalk accessibilities from wheelchair behavior via a triaxial accelerometer in a smartphone installed under a wheelchair seat.
Our method recognizes sidewalk accessibilities from environmental factors, e.g. gradient, curbs, and gaps.
The authors developed and evaluated a prototype system that visualizes sidewalk accessibility information by extracting knowledge from wheelchair acceleration.
arXiv Detail & Related papers (2021-01-11T06:41:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.