AI-Powered Camera and Sensors for the Rehabilitation Hand Exoskeleton
- URL: http://arxiv.org/abs/2408.15248v1
- Date: Fri, 9 Aug 2024 04:47:37 GMT
- Title: AI-Powered Camera and Sensors for the Rehabilitation Hand Exoskeleton
- Authors: Md Abdul Baset Sarker, Juan Pablo Sola-thomas, Masudul H. Imtiaz
- Abstract summary: This project presents a vision-enabled rehabilitation hand exoskeleton to assist disabled persons in their hand movements.
The design goal was to create an accessible tool to help with a simple interface requiring no training.
- Score: 0.393259574660092
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Motor neurone diseases leave a large population worldwide disabled, negatively impacting their independence and quality of life. These conditions typically involve weakness in the hand and forearm muscles, making it difficult to perform fine motor tasks such as writing, buttoning a shirt, or gripping objects. This project presents a vision-enabled rehabilitation hand exoskeleton to assist disabled persons with their hand movements. The design goal was to create an accessible tool with a simple interface that requires no training. The prototype is built on a commercially available glove into which a camera and an embedded processor were integrated to help open and close the hand using air pressure, thus grabbing an object. An accelerometer is also implemented to detect a characteristic hand gesture that releases the object when desired. This passive vision-based control differs from active EMG-based designs in that it does not require individualized training. Continued research will reduce the cost, weight, and power consumption to facilitate mass implementation.
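The abstract describes a two-sensor control scheme: the camera triggers a pneumatic close when an object is in reach, and the accelerometer triggers a release on a characteristic gesture. A minimal sketch of that decision logic is below; the threshold values and function names are illustrative assumptions, not the authors' firmware.

```python
import math

# Illustrative sketch only: hypothetical thresholds, not the paper's firmware.
GRASP_CONFIDENCE = 0.8   # assumed camera-detector confidence to trigger a grasp
SHAKE_THRESHOLD_G = 2.5  # assumed acceleration magnitude (in g) for release

def next_action(holding, detection_score, accel_g):
    """Decide the pneumatic action for one control cycle.

    holding: whether the glove is currently pressurized (hand closed)
    detection_score: detector confidence that an object is within reach
    accel_g: (ax, ay, az) accelerometer reading in units of g
    Returns "close", "open", or "hold".
    """
    if not holding:
        # Passive vision trigger: no EMG, no per-user training required.
        return "close" if detection_score >= GRASP_CONFIDENCE else "hold"
    # Release on the characteristic hand gesture (here: a quick shake).
    magnitude = math.sqrt(sum(a * a for a in accel_g))
    return "open" if magnitude > SHAKE_THRESHOLD_G else "hold"
```

In a real embedded loop this function would run at a fixed rate, with "close" pressurizing the glove and "open" venting it; keeping the decision logic pure makes it easy to test without hardware.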
Related papers
- Vision Controlled Sensorized Prosthetic Hand [0.31666540219908274]
This paper presents a sensorized vision-enabled prosthetic hand aimed at replicating a natural hand's performance, functionality, appearance, and comfort.
The design goal was to create an accessible substitution with a user-friendly interface requiring little to no training.
arXiv Detail & Related papers (2024-06-25T14:44:04Z) - Reconfigurable Data Glove for Reconstructing Physical and Virtual Grasps [100.72245315180433]
We present a reconfigurable data glove design to capture different modes of human hand-object interactions.
The glove operates in three modes for various downstream tasks with distinct features.
We evaluate the system's three modes by (i) recording hand gestures and associated forces, (ii) improving manipulation fluency in VR, and (iii) producing realistic simulation effects of various tool uses.
arXiv Detail & Related papers (2023-01-14T05:35:50Z) - In-Hand Object Rotation via Rapid Motor Adaptation [59.59946962428837]
We show how to design and learn a simple adaptive controller to achieve in-hand object rotation using only fingertips.
The controller is trained entirely in simulation on only cylindrical objects.
It can be directly deployed to a real robot hand to rotate dozens of objects with diverse sizes, shapes, and weights over the z-axis.
arXiv Detail & Related papers (2022-10-10T17:58:45Z) - A cost effective eye movement tracker based wheel chair control algorithm for people with paraplegia [0.0]
This paper presents an approach to converting signals obtained from the eye into meaningful commands for controlling a bot that imitates a wheelchair.
The overall system is cost-effective and uses simple image processing and pattern recognition to control the bot.
An Android application is developed that a patient's aide could use for more refined control of the wheelchair in a real-world scenario.
arXiv Detail & Related papers (2022-07-21T14:44:57Z) - From Movement Kinematics to Object Properties: Online Recognition of Human Carefulness [112.28757246103099]
We show how a robot can infer online, from vision alone, whether or not the human partner is careful when moving an object.
We demonstrate that a humanoid robot can perform this inference with high accuracy (up to 81.3%) even with a low-resolution camera.
The prompt recognition of movement carefulness from observing the partner's action will allow robots to adapt their actions on the object to show the same degree of care as their human partners.
arXiv Detail & Related papers (2021-09-01T16:03:13Z) - Property-Aware Robot Object Manipulation: a Generative Approach [57.70237375696411]
In this work, we focus on how to generate robot motion adapted to the hidden properties of the manipulated objects.
We explore the possibility of leveraging Generative Adversarial Networks to synthesize new actions coherent with the properties of the object.
Our results show that Generative Adversarial Nets can be a powerful tool for the generation of novel and meaningful transportation actions.
arXiv Detail & Related papers (2021-06-08T14:15:36Z) - Learning Visually Guided Latent Actions for Assistive Teleoperation [9.75385535829762]
We develop assistive robots that condition their latent embeddings on visual inputs.
We show that incorporating object detectors pretrained on small amounts of cheap, easy-to-collect structured data enables i) accurately recognizing the current context and ii) generalizing control embeddings to new objects and tasks.
arXiv Detail & Related papers (2021-05-02T23:58:28Z) - Design of an Affordable Prosthetic Arm Equipped with Deep Learning Vision-Based Manipulation [0.0]
This paper lays the complete outline of the design process of an affordable and easily accessible novel prosthetic arm.
The 3D-printed prosthetic arm is equipped with a depth camera and a closed-loop, off-policy deep learning algorithm to help form grasps on the object in view.
We were able to achieve a 78% grasp success rate on previously unseen objects and generalize across multiple objects for manipulation tasks.
arXiv Detail & Related papers (2021-03-03T00:35:06Z) - Careful with That! Observation of Human Movements to Estimate Objects Properties [106.925705883949]
We focus on the features of human motor actions that communicate insights on the weight of an object.
Our final goal is to enable a robot to autonomously infer the degree of care required in object handling.
arXiv Detail & Related papers (2021-03-02T08:14:56Z) - Physics-Based Dexterous Manipulations with Estimated Hand Poses and Residual Reinforcement Learning [52.37106940303246]
We learn a model that maps noisy input hand poses to target virtual poses.
The agent is trained in a residual setting by using a model-free hybrid RL+IL approach.
We test our framework in two applications that use hand pose estimates for dexterous manipulations: hand-object interactions in VR and hand-object motion reconstruction in-the-wild.
arXiv Detail & Related papers (2020-08-07T17:34:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.