Design of an Affordable Prosthetic Arm Equipped with Deep Learning
Vision-Based Manipulation
- URL: http://arxiv.org/abs/2103.02099v1
- Date: Wed, 3 Mar 2021 00:35:06 GMT
- Title: Design of an Affordable Prosthetic Arm Equipped with Deep Learning
Vision-Based Manipulation
- Authors: Alishba Imran, William Escobar, Freidoon Barez
- Abstract summary: This paper presents the complete design process of an affordable and easily accessible novel prosthetic arm.
The 3D-printed prosthetic arm is equipped with a depth camera and a closed-loop, off-policy deep learning algorithm that helps it form grasps of objects in view.
We were able to achieve a 78% grasp success rate on previously unseen objects and generalize across multiple objects for manipulation tasks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many amputees worldwide have limited options for personally owning a
prosthetic arm due to high cost, mechanical complexity, and limited
availability. The three main control methods for prosthetic hands are: (1)
body-powered control, (2) extrinsic mechanical control, and (3) myoelectric
control. These methods can perform well in controlled settings but often break
down in clinical and everyday use due to poor robustness, weak adaptability,
long training periods, and heavy mental burden during use. This paper presents
the complete design process of an affordable and easily accessible novel
prosthetic arm that reduces the average cost of a prosthesis from $10,000 to
$700. The 3D-printed prosthetic arm is equipped with a depth camera and a
closed-loop, off-policy deep learning algorithm that helps it form grasps of
objects in view. Current work in reinforcement learning masters only individual
skills and is heavily focused on parallel-jaw grippers for in-hand
manipulation. To achieve the generalization needed for real-world manipulation,
we focus on the general framework of Markov Decision Processes (MDPs) and
scalable learning with off-policy algorithms such as deep deterministic policy
gradient (DDPG), and we study this question in the context of grasping with a
prosthetic arm. We achieved a 78% grasp success rate on previously unseen
objects and generalized across multiple objects for manipulation tasks. This
work will make prosthetics cheaper, easier to use, and globally accessible to
amputees. Future work includes applying similar approaches to other medical
assistive devices in which a human interacts with a machine to complete a
task.
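The paper's code is not reproduced here, but the off-policy actor-critic scheme the abstract names (DDPG) can be sketched in a few lines. This is a minimal, illustrative PyTorch version only: the network sizes, state dimension (standing in for depth-camera features), and action dimension (standing in for grasp/joint commands) are hypothetical choices, not values from the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
STATE_DIM, ACTION_DIM = 8, 4  # hypothetical: depth features -> grasp commands

def mlp(inp, out, act_out=None):
    layers = [nn.Linear(inp, 64), nn.ReLU(), nn.Linear(64, out)]
    if act_out is not None:
        layers.append(act_out)
    return nn.Sequential(*layers)

actor = mlp(STATE_DIM, ACTION_DIM, nn.Tanh())    # deterministic policy mu(s)
critic = mlp(STATE_DIM + ACTION_DIM, 1)          # action-value Q(s, a)
actor_targ = mlp(STATE_DIM, ACTION_DIM, nn.Tanh())
critic_targ = mlp(STATE_DIM + ACTION_DIM, 1)
actor_targ.load_state_dict(actor.state_dict())
critic_targ.load_state_dict(critic.state_dict())

opt_a = torch.optim.Adam(actor.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
GAMMA, TAU = 0.99, 0.005

def ddpg_update(batch):
    s, a, r, s2, done = batch
    # Critic: one-step TD target from target networks (off-policy, so the
    # batch may come from any past behavior policy via a replay buffer)
    with torch.no_grad():
        q_next = critic_targ(torch.cat([s2, actor_targ(s2)], dim=1))
        y = r + GAMMA * (1 - done) * q_next
    q = critic(torch.cat([s, a], dim=1))
    critic_loss = nn.functional.mse_loss(q, y)
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()
    # Actor: deterministic policy gradient, i.e. ascend Q(s, mu(s))
    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()
    # Polyak averaging keeps the target networks slowly tracking
    with torch.no_grad():
        for p, pt in zip(actor.parameters(), actor_targ.parameters()):
            pt.mul_(1 - TAU).add_(TAU * p)
        for p, pt in zip(critic.parameters(), critic_targ.parameters()):
            pt.mul_(1 - TAU).add_(TAU * p)
    return critic_loss.item(), actor_loss.item()

# One update on a random transition batch (stand-in for replay-buffer samples)
B = 32
batch = (torch.randn(B, STATE_DIM),
         torch.rand(B, ACTION_DIM) * 2 - 1,
         torch.randn(B, 1),
         torch.randn(B, STATE_DIM),
         torch.zeros(B, 1))
c_loss, a_loss = ddpg_update(batch)
```

In a real grasping pipeline, the random batch would be replaced by transitions sampled from a replay buffer of depth-camera observations and executed grasps, which is what makes the learning closed-loop and off-policy.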
Related papers
- AI-Powered Camera and Sensors for the Rehabilitation Hand Exoskeleton [0.393259574660092]
This project presents a vision-enabled rehabilitation hand exoskeleton to assist disabled persons in their hand movements.
The design goal was to create an accessible tool to help with a simple interface requiring no training.
arXiv Detail & Related papers (2024-08-09T04:47:37Z) - MindArm: Mechanized Intelligent Non-Invasive Neuro-Driven Prosthetic Arm System [5.528262076322921]
MindArm employs a deep neural network (DNN) to translate brain signals, captured by low-cost surface electroencephalogram (EEG) electrodes, into prosthetic arm movements.
The system costs approximately $500-550, including $400 for the EEG headset and $100-150 for motors, 3D printing, and assembly.
arXiv Detail & Related papers (2024-03-29T06:09:24Z) - Twisting Lids Off with Two Hands [82.21668778600414]
We show how policies trained in simulation can be effectively and efficiently transferred to the real world.
Specifically, we consider the problem of twisting lids of various bottle-like objects with two hands.
This is the first sim-to-real RL system that enables such capabilities on bimanual multi-fingered hands.
arXiv Detail & Related papers (2024-03-04T18:59:30Z) - Learning to Design and Use Tools for Robotic Manipulation [21.18538869008642]
Recent techniques for jointly optimizing morphology and control via deep learning are effective at designing locomotion agents.
We propose learning a designer policy, rather than a single design.
We show that this framework is more sample efficient than prior methods in multi-goal or multi-variant settings.
arXiv Detail & Related papers (2023-11-01T18:00:10Z) - Giving Robots a Hand: Learning Generalizable Manipulation with
Eye-in-Hand Human Video Demonstrations [66.47064743686953]
Eye-in-hand cameras have shown promise in enabling greater sample efficiency and generalization in vision-based robotic manipulation.
Videos of humans performing tasks, on the other hand, are much cheaper to collect since they eliminate the need for expertise in robotic teleoperation.
In this work, we augment narrow robotic imitation datasets with broad unlabeled human video demonstrations to greatly enhance the generalization of eye-in-hand visuomotor policies.
arXiv Detail & Related papers (2023-07-12T07:04:53Z) - Stabilizing Contrastive RL: Techniques for Robotic Goal Reaching from
Offline Data [101.43350024175157]
Self-supervised learning has the potential to decrease the amount of human annotation and engineering effort required to learn control strategies.
Our work builds on prior work showing that the reinforcement learning (RL) itself can be cast as a self-supervised problem.
We demonstrate that a self-supervised RL algorithm based on contrastive learning can solve real-world, image-based robotic manipulation tasks.
arXiv Detail & Related papers (2023-06-06T01:36:56Z) - DexArt: Benchmarking Generalizable Dexterous Manipulation with
Articulated Objects [8.195608430584073]
We propose a new benchmark called DexArt, which involves Dexterous manipulation with Articulated objects in a physical simulator.
Our main focus is to evaluate the generalizability of the learned policy on unseen articulated objects.
We use Reinforcement Learning with 3D representation learning to achieve generalization.
arXiv Detail & Related papers (2023-05-09T18:30:58Z) - Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware [132.39281056124312]
Fine manipulation tasks, such as threading cable ties or slotting a battery, are notoriously difficult for robots.
We present a low-cost system that performs end-to-end imitation learning directly from real demonstrations.
We develop a simple yet novel algorithm, Action Chunking with Transformers, which learns a generative model over action sequences.
arXiv Detail & Related papers (2023-04-23T19:10:53Z) - Dexterous Manipulation from Images: Autonomous Real-World RL via Substep
Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We show experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z) - Learning Dexterous Grasping with Object-Centric Visual Affordances [86.49357517864937]
Dexterous robotic hands are appealing for their agility and human-like morphology.
We introduce an approach for learning dexterous grasping.
Our key idea is to embed an object-centric visual affordance model within a deep reinforcement learning loop.
arXiv Detail & Related papers (2020-09-03T04:00:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.