Towards Biosignals-Free Autonomous Prosthetic Hand Control via Imitation Learning
- URL: http://arxiv.org/abs/2506.08795v1
- Date: Tue, 10 Jun 2025 13:44:08 GMT
- Title: Towards Biosignals-Free Autonomous Prosthetic Hand Control via Imitation Learning
- Authors: Kaijie Shi, Wanglong Lu, Hanli Zhao, Vinicius Prado da Fonseca, Ting Zou, Xianta Jiang
- Abstract summary: This study aims to develop a fully autonomous control system for a prosthetic hand. By placing the hand near an object, the system automatically executes a grasp with an appropriate grip force. To release a grasped object, the user simply places it close to the table and the system automatically opens the hand.
- Score: 1.072044330361478
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Limb loss affects millions globally, impairing physical function and reducing quality of life. Most traditional surface electromyographic (sEMG) and semi-autonomous methods require users to generate myoelectric signals for each control action, imposing physically and mentally taxing demands. This study aims to develop a fully autonomous control system that enables a prosthetic hand to automatically grasp and release objects of various shapes using only a camera attached to the wrist. By placing the hand near an object, the system automatically executes grasping actions with a proper grip force in response to the hand's movements and the environment. To release a grasped object, the user simply places it close to the table and the system automatically opens the hand. Such a system would give individuals with limb loss an easy-to-use prosthetic control interface and greatly reduce the mental effort of use. To achieve this goal, we developed a teleoperation system to collect human demonstration data and trained the prosthetic hand control model with imitation learning, so that the prosthetic hand mimics the demonstrated actions. Training the model on data from only a few objects and a single participant, we show that the imitation learning algorithm achieves high success rates and generalizes to more individuals and to unseen objects of varying weights. The demonstrations are available at https://sites.google.com/view/autonomous-prosthetic-hand
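The pipeline the abstract describes (wrist-camera frames in; grasp/release commands and grip force out; trained by imitation on teleoperated demonstrations) can be sketched concretely. Below is a minimal, hypothetical behavior-cloning setup in PyTorch; the network shape, the 96x96 input size, and the three-way hold/close/open action encoding are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a vision-only grasp policy of the kind the abstract
# describes: a wrist-camera image is mapped to a hand command and a grip force.
import torch
import torch.nn as nn

class GraspPolicy(nn.Module):
    """Behavior-cloning policy: wrist-camera frame -> hand action."""

    def __init__(self, num_actions: int = 3):
        super().__init__()
        # Small CNN encoder for 96x96 RGB wrist-camera frames (assumed size).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Discrete head: 0 = hold, 1 = close (grasp), 2 = open (release).
        self.action_head = nn.Linear(64, num_actions)
        # Continuous head: normalized grip force in [0, 1].
        self.force_head = nn.Sequential(nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, frame: torch.Tensor):
        z = self.encoder(frame)
        return self.action_head(z), self.force_head(z)

def bc_loss(policy, frames, action_labels, force_labels):
    """Imitation (behavior-cloning) loss against teleoperated demonstrations."""
    logits, force = policy(frames)
    return (nn.functional.cross_entropy(logits, action_labels)
            + nn.functional.mse_loss(force.squeeze(-1), force_labels))

if __name__ == "__main__":
    policy = GraspPolicy()
    frames = torch.randn(8, 3, 96, 96)    # batch of wrist-camera frames
    actions = torch.randint(0, 3, (8,))   # demonstrated hand commands
    forces = torch.rand(8)                # demonstrated grip forces
    loss = bc_loss(policy, frames, actions, forces)
    loss.backward()
    print(f"BC loss: {loss.item():.3f}")
```

At deployment, the same policy would run in a loop over live camera frames, which is how "placing the hand near an object" alone can trigger a grasp without any myoelectric input.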
Related papers
- HannesImitation: Grasping with the Hannes Prosthetic Hand via Imitation Learning [5.122722600158078]
In robotics, imitation learning has emerged as a promising approach for learning grasping and complex manipulation tasks. We present HannesImitationPolicy, an imitation learning-based method to control the Hannes prosthetic hand. We leverage demonstration data to train a single diffusion policy and deploy it on the prosthetic hand to predict the wrist orientation and hand closure for grasping (a minimal sketch of such a sampling loop appears after this list).
arXiv Detail & Related papers (2025-08-01T10:09:38Z)
- Feel the Force: Contact-Driven Learning from Humans [52.36160086934298]
Controlling fine-grained forces during manipulation remains a core challenge in robotics. We present FeelTheForce, a robot learning system that models human tactile behavior to learn force-sensitive manipulation. Our approach grounds robust low-level force control in scalable human supervision, achieving a 77% success rate across 5 force-sensitive manipulation tasks.
arXiv Detail & Related papers (2025-06-02T17:57:52Z)
- Imitation Learning for Adaptive Control of a Virtual Soft Exoglove [3.3030080038744947]
We propose a customized wearable robotic controller that is able to address specific muscle deficits and provide compensation for hand-object manipulation tasks. Video data of the same subject performing human grasping tasks is used to train a manipulation model through learning from demonstration. This manipulation model is subsequently fine-tuned to perform object-specific interaction tasks. The muscle forces in the musculoskeletal manipulation model are then weakened to simulate neurological motor impairments, which are later compensated by the actuation of a virtual wearable robotic glove.
arXiv Detail & Related papers (2025-05-14T03:09:21Z)
- AnyDexGrasp: General Dexterous Grasping for Different Hands with Human-level Learning Efficiency [49.868970174484204]
We introduce an efficient approach for learning dexterous grasping with minimal data. Our method achieves high performance with human-level learning efficiency: only hundreds of grasp attempts on 40 training objects. This method demonstrates promising applications for humanoid robots, prosthetics, and other domains requiring robust, versatile robotic manipulation.
arXiv Detail & Related papers (2025-02-23T03:26:06Z)
- Learning to Transfer Human Hand Skills for Robot Manipulations [12.797862020095856]
We present a method for teaching dexterous manipulation tasks to robots from human hand motion demonstrations. Our approach learns a joint motion manifold that maps human hand movements, robot hand actions, and object movements in 3D, enabling us to infer one motion from the others.
arXiv Detail & Related papers (2025-01-07T22:33:47Z)
- AI-Powered Camera and Sensors for the Rehabilitation Hand Exoskeleton [0.393259574660092]
This project presents a vision-enabled rehabilitation hand exoskeleton to assist disabled persons in their hand movements.
The design goal was to create an accessible tool with a simple interface that requires no training.
arXiv Detail & Related papers (2024-08-09T04:47:37Z)
- Naturalistic Robot Arm Trajectory Generation via Representation Learning [4.7682079066346565]
Integration of manipulator robots in household environments suggests a need for more predictable human-like robot motion.
One method of generating naturalistic motion trajectories is via imitation of human demonstrators.
This paper explores a self-supervised imitation learning method using an autoregressive neural network for an assistive drinking task.
arXiv Detail & Related papers (2023-09-14T09:26:03Z)
- GRIP: Generating Interaction Poses Using Spatial Cues and Latent Consistency [57.9920824261925]
Hands are dexterous and highly versatile manipulators that are central to how humans interact with objects and their environment.
Modeling realistic hand-object interactions is critical for applications in computer graphics, computer vision, and mixed reality.
GRIP is a learning-based method that takes as input the 3D motion of the body and the object, and synthesizes realistic motion for both hands before, during, and after object interaction.
arXiv Detail & Related papers (2023-08-22T17:59:51Z)
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We report experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
- In-Hand Object Rotation via Rapid Motor Adaptation [59.59946962428837]
We show how to design and learn a simple adaptive controller to achieve in-hand object rotation using only fingertips.
The controller is trained entirely in simulation on only cylindrical objects.
It can be directly deployed to a real robot hand to rotate dozens of objects with diverse sizes, shapes, and weights over the z-axis.
arXiv Detail & Related papers (2022-10-10T17:58:45Z)
- From One Hand to Multiple Hands: Imitation Learning for Dexterous Manipulation from Single-Camera Teleoperation [26.738893736520364]
We introduce a novel single-camera teleoperation system to collect the 3D demonstrations efficiently with only an iPad and a computer.
We construct a customized robot hand for each user in the physical simulator: a manipulator with the same kinematic structure and shape as the operator's hand.
With imitation learning using our data, we show large improvements over baselines on multiple complex manipulation tasks.
arXiv Detail & Related papers (2022-04-26T17:59:51Z)
- DexVIP: Learning Dexterous Grasping with Human Hand Pose Priors from Video [86.49357517864937]
We propose DexVIP, an approach to learn dexterous robotic grasping from human-object interaction videos.
We do this by curating grasp images from human-object interaction videos and imposing a prior over the agent's hand pose.
We demonstrate that DexVIP compares favorably to existing approaches that lack a hand pose prior or rely on specialized tele-operation equipment.
arXiv Detail & Related papers (2022-02-01T00:45:57Z)
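The HannesImitation entry above trains a single diffusion policy to predict wrist orientation and hand closure. As a rough illustration of what such a policy's inference could look like, here is a minimal DDPM-style action sampler; the network, the observation encoding, and the 3-D action layout are assumptions for the sketch, not the paper's implementation.

```python
# Hypothetical diffusion-policy inference: denoise a random action vector
# into a hand command, conditioned on visual observation features.
import torch
import torch.nn as nn

ACTION_DIM = 3   # assumed layout: [wrist_pitch, wrist_rotation, hand_closure]
NUM_STEPS = 50   # number of denoising steps

class NoisePredictor(nn.Module):
    """Predicts the noise in a noisy action, conditioned on an observation."""
    def __init__(self, obs_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ACTION_DIM + obs_dim + 1, 128), nn.ReLU(),
            nn.Linear(128, ACTION_DIM),
        )

    def forward(self, noisy_action, obs, t):
        t_feat = t.float().unsqueeze(-1) / NUM_STEPS  # normalized timestep
        return self.net(torch.cat([noisy_action, obs, t_feat], dim=-1))

@torch.no_grad()
def sample_action(model, obs):
    """Reverse diffusion: start from Gaussian noise, denoise to an action."""
    betas = torch.linspace(1e-4, 0.02, NUM_STEPS)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    a = torch.randn(obs.shape[0], ACTION_DIM)  # pure noise
    for t in reversed(range(NUM_STEPS)):
        eps = model(a, obs, torch.full((obs.shape[0],), t))
        # Standard DDPM posterior-mean update.
        a = (a - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:
            a = a + torch.sqrt(betas[t]) * torch.randn_like(a)
    return a  # e.g., [wrist_pitch, wrist_rotation, hand_closure]

model = NoisePredictor()
obs = torch.randn(1, 64)  # assumed image-encoder features for one frame
print(sample_action(model, obs))
```

In practice the noise predictor would be trained on teleoperated grasping demonstrations and the observation features would come from the wrist or eye-in-hand camera, mirroring the vision-only setup of the main paper.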