FunGrasp: Functional Grasping for Diverse Dexterous Hands
- URL: http://arxiv.org/abs/2411.16755v1
- Date: Sun, 24 Nov 2024 07:30:54 GMT
- Title: FunGrasp: Functional Grasping for Diverse Dexterous Hands
- Authors: Linyi Huang, Hui Zhang, Zijian Wu, Sammy Christen, Jie Song
- Abstract summary: We introduce FunGrasp, a system that enables functional dexterous grasping across various robot hands.
To achieve robust sim-to-real transfer, we employ several techniques including privileged learning, system identification, domain randomization, and gravity compensation.
- Score: 8.316017819784603
- Abstract: Functional grasping is essential for humans to perform specific tasks, such as grasping scissors by the finger holes to cut materials or by the blade to safely hand them over. Enabling dexterous robot hands with functional grasping capabilities is crucial for their deployment to accomplish diverse real-world tasks. Recent research in dexterous grasping, however, often focuses on power grasps while overlooking task- and object-specific functional grasping poses. In this paper, we introduce FunGrasp, a system that enables functional dexterous grasping across various robot hands and performs one-shot transfer to unseen objects. Given a single RGBD image of functional human grasping, our system estimates the hand pose and transfers it to different robotic hands via a human-to-robot (H2R) grasp retargeting module. Guided by the retargeted grasping poses, a policy is trained through reinforcement learning in simulation for dynamic grasping control. To achieve robust sim-to-real transfer, we employ several techniques including privileged learning, system identification, domain randomization, and gravity compensation. In our experiments, we demonstrate that our system enables diverse functional grasping of unseen objects using single RGBD images, and can be successfully deployed across various dexterous robot hands. The significance of the components is validated through comprehensive ablation studies. Project page: https://hly-123.github.io/FunGrasp/ .
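The abstract names four sim-to-real techniques; the most mechanical of these, domain randomization, resamples simulator physics each episode so the learned policy cannot overfit to one configuration. A minimal illustrative sketch follows; the parameter names and ranges are assumptions for illustration, not values from the paper:

```python
import random

# Hypothetical domain-randomization step: physical parameters are
# resampled at the start of each training episode. Names and ranges
# are illustrative assumptions, not taken from FunGrasp.
def sample_randomized_params(rng: random.Random) -> dict:
    return {
        "object_mass_scale": rng.uniform(0.8, 1.2),    # +/-20% mass error
        "friction_coeff":    rng.uniform(0.5, 1.5),    # contact friction
        "joint_damping":     rng.uniform(0.9, 1.1),    # actuator model error
        "gravity_z":         rng.uniform(-10.0, -9.6), # gravity perturbation
    }

rng = random.Random(0)
params = sample_randomized_params(rng)  # one episode's physics draw
```

In a full pipeline, each draw would be written into the simulator before an episode begins, and the policy would be trained across many such draws.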
Related papers
- Learning Granularity-Aware Affordances from Human-Object Interaction for Tool-Based Functional Grasping in Dexterous Robotics [27.124273762587848]
Affordance features of objects serve as a bridge in the functional interaction between agents and objects.
We propose a granularity-aware affordance feature extraction method for locating functional affordance areas.
We also use highly activated coarse-grained affordance features in hand-object interaction regions to predict grasp gestures.
This forms a complete dexterous robotic functional grasping framework, GAAF-Dex.
arXiv Detail & Related papers (2024-06-30T07:42:57Z)
- Twisting Lids Off with Two Hands [82.21668778600414]
We show how policies trained in simulation can be effectively and efficiently transferred to the real world.
Specifically, we consider the problem of twisting lids of various bottle-like objects with two hands.
This is the first sim-to-real RL system that enables such capabilities on bimanual multi-fingered hands.
arXiv Detail & Related papers (2024-03-04T18:59:30Z)
- Robotic Handling of Compliant Food Objects by Robust Learning from Demonstration [79.76009817889397]
We propose a robust learning policy based on Learning from Demonstration (LfD) for robotic grasping of food compliant objects.
We present an LfD learning policy that automatically removes inconsistent demonstrations, and estimates the teacher's intended policy.
The proposed approach has a vast range of potential applications in the aforementioned industry sectors.
arXiv Detail & Related papers (2023-09-22T13:30:26Z)
- Few-Shot Learning of Force-Based Motions From Demonstration Through Pre-training of Haptic Representation [10.553635668779911]
Existing Learning from Demonstration (LfD) approaches require a large number of costly human demonstrations.
Our proposed semi-supervised LfD approach decouples the learned model into a haptic representation encoder and a motion generation decoder.
This enables us to pre-train the former on a large amount of easily accessible unsupervised data, while using few-shot LfD to train the latter.
We validate the motion generated by our semi-supervised LfD model on the physical robot hardware using the KUKA iiwa robot arm.
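The decoupling described above (pre-trained encoder, few-shot decoder) can be sketched in miniature; every class, method, and the nearest-neighbour decoding rule here is an illustrative assumption, not the paper's architecture:

```python
# Hypothetical sketch of the decoupled model: a haptic representation
# encoder pre-trained on unlabeled data, plus a motion generation
# decoder fitted from only a few demonstrations. Names and the
# nearest-demonstration lookup are illustrative assumptions.
class HapticEncoder:
    def pretrain(self, unlabeled_signals):
        # Stand-in for self-supervised pre-training on abundant data.
        self.mean = sum(map(sum, unlabeled_signals)) / sum(map(len, unlabeled_signals))

    def encode(self, signal):
        # Placeholder representation: deviation from the pre-trained mean.
        return [x - self.mean for x in signal]

class MotionDecoder:
    def fit_few_shot(self, latents, motions):
        # Few-shot LfD: store the handful of (latent, motion) pairs.
        self.pairs = list(zip(latents, motions))

    def decode(self, latent):
        # Return the motion of the nearest demonstration in latent space.
        dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
        return min(self.pairs, key=lambda p: dist(p[0], latent))[1]

encoder = HapticEncoder()
encoder.pretrain([[0.0, 1.0], [1.0, 2.0]])             # unsupervised signals
decoder = MotionDecoder()
decoder.fit_few_shot([encoder.encode([0.0, 1.0]),
                      encoder.encode([1.0, 2.0])],
                     ["press_soft", "press_firm"])      # two demonstrations
action = decoder.decode(encoder.encode([0.9, 2.1]))     # query a new signal
```

The point of the split is that only the decoder's small table needs labeled demonstrations; the encoder's statistics come from unlabeled data.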
arXiv Detail & Related papers (2023-09-08T23:42:59Z)
- Reconfigurable Data Glove for Reconstructing Physical and Virtual Grasps [100.72245315180433]
We present a reconfigurable data glove design to capture different modes of human hand-object interactions.
The glove operates in three modes for various downstream tasks with distinct features.
We evaluate the system's three modes by (i) recording hand gestures and associated forces, (ii) improving manipulation fluency in VR, and (iii) producing realistic simulation effects of various tool uses.
arXiv Detail & Related papers (2023-01-14T05:35:50Z)
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective.
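A reward defined as a distance to the goal in a learned embedding space can be sketched directly; here `embedding_reward` is a hypothetical name and the encoder itself (the time-contrastively trained network) is assumed to exist outside the snippet:

```python
import math

# Illustrative goal-distance reward in a learned embedding space:
# the reward is the negative Euclidean distance between the embedding
# of the current observation and the embedding of the goal.
def embedding_reward(obs_embedding, goal_embedding):
    dist = math.sqrt(sum((o - g) ** 2 for o, g in zip(obs_embedding, goal_embedding)))
    return -dist

# At the goal the reward is maximal (zero); farther away it is more negative.
r_goal = embedding_reward([1.0, 2.0], [1.0, 2.0])   # -> 0.0
r_far  = embedding_reward([1.0, 2.0], [4.0, 6.0])   # -> -5.0
```

A policy maximizing this reward is driven to make its observations look like the goal image, as seen through the learned encoder.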
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
- DexTransfer: Real World Multi-fingered Dexterous Grasping with Minimal Human Demonstrations [51.87067543670535]
We propose a robot-learning system that can take a small number of human demonstrations and learn to grasp unseen object poses.
We train a dexterous grasping policy that takes the point clouds of the object as input and predicts continuous actions to grasp objects from different initial robot states.
The policy learned from our dataset can generalize well on unseen object poses in both simulation and the real world.
arXiv Detail & Related papers (2022-09-28T17:51:49Z)
- Design and Control of Roller Grasper V2 for In-Hand Manipulation [6.064252790182275]
We present a novel non-anthropomorphic robot grasper with the ability to manipulate objects by means of active surfaces at the fingertips.
Active surfaces are achieved by spherical rolling fingertips with two degrees of freedom (DoF).
A further DoF in the base of each finger allows the fingers to grasp objects over a range of sizes and shapes.
arXiv Detail & Related papers (2020-04-18T00:54:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.