DexCatch: Learning to Catch Arbitrary Objects with Dexterous Hands
- URL: http://arxiv.org/abs/2310.08809v2
- Date: Sun, 18 Aug 2024 12:22:38 GMT
- Title: DexCatch: Learning to Catch Arbitrary Objects with Dexterous Hands
- Authors: Fengbo Lan, Shengjie Wang, Yunzhe Zhang, Haotian Xu, Oluwatosin Oseni, Ziye Zhang, Yang Gao, Tao Zhang
- Abstract summary: We propose a Learning-based framework for Throwing-Catching tasks using dexterous hands.
Our method achieves a 73% success rate across 45 scenarios (diverse hand poses and objects).
In tasks where the object in hand faces sideways, an extremely unstable scenario due to the lack of support from the palm, our method still achieves a success rate of over 60%.
- Score: 14.712280514097912
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Achieving human-like dexterous manipulation remains a crucial area of research in robotics. Current research focuses on improving the success rate of pick-and-place tasks. Compared with pick-and-place, throwing-catching behavior has the potential to increase the speed of transporting objects to their destination. However, dynamic dexterous manipulation poses a major challenge for stable control due to a large number of dynamic contacts. In this paper, we propose a Learning-based framework for Throwing-Catching tasks using dexterous hands (LTC). Our method, LTC, achieves a 73% success rate across 45 scenarios (diverse hand poses and objects), and the learned policies demonstrate strong zero-shot transfer performance on unseen objects. Additionally, in tasks where the object in hand faces sideways, an extremely unstable scenario due to the lack of support from the palm, all baselines fail, while our method still achieves a success rate of over 60%.
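As a concrete illustration of how such a throw-catch policy might be reward-shaped, here is a minimal sketch of a per-step reward, assuming hypothetical state inputs (object pose, palm pose, fingertip positions); the terms and weights are illustrative, not the reward actually used in LTC:

```python
import numpy as np

def throw_catch_reward(obj_pos, obj_vel, palm_pos, fingertip_pos, caught):
    """Hypothetical shaped reward for one throw-catch step.

    All terms are illustrative, not the LTC paper's reward: a distance
    term pulls the catching hand under the flying object, a contact term
    rewards fingertips near the object, and a sparse bonus fires when
    the object is judged 'caught'.
    """
    dist = np.linalg.norm(obj_pos - palm_pos)
    reach = np.exp(-5.0 * dist)                          # dense tracking term
    contact = np.exp(-10.0 * np.linalg.norm(fingertip_pos - obj_pos, axis=-1)).mean()
    stability = np.exp(-0.5 * np.linalg.norm(obj_vel))   # damp object motion
    return reach + 0.5 * contact + 0.2 * stability + (10.0 if caught else 0.0)

# Toy call with stand-in states, just to show the shapes involved.
r = throw_catch_reward(
    obj_pos=np.array([0.1, 0.0, 0.4]),
    obj_vel=np.array([0.0, 0.0, -1.0]),
    palm_pos=np.array([0.0, 0.0, 0.3]),
    fingertip_pos=np.random.randn(5, 3) * 0.05 + np.array([0.1, 0.0, 0.4]),
    caught=False,
)
print(round(r, 3))
```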
Related papers
- Dynamic object goal pushing with mobile manipulators through model-free constrained reinforcement learning [9.305146484955296]
We develop a learning-based controller for a mobile manipulator to move an unknown object to a desired position and yaw orientation through a sequence of pushing actions.
The proposed controller for the robotic arm and the mobile base motion is trained using a constrained Reinforcement Learning (RL) formulation.
The learned policy achieves a success rate of 91.35% in simulation and at least 80% on hardware in challenging scenarios.
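A minimal sketch of the dual (Lagrange-multiplier) update that generic constrained RL formulations use, assuming a scalar constraint cost with budget `limit`; this is the textbook relaxation, not necessarily the paper's exact formulation:

```python
import numpy as np

def dual_update(lmbda, constraint_return, limit, lr=0.05):
    """One Lagrange-multiplier step for constrained RL.

    Generic Lagrangian relaxation, not necessarily the paper's exact
    scheme: the policy maximizes reward - lmbda * cost, while the
    multiplier rises when the expected constraint cost exceeds `limit`
    and decays toward zero otherwise.
    """
    return max(0.0, lmbda + lr * (constraint_return - limit))

lmbda = 0.0
for cost in [1.4, 1.2, 0.9, 0.7]:   # fake per-iteration constraint costs
    lmbda = dual_update(lmbda, cost, limit=1.0)
    print(f"cost={cost:.1f} -> lambda={lmbda:.3f}")
```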
arXiv Detail & Related papers (2025-02-03T17:28:35Z) - Local Policies Enable Zero-shot Long-horizon Manipulation [80.1161776000682]
We introduce ManipGen, which leverages a new class of policies for sim2real transfer: local policies.
ManipGen outperforms SOTA approaches such as SayCan, OpenVLA, LLMTrajGen and VoxPoser across 50 real-world manipulation tasks by 36%, 76%, 62% and 60% respectively.
arXiv Detail & Related papers (2024-10-29T17:59:55Z) - Single-Shot Learning of Stable Dynamical Systems for Long-Horizon Manipulation Tasks [48.54757719504994]
This paper focuses on improving task success rates while reducing the amount of training data needed.
Our approach introduces a novel method that segments long-horizon demonstrations into discrete steps defined by waypoints and subgoals.
We validate our approach through both simulation and real-world experiments, demonstrating effective transfer from simulation to physical robotic platforms.
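A minimal sketch of one plausible segmentation heuristic, assuming waypoints are cut where the end-effector pauses; the threshold and gap values are illustrative, not the paper's:

```python
import numpy as np

def segment_by_waypoints(traj, speed_thresh=0.001, min_gap=25):
    """Split one long-horizon demonstration into discrete steps.

    A common heuristic (illustrative, not necessarily the paper's):
    treat near-zero end-effector speed as a pause and cut a waypoint
    there, enforcing a minimum gap between consecutive cuts.
    """
    speeds = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    cuts, last = [], -min_gap
    for t, s in enumerate(speeds):
        if s < speed_thresh and t - last >= min_gap:
            cuts.append(t)
            last = t
    return [traj[a:b] for a, b in zip([0] + cuts, cuts + [len(traj)])]

# Toy trajectory: move, pause, then move again.
traj = np.concatenate([
    np.linspace([0, 0, 0], [0.3, 0, 0], 50),
    np.repeat([[0.3, 0, 0]], 20, axis=0),
    np.linspace([0.3, 0, 0], [0.3, 0.2, 0], 50),
])
print([len(seg) for seg in segment_by_waypoints(traj)])  # cut at the pause
```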
arXiv Detail & Related papers (2024-10-01T19:49:56Z) - Hand-Object Interaction Pretraining from Videos [77.92637809322231]
We learn general robot manipulation priors from 3D hand-object interaction trajectories.
We do so by lifting both the human hand and the manipulated object into a shared 3D space and retargeting human motions to robot actions.
We empirically demonstrate that finetuning this policy, with both reinforcement learning (RL) and behavior cloning (BC), enables sample-efficient adaptation to downstream tasks and simultaneously improves robustness and generalizability compared to prior approaches.
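A minimal sketch of the behavior-cloning half of such finetuning, assuming a hypothetical pretrained checkpoint and placeholder observation/action dimensions:

```python
import torch
import torch.nn as nn

# Hypothetical finetuning sketch: start from a pretrained manipulation
# prior and adapt it to a downstream task with behavior cloning (BC).
# Network shape, checkpoint name, and data are placeholders, not the paper's.
policy = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 16))
# policy.load_state_dict(torch.load("hoi_pretrained.pt"))  # assumed checkpoint
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

obs = torch.randn(256, 32)          # stand-in downstream observations
expert_act = torch.randn(256, 16)   # stand-in demonstration actions

for step in range(100):
    loss = nn.functional.mse_loss(policy(obs), expert_act)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final BC loss: {loss.item():.4f}")
```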
arXiv Detail & Related papers (2024-09-12T17:59:07Z) - GraspXL: Generating Grasping Motions for Diverse Objects at Scale [30.104108863264706]
We unify the generation of hand-object grasping motions across multiple motion objectives in a policy learning framework GraspXL.
Our policy trained with 58 objects can robustly synthesize diverse grasping motions for more than 500k unseen objects with a success rate of 82.2%.
Our framework can be deployed to different dexterous hands and work with reconstructed or generated objects.
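A minimal sketch of the general recipe of combining several motion objectives into one policy reward, with stand-in objective scores and weights (not GraspXL's actual terms):

```python
def multi_objective_reward(objectives, weights):
    """Combine several motion objectives into one policy reward.

    Illustrative of the general recipe, not GraspXL's exact terms:
    each objective (e.g. reach a wrist pose, match a heading direction,
    keep contact) yields a per-step score, and a weighted sum trains a
    single policy that honors all of them.
    """
    return sum(w * o for w, o in zip(weights, objectives))

scores = [0.8, 0.6, 0.9]   # stand-in per-objective scores
print(multi_objective_reward(scores, weights=[1.0, 0.5, 0.5]))
```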
arXiv Detail & Related papers (2024-03-28T17:57:27Z) - Physically Plausible Full-Body Hand-Object Interaction Synthesis [32.83908152822006]
We propose a physics-based method for synthesizing dexterous hand-object interactions in a full-body setting.
Existing methods often focus on isolated segments of the interaction process and rely on data-driven techniques that may result in artifacts.
arXiv Detail & Related papers (2023-09-14T17:55:18Z) - HACMan: Learning Hybrid Actor-Critic Maps for 6D Non-Prehensile Manipulation [29.01984677695523]
We introduce Hybrid Actor-Critic Maps for Manipulation (HACMan), a reinforcement learning approach for 6D non-prehensile manipulation of objects.
We evaluate HACMan on a 6D object pose alignment task in both simulation and the real world.
Compared to alternative action representations, HACMan achieves a success rate more than three times higher than the best baseline.
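A minimal sketch of the hybrid discrete-continuous action selection this implies, assuming stand-in per-point critic scores and actor parameters over an object point cloud:

```python
import numpy as np

rng = np.random.default_rng(0)

def select_action(points, critic_scores, actor_params):
    """Hybrid discrete-continuous action selection, HACMan-style.

    Illustrative only: the critic assigns a score to every point of the
    object point cloud (where to make contact), and the actor attaches
    continuous motion parameters to each point (how to push from there).
    The executed action pairs the argmax point with its parameters.
    """
    best = int(np.argmax(critic_scores))
    return points[best], actor_params[best]

points = rng.uniform(-0.1, 0.1, size=(512, 3))   # stand-in object cloud
scores = rng.standard_normal(512)                # stand-in per-point values
params = rng.uniform(-1, 1, size=(512, 3))       # stand-in push directions
contact, motion = select_action(points, scores, params)
print("contact point:", np.round(contact, 3), "motion:", np.round(motion, 2))
```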
arXiv Detail & Related papers (2023-05-06T05:55:27Z) - Interacting Hand-Object Pose Estimation via Dense Mutual Attention [97.26400229871888]
3D hand-object pose estimation is the key to the success of many computer vision applications.
We propose a novel dense mutual attention mechanism that is able to model fine-grained dependencies between the hand and the object.
Our method is able to produce physically plausible poses with high quality and real-time inference speed.
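A minimal sketch of bidirectional cross-attention between hand-node and object-node features, shown only to illustrate the mutual-attention idea; shapes and feature sources are placeholders:

```python
import torch
import torch.nn.functional as F

def mutual_attention(hand_feat, obj_feat):
    """Toy dense mutual attention between hand and object features.

    Generic bidirectional cross-attention, not the paper's exact layer:
    every hand node attends to every object node and vice versa, so
    fine-grained dependencies flow in both directions.
    """
    d = hand_feat.shape[-1]
    h2o = F.softmax(hand_feat @ obj_feat.T / d**0.5, dim=-1) @ obj_feat
    o2h = F.softmax(obj_feat @ hand_feat.T / d**0.5, dim=-1) @ hand_feat
    return hand_feat + h2o, obj_feat + o2h

hand = torch.randn(21, 64)   # e.g. 21 hand-joint node features
obj = torch.randn(100, 64)   # e.g. 100 sampled object-vertex features
hand_out, obj_out = mutual_attention(hand, obj)
print(hand_out.shape, obj_out.shape)
```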
arXiv Detail & Related papers (2022-11-16T10:01:33Z) - Reactive Human-to-Robot Handovers of Arbitrary Objects [57.845894608577495]
We present a vision-based system that enables human-to-robot handovers of unknown objects.
Our approach combines closed-loop motion planning with real-time, temporally-consistent grasp generation.
We demonstrate the generalizability, usability, and robustness of our approach on a novel benchmark set of 26 diverse household objects.
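A minimal sketch of one way to keep grasp selection temporally consistent across frames, using a hysteresis heuristic with stand-in candidates and scores (illustrative, not the paper's actual filter):

```python
import numpy as np

def pick_grasp(candidates, scores, prev_grasp, hysteresis=0.1):
    """Temporally consistent grasp selection (illustrative heuristic).

    A reactive handover system should not jitter between grasps as the
    human moves the object; one simple scheme is hysteresis: keep the
    candidate closest to the previous grasp unless another candidate
    scores clearly higher.
    """
    best = int(np.argmax(scores))
    if prev_grasp is None:
        return candidates[best]
    nearest = int(np.argmin(np.linalg.norm(candidates - prev_grasp, axis=1)))
    if scores[best] - scores[nearest] > hysteresis:
        return candidates[best]
    return candidates[nearest]

rng = np.random.default_rng(1)
prev = None
for frame in range(3):   # fake per-frame grasp proposals
    cands = rng.uniform(-0.05, 0.05, size=(8, 3))
    prev = pick_grasp(cands, rng.random(8), prev)
    print(np.round(prev, 3))
```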
arXiv Detail & Related papers (2020-11-17T21:52:22Z) - Learning Dexterous Grasping with Object-Centric Visual Affordances [86.49357517864937]
Dexterous robotic hands are appealing for their agility and human-like morphology.
We introduce an approach for learning dexterous grasping.
Our key idea is to embed an object-centric visual affordance model within a deep reinforcement learning loop.
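A minimal sketch of folding an affordance prior into an RL reward, assuming a hypothetical affordance scoring function; the bonus weight and the toy "handle" prior are illustrative:

```python
import numpy as np

def affordance_shaped_reward(grasp_point, affordance_map, task_reward):
    """Fold an object-centric affordance prior into an RL reward.

    Hypothetical shaping, not the paper's exact mechanism:
    `affordance_map` scores how graspable a contact point is (e.g. from
    a visual affordance model), and the agent earns the task reward
    plus a bonus for touching the object where the prior says humans
    would grasp it.
    """
    return task_reward + 0.5 * affordance_map(grasp_point)

# Toy affordance: prefer contacts near a "handle" at [0.1, 0, 0].
handle = np.array([0.1, 0.0, 0.0])
aff = lambda p: np.exp(-20.0 * np.linalg.norm(p - handle))
print(round(affordance_shaped_reward(np.array([0.08, 0.0, 0.01]), aff, 1.0), 3))
```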
arXiv Detail & Related papers (2020-09-03T04:00:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.