DexCatch: Learning to Catch Arbitrary Objects with Dexterous Hands
- URL: http://arxiv.org/abs/2310.08809v1
- Date: Fri, 13 Oct 2023 01:36:46 GMT
- Title: DexCatch: Learning to Catch Arbitrary Objects with Dexterous Hands
- Authors: Fengbo Lan, Shengjie Wang, Yunzhe Zhang, Haotian Xu, Oluwatosin Oseni,
Yang Gao, Tao Zhang
- Abstract summary: We propose a Stability-Constrained Reinforcement Learning algorithm to learn to catch diverse objects with dexterous hands.
The SCRL algorithm outperforms baselines by a large margin, and the learned policies show strong zero-shot transfer performance on unseen objects.
- Score: 15.884572907009039
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Achieving human-like dexterous manipulation remains a crucial area of
research in robotics. Current research focuses on improving the success rate of
pick-and-place tasks. Compared with pick-and-place, throw-catching behavior has
the potential to increase picking speed without transporting objects to their
destination. However, dynamic dexterous manipulation poses a major challenge
for stable control due to a large number of dynamic contacts. In this paper, we
propose a Stability-Constrained Reinforcement Learning (SCRL) algorithm to
learn to catch diverse objects with dexterous hands. The SCRL algorithm
outperforms baselines by a large margin, and the learned policies show strong
zero-shot transfer performance on unseen objects. Remarkably, even though an
object held in a sideward-facing hand is extremely unstable due to the lack of
support from the palm, our method still achieves a high success rate on this
most challenging task. Video demonstrations of learned behaviors and the
code can be found on the supplementary website.
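The abstract does not specify how SCRL enforces its stability constraint. As one illustrative sketch only, constrained RL objectives are commonly handled via a Lagrangian relaxation; every name, cost signal, and coefficient below is an assumption for illustration, not the paper's actual formulation.

```python
def scrl_reward(task_reward, stability_cost, lam, cost_limit=0.1):
    """Lagrangian-relaxed reward: subtract a penalty whenever a (hypothetical)
    stability cost, e.g. object tilt or slip magnitude, exceeds its limit."""
    return task_reward - lam * max(stability_cost - cost_limit, 0.0)


def dual_ascent(lam, avg_cost, cost_limit=0.1, lr=0.01):
    """Dual update: raise the multiplier while the average stability cost
    violates the limit, pushing the policy back toward stable behavior."""
    return max(lam + lr * (avg_cost - cost_limit), 0.0)
```

In this pattern the policy is trained on `scrl_reward` while `dual_ascent` adapts the multiplier between iterations, so constraint pressure grows only when stability is actually being violated.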
Related papers
- Offline Imitation Learning Through Graph Search and Retrieval [57.57306578140857]
Imitation learning is a powerful machine learning algorithm for a robot to acquire manipulation skills.
We propose GSR, a simple yet effective algorithm that learns from suboptimal demonstrations through Graph Search and Retrieval.
GSR can achieve a 10% to 30% higher success rate and over 30% higher proficiency compared to baselines.
arXiv Detail & Related papers (2024-07-22T06:12:21Z)
- Twisting Lids Off with Two Hands [88.20584085182857]
We show that policies trained in simulation using deep reinforcement learning can be effectively transferred to the real world.
Our findings serve as compelling evidence that deep reinforcement learning combined with sim-to-real transfer remains a promising approach for addressing manipulation problems of unprecedented complexity.
arXiv Detail & Related papers (2024-03-04T18:59:30Z)
- Sequential Dexterity: Chaining Dexterous Policies for Long-Horizon Manipulation [28.37417344133933]
We present Sequential Dexterity, a general system that chains multiple dexterous policies for achieving long-horizon task goals.
The core of the system is a transition feasibility function that progressively finetunes the sub-policies for enhancing chaining success rate.
Our system demonstrates generalization capability to novel object shapes and is able to zero-shot transfer to a real-world robot equipped with a dexterous hand.
arXiv Detail & Related papers (2023-09-02T16:55:48Z)
- Latent Exploration for Reinforcement Learning [87.42776741119653]
In Reinforcement Learning, agents learn policies by exploring and interacting with the environment.
We propose LATent TIme-Correlated Exploration (Lattice), a method to inject temporally-correlated noise into the latent state of the policy network.
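Temporally-correlated exploration can be illustrated with a generic Ornstein-Uhlenbeck-style process; Lattice's actual scheme, which perturbs the latent state of the policy network, differs in detail, so this is an assumption-laden sketch rather than the paper's method.

```python
import numpy as np


class LatentOUNoise:
    """Mean-reverting noise process: successive samples are correlated in
    time, unlike i.i.d. Gaussian exploration noise."""

    def __init__(self, dim, theta=0.15, sigma=0.2, seed=0):
        self.theta, self.sigma = theta, sigma
        self.rng = np.random.default_rng(seed)
        self.state = np.zeros(dim)

    def sample(self):
        # Pull the state toward zero, then add fresh Gaussian noise;
        # the carried-over state is what induces temporal correlation.
        self.state += (-self.theta * self.state
                       + self.sigma * self.rng.normal(size=self.state.shape))
        return self.state.copy()
```

Adding such a sample to a latent vector (instead of to the action) is the general idea the summary describes; the hyperparameters here are illustrative.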
arXiv Detail & Related papers (2023-05-31T17:40:43Z)
- DexPBT: Scaling up Dexterous Manipulation for Hand-Arm Systems with Population Based Training [10.808149303943948]
We learn dexterous object manipulation using simulated one- or two-armed robots equipped with multi-fingered hand end-effectors.
We introduce a decentralized Population-Based Training (PBT) algorithm that allows us to massively amplify the exploration capabilities of deep reinforcement learning.
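A Population-Based Training step can be sketched generically: a poorly-performing agent periodically copies a stronger agent's weights and hyperparameters, then perturbs the hyperparameters to keep exploring. The dictionary layout and perturbation factors below are illustrative assumptions, not DexPBT's actual rules.

```python
import random


def pbt_exploit_explore(population, perturb=0.2):
    """One generic PBT step over a list of agent records, each a dict with
    'score', 'weights', and a 'lr' hyperparameter (all hypothetical names)."""
    ranked = sorted(population, key=lambda agent: agent["score"])
    worst, best = ranked[0], ranked[-1]
    # Exploit: the worst agent copies the best agent's weights.
    worst["weights"] = dict(best["weights"])
    # Explore: perturb the inherited hyperparameter up or down.
    worst["lr"] = best["lr"] * random.choice([1 - perturb, 1 + perturb])
    return population
```

In a decentralized variant like the one the summary mentions, each worker would perform such exchanges with peers asynchronously instead of through a central scheduler.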
arXiv Detail & Related papers (2023-05-20T07:25:27Z)
- HACMan: Learning Hybrid Actor-Critic Maps for 6D Non-Prehensile Manipulation [29.01984677695523]
We introduce Hybrid Actor-Critic Maps for Manipulation (HACMan), a reinforcement learning approach for 6D non-prehensile manipulation of objects.
We evaluate HACMan on a 6D object pose alignment task in both simulation and in the real world.
Compared to alternative action representations, HACMan achieves a success rate more than three times higher than the best baseline.
arXiv Detail & Related papers (2023-05-06T05:55:27Z)
- Decoupling Skill Learning from Robotic Control for Generalizable Object Manipulation [35.34044822433743]
Recent works in robotic manipulation have shown potential for tackling a range of tasks, but generalization remains limited; we conjecture that this is due to the high-dimensional action space of joint control.
In this paper, we take an alternative approach and separate the task of learning 'what to do' from 'how to do it'.
The whole-body robotic kinematic control is optimized to execute the high-dimensional joint motion to reach the goals in the workspace.
arXiv Detail & Related papers (2023-03-07T16:31:13Z)
- Reactive Human-to-Robot Handovers of Arbitrary Objects [57.845894608577495]
We present a vision-based system that enables human-to-robot handovers of unknown objects.
Our approach combines closed-loop motion planning with real-time, temporally-consistent grasp generation.
We demonstrate the generalizability, usability, and robustness of our approach on a novel benchmark set of 26 diverse household objects.
arXiv Detail & Related papers (2020-11-17T21:52:22Z)
- Learning Dexterous Grasping with Object-Centric Visual Affordances [86.49357517864937]
Dexterous robotic hands are appealing for their agility and human-like morphology.
We introduce an approach for learning dexterous grasping.
Our key idea is to embed an object-centric visual affordance model within a deep reinforcement learning loop.
arXiv Detail & Related papers (2020-08-07T17:34:28Z)
- Physics-Based Dexterous Manipulations with Estimated Hand Poses and Residual Reinforcement Learning [52.37106940303246]
We learn a model that maps noisy input hand poses to target virtual poses.
The agent is trained in a residual setting by using a model-free hybrid RL+IL approach.
We test our framework in two applications that use hand pose estimates for dexterous manipulations: hand-object interactions in VR and hand-object motion reconstruction in-the-wild.
arXiv Detail & Related papers (2020-08-07T17:34:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.