DexGrasp Anything: Towards Universal Robotic Dexterous Grasping with Physics Awareness
- URL: http://arxiv.org/abs/2503.08257v2
- Date: Sun, 16 Mar 2025 13:05:46 GMT
- Authors: Yiming Zhong, Qi Jiang, Jingyi Yu, Yuexin Ma
- Abstract summary: A dexterous hand capable of grasping any object is essential for the development of general-purpose embodied robots. We introduce DexGrasp Anything, a method that integrates physical constraints into the training and sampling phases of a diffusion-based generative model. We present a new dexterous grasping dataset containing over 3.4 million diverse grasping poses for more than 15k different objects.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A dexterous hand capable of grasping any object is essential for the development of general-purpose embodied intelligent robots. However, due to the high degree of freedom in dexterous hands and the vast diversity of objects, generating high-quality, usable grasping poses in a robust manner is a significant challenge. In this paper, we introduce DexGrasp Anything, a method that effectively integrates physical constraints into both the training and sampling phases of a diffusion-based generative model, achieving state-of-the-art performance across nearly all open datasets. Additionally, we present a new dexterous grasping dataset containing over 3.4 million diverse grasping poses for more than 15k different objects, demonstrating its potential to advance universal dexterous grasping. The code of our method and our dataset will be publicly released soon.
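The abstract's core idea, injecting physical constraints into the sampling phase of a diffusion model, can be illustrated with a guidance-style sketch. Everything below is a toy illustration, not the paper's actual method: the denoiser, the penalty term, and all parameter values are stand-ins, and the real constraint would be a differentiable penetration/contact term computed from hand-object signed distances.

```python
import numpy as np

def penetration_penalty_grad(x):
    """Gradient of a toy penalty P(x) = max(0, -x[0])**2.

    A stand-in for a differentiable physics term (e.g. penalizing
    hand-object penetration); the paper's actual constraints are
    not specified here.
    """
    g = np.zeros_like(x)
    g[0] = -2.0 * max(-x[0], 0.0)
    return g

def guided_reverse_diffusion(x_T, denoise_fn, steps=50,
                             guidance_weight=0.5, seed=0):
    """Toy DDPM-style reverse loop with a physics-guidance term.

    Each step first applies the (stand-in) denoiser, then nudges the
    sample down the gradient of the physics penalty -- the
    'sampling-phase constraint' idea in miniature.
    """
    rng = np.random.default_rng(seed)
    x = x_T.copy()
    for t in range(steps, 0, -1):
        noise_scale = 0.1 * t / steps
        x = denoise_fn(x, t) + noise_scale * rng.standard_normal(x.shape)
        x = x - guidance_weight * penetration_penalty_grad(x)  # physics guidance
    return x

# Dummy denoiser: pulls samples toward a "mean grasp pose" at the origin.
denoise = lambda x, t: 0.9 * x

# The guided sample ends up respecting the constraint (x[0] >= 0),
# whereas an unguided chain started at x[0] = -3 need not.
x0 = guided_reverse_diffusion(np.array([-3.0, 2.0]), denoise)
```

Training-phase constraints would analogously add the penalty to the diffusion loss; the sketch above only shows the sampling side.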
Related papers
- AgiBot World Colosseo: A Large-scale Manipulation Platform for Scalable and Intelligent Embodied Systems [88.05152114775498]
AgiBot World is a large-scale platform comprising over 1 million trajectories across 217 tasks in five deployment scenarios.
AgiBot World guarantees high-quality and diverse data distribution.
GO-1 exhibits exceptional capability in real-world dexterous and long-horizon tasks.
arXiv Detail & Related papers (2025-03-09T15:40:29Z)
- DexterityGen: Foundation Controller for Unprecedented Dexterity [67.15251368211361]
Teaching robots dexterous manipulation skills, such as tool use, presents a significant challenge. Current approaches can be broadly categorized into two strategies: human teleoperation (for imitation learning) and sim-to-real reinforcement learning. We introduce DexterityGen, which uses RL to pretrain large-scale dexterous motion primitives, such as in-hand rotation or translation. In the real world, we use human teleoperation as a prompt to the controller to produce highly dexterous behavior.
arXiv Detail & Related papers (2025-02-06T18:49:35Z)
- Decomposed Vector-Quantized Variational Autoencoder for Human Grasp Generation [27.206656215734295]
We propose a novel Decomposed Vector-Quantized Variational Autoencoder (DVQ-VAE) to generate realistic human grasps.
The part-aware decomposed architecture enables more precise control of the interaction between each component of the hand and the object.
Our model achieved about a 14.1% relative improvement in the quality index compared to state-of-the-art methods on four widely adopted benchmarks.
arXiv Detail & Related papers (2024-07-19T06:41:16Z)
- GraspXL: Generating Grasping Motions for Diverse Objects at Scale [30.104108863264706]
We unify the generation of hand-object grasping motions across multiple motion objectives in a policy learning framework GraspXL.
Our policy trained with 58 objects can robustly synthesize diverse grasping motions for more than 500k unseen objects with a success rate of 82.2%.
Our framework can be deployed to different dexterous hands and work with reconstructed or generated objects.
arXiv Detail & Related papers (2024-03-28T17:57:27Z)
- LiveHPS: LiDAR-based Scene-level Human Pose and Shape Estimation in Free Environment [59.320414108383055]
We present LiveHPS, a novel single-LiDAR-based approach for scene-level human pose and shape estimation.
We propose a huge human motion dataset, named FreeMotion, which is collected in various scenarios with diverse human poses.
arXiv Detail & Related papers (2024-02-27T03:08:44Z)
- RealDex: Towards Human-like Grasping for Robotic Dexterous Hand [64.33746404551343]
We introduce RealDex, a pioneering dataset capturing authentic dexterous hand grasping motions infused with human behavioral patterns. RealDex holds immense promise in advancing humanoid robots toward automated perception, cognition, and manipulation in real-world scenarios.
arXiv Detail & Related papers (2024-02-21T14:59:46Z)
- UGG: Unified Generative Grasping [41.201337177738075]
Generation-based methods that generate grasping postures conditioned on the object can often produce diverse grasps.
We introduce a unified diffusion-based dexterous grasp generation model, dubbed UGG.
Our model achieves state-of-the-art dexterous grasping on the large-scale DexGraspNet dataset.
arXiv Detail & Related papers (2023-11-28T16:20:33Z)
- DexGraspNet: A Large-Scale Robotic Dexterous Grasp Dataset for General Objects Based on Simulation [10.783992625475081]
We present a large-scale simulated dataset, DexGraspNet, for robotic dexterous grasping.
We use the ShadowHand, a dexterous gripper commonly seen in robotics, to generate 1.32 million grasps for 5355 objects.
Compared to the previous dataset generated by GraspIt!, our dataset has not only more objects and grasps, but also higher diversity and quality.
arXiv Detail & Related papers (2022-10-06T06:09:16Z)
- DexTransfer: Real World Multi-fingered Dexterous Grasping with Minimal Human Demonstrations [51.87067543670535]
We propose a robot-learning system that can take a small number of human demonstrations and learn to grasp unseen object poses.
We train a dexterous grasping policy that takes the point clouds of the object as input and predicts continuous actions to grasp objects from different initial robot states.
The policy learned from our dataset can generalize well on unseen object poses in both simulation and the real world.
arXiv Detail & Related papers (2022-09-28T17:51:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.