DexGraspNet: A Large-Scale Robotic Dexterous Grasp Dataset for General
Objects Based on Simulation
- URL: http://arxiv.org/abs/2210.02697v1
- Date: Thu, 6 Oct 2022 06:09:16 GMT
- Title: DexGraspNet: A Large-Scale Robotic Dexterous Grasp Dataset for General
Objects Based on Simulation
- Authors: Ruicheng Wang, Jialiang Zhang, Jiayi Chen, Yinzhen Xu, Puhao Li,
Tengyu Liu, He Wang
- Abstract summary: We present a large-scale simulated dataset, DexGraspNet, for robotic dexterous grasping.
We use ShadowHand, a dexterous gripper commonly seen in robotics, to generate 1.32 million grasps for 5355 objects.
Compared to the previous dataset generated by GraspIt!, our dataset has not only more objects and grasps, but also higher diversity and quality.
- Score: 10.783992625475081
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Object grasping using dexterous hands is a crucial yet challenging task for
robotic dexterous manipulation. Compared with grasping using parallel grippers,
dexterous grasping remains under-explored, partly owing to the lack of a
large-scale dataset. In this work, we present a large-scale
simulated dataset, DexGraspNet, for robotic dexterous grasping, along with a
highly efficient synthesis method for diverse dexterous grasping synthesis.
Leveraging a highly accelerated differentiable force closure estimator, we are
able, for the first time, to synthesize stable and diverse grasps efficiently
and robustly. Using ShadowHand, a dexterous gripper commonly seen in robotics,
we generate 1.32 million grasps for 5355 objects, covering more
than 133 object categories and containing more than 200 diverse grasps for each
object instance, with all grasps having been validated by the physics
simulator. Compared to the previous dataset generated by GraspIt!, our dataset
has not only more objects and grasps, but also higher diversity and quality.
Through cross-dataset experiments, we show that training several dexterous
grasp synthesis algorithms on our dataset significantly outperforms training
them on the previous one, demonstrating the scale and diversity of DexGraspNet.
We will release the data and tools upon acceptance.
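The abstract attributes the efficiency of grasp synthesis to a highly accelerated differentiable force closure estimator. The listing gives no code, so the following is only a minimal PyTorch sketch of the relaxed force-closure error used in this line of work: the net wrench produced when every contact pushes along its surface normal, which is small when the contact normals can cancel each other's net force and torque, and which is differentiable in the contact points and normals. The function name, tensor shapes, and batching are illustrative assumptions, not the authors' implementation.

```python
import torch


def force_closure_error(contact_pts: torch.Tensor,
                        contact_normals: torch.Tensor) -> torch.Tensor:
    """Relaxed, differentiable force-closure error ||G c||_2 (illustrative sketch).

    contact_pts:     (B, n, 3) contact positions on the object surface
    contact_normals: (B, n, 3) unit inward contact normals
    Returns a (B,) tensor; lower values mean closer to force closure.
    """
    B, n, _ = contact_pts.shape

    # Skew-symmetric matrices [x_i]_x so that [x]_x @ f = x x f
    # (torque of a force f applied at point x).
    x, y, z = contact_pts.unbind(-1)                  # each (B, n)
    zeros = torch.zeros_like(x)
    skew = torch.stack([
        torch.stack([zeros, -z,    y    ], dim=-1),
        torch.stack([z,     zeros, -x   ], dim=-1),
        torch.stack([-y,    x,     zeros], dim=-1),
    ], dim=-2)                                        # (B, n, 3, 3)

    # Grasp map G = [ I ... I ; [x_1]_x ... [x_n]_x ], shape (B, 6, 3n).
    eye = torch.eye(3, device=contact_pts.device).expand(B, n, 3, 3)
    G = torch.cat([eye, skew], dim=-2)                # (B, n, 6, 3)
    G = G.permute(0, 2, 1, 3).reshape(B, 6, 3 * n)

    # Stack the normals and compute the net wrench they would generate.
    c = contact_normals.reshape(B, 3 * n, 1)
    wrench = torch.bmm(G, c).squeeze(-1)              # (B, 6)
    return wrench.norm(dim=-1)


if __name__ == "__main__":
    # Two antipodal contacts on a unit sphere: normals cancel, so the error is ~0.
    pts = torch.tensor([[[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]]])
    nrm = torch.tensor([[[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0]]])
    print(force_closure_error(pts, nrm))              # tensor([0.])
```

Because this error is differentiable, it can in principle be minimized by gradient descent over hand pose parameters, together with surface-contact, penetration, and joint-limit terms, which is the role such an estimator plays in large-scale grasp synthesis.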
Related papers
- DexMimicGen: Automated Data Generation for Bimanual Dexterous Manipulation via Imitation Learning [42.88605563822155]
We present a large-scale automated data generation system that synthesizes trajectories from human demonstrations for humanoid robots with dexterous hands.
We generate 21K demos across these tasks from just 60 source human demos.
We also present a real-to-sim-to-real pipeline and deploy it on a real-world humanoid can sorting task.
arXiv Detail & Related papers (2024-10-31T17:48:45Z)
- Rethinking Transformers Pre-training for Multi-Spectral Satellite Imagery [78.43828998065071]
Recent advances in unsupervised learning have demonstrated the ability of large vision models to achieve promising results on downstream tasks.
Such pre-training techniques have also been explored recently in the remote sensing domain due to the availability of large amounts of unlabelled data.
In this paper, we revisit transformer pre-training and leverage multi-scale information that is effectively utilized with multiple modalities.
arXiv Detail & Related papers (2024-03-08T16:18:04Z)
- AGILE: Approach-based Grasp Inference Learned from Element Decomposition [2.812395851874055]
Humans can grasp objects by taking into account hand-object positioning information.
This work proposes a method that enables a robot manipulator to learn the same, grasping objects in an optimal way.
arXiv Detail & Related papers (2024-02-02T10:47:08Z)
- Towards Precise Model-free Robotic Grasping with Sim-to-Real Transfer Learning [11.470950882435927]
We present an end-to-end robotic grasping network.
In physical robotic experiments, our grasping framework grasped single known objects and novel complex-shaped household objects with a success rate of 90.91%.
The proposed grasping framework outperformed two state-of-the-art methods in both known and unknown object robotic grasping.
arXiv Detail & Related papers (2023-01-28T16:57:19Z)
- DexTransfer: Real World Multi-fingered Dexterous Grasping with Minimal Human Demonstrations [51.87067543670535]
We propose a robot-learning system that can take a small number of human demonstrations and learn to grasp unseen object poses.
We train a dexterous grasping policy that takes the point clouds of the object as input and predicts continuous actions to grasp objects from different initial robot states.
The policy learned from our dataset can generalize well on unseen object poses in both simulation and the real world.
arXiv Detail & Related papers (2022-09-28T17:51:49Z)
- MetaGraspNet: A Large-Scale Benchmark Dataset for Scene-Aware Ambidextrous Bin Picking via Physics-based Metaverse Synthesis [72.85526892440251]
We introduce MetaGraspNet, a large-scale photo-realistic bin picking dataset constructed via physics-based metaverse synthesis.
The proposed dataset contains 217k RGBD images across 82 different article types, with full annotations for object detection, amodal perception, keypoint detection, manipulation order and ambidextrous grasp labels for a parallel-jaw and vacuum gripper.
We also provide a real dataset consisting of over 2.3k fully annotated high-quality RGBD images, divided into 5 levels of difficulty, plus an unseen-object set for evaluating different object and layout properties.
arXiv Detail & Related papers (2022-08-08T08:15:34Z)
- DA$^2$ Dataset: Toward Dexterity-Aware Dual-Arm Grasping [58.48762955493929]
DA$^2$ is the first large-scale dual-arm dexterity-aware dataset for the generation of optimal bimanual grasping pairs for arbitrary large objects.
The dataset contains about 9M pairs of parallel-jaw grasps, generated from more than 6000 objects.
arXiv Detail & Related papers (2022-07-31T10:02:27Z)
- MetaGraspNet: A Large-Scale Benchmark Dataset for Vision-driven Robotic Grasping via Physics-based Metaverse Synthesis [78.26022688167133]
We present a large-scale benchmark dataset for vision-driven robotic grasping via physics-based metaverse synthesis.
The proposed dataset contains 100,000 images and 25 different object types.
We also propose a new layout-weighted performance metric alongside the dataset for evaluating object detection and segmentation performance.
arXiv Detail & Related papers (2021-12-29T17:23:24Z)
- ACRONYM: A Large-Scale Grasp Dataset Based on Simulation [64.37675024289857]
ACRONYM is a dataset for robot grasp planning based on physics simulation.
The dataset contains 17.7M parallel-jaw grasps, spanning 8872 objects from 262 different categories.
We show the value of this large and diverse dataset by using it to train two state-of-the-art learning-based grasp planning algorithms.
arXiv Detail & Related papers (2020-11-18T23:24:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.