DA$^2$ Dataset: Toward Dexterity-Aware Dual-Arm Grasping
- URL: http://arxiv.org/abs/2208.00408v1
- Date: Sun, 31 Jul 2022 10:02:27 GMT
- Title: DA$^2$ Dataset: Toward Dexterity-Aware Dual-Arm Grasping
- Authors: Guangyao Zhai, Yu Zheng, Ziwei Xu, Xin Kong, Yong Liu, Benjamin Busam,
Yi Ren, Nassir Navab, Zhengyou Zhang
- Abstract summary: DA$^2$ is the first large-scale dual-arm dexterity-aware dataset for the generation of optimal bimanual grasping pairs for arbitrary large objects.
The dataset contains about 9M pairs of parallel-jaw grasps, generated from more than 6000 objects.
- Score: 58.48762955493929
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce DA$^2$, the first large-scale dual-arm
dexterity-aware dataset for the generation of optimal bimanual grasping pairs
for arbitrary large objects. The dataset contains about 9M pairs of
parallel-jaw grasps, generated from more than 6000 objects and each labeled
with various grasp dexterity measures. In addition, we propose an end-to-end
dual-arm grasp evaluation model trained on the rendered scenes from this
dataset. We utilize the evaluation model as our baseline to show the value of
this novel and nontrivial dataset by both online analysis and real robot
experiments. All data and related code will be open-sourced at
https://sites.google.com/view/da2dataset.
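As a rough illustration of the kind of pairwise scoring such a dataset enables, the sketch below pairs two parallel-jaw grasps on a large object and assigns them a toy dexterity-style score. The inputs and the heuristic used here (grasp-center separation plus opposition of the approach directions) are assumptions made for illustration only; they are not the dexterity measures defined in the paper.

```python
import numpy as np

def pair_dexterity_score(center_a, approach_a, center_b, approach_b,
                         min_separation=0.10):
    """Toy dexterity-style score for a candidate dual-arm grasp pair.

    Hypothetical heuristic: reward grasp centers that are far enough apart
    for two arms to operate without crowding each other, and approach
    directions that are roughly opposed so the object is held in a stable span.
    """
    center_a, center_b = np.asarray(center_a, float), np.asarray(center_b, float)
    approach_a = np.asarray(approach_a, float) / np.linalg.norm(approach_a)
    approach_b = np.asarray(approach_b, float) / np.linalg.norm(approach_b)

    separation = np.linalg.norm(center_a - center_b)
    if separation < min_separation:          # arms would collide or crowd
        return 0.0
    opposition = 0.5 * (1.0 - float(approach_a @ approach_b))  # 1.0 when opposed
    return opposition * min(separation / 0.5, 1.0)             # saturate at 0.5 m

# Example: two grasps on opposite sides of a large box-like object.
score = pair_dexterity_score(
    center_a=[0.0, -0.25, 0.1], approach_a=[0.0,  1.0, 0.0],
    center_b=[0.0,  0.25, 0.1], approach_b=[0.0, -1.0, 0.0],
)
print(f"pair score: {score:.2f}")
```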
Related papers
- EarthView: A Large Scale Remote Sensing Dataset for Self-Supervision [72.84868704100595]
This paper presents a dataset specifically designed for self-supervision on remote sensing data, intended to enhance deep learning applications on Earth monitoring tasks.
The dataset spans 15 tera pixels of global remote-sensing data, combining imagery from a diverse range of sources, including NEON, Sentinel, and a novel release of 1m spatial resolution data from Satellogic.
Accompanying the dataset is EarthMAE, a tailored Masked Autoencoder developed to tackle the distinct challenges of remote sensing data.
arXiv Detail & Related papers (2025-01-14T13:42:22Z)
- AnySat: An Earth Observation Model for Any Resolutions, Scales, and Modalities [5.767156832161819]
We propose AnySat, a multimodal model based on joint embedding predictive architecture (JEPA) and resolution-adaptive spatial encoders.
To demonstrate the advantages of this unified approach, we compile GeoPlex, a collection of $5$ multimodal datasets.
We then train a single powerful model on these diverse datasets simultaneously.
arXiv Detail & Related papers (2024-12-18T18:11:53Z)
- Making Multi-Axis Gaussian Graphical Models Scalable to Millions of Samples and Features [0.30723404270319693]
We introduce a method that has $O(n^2)$ runtime and $O(n)$ space complexity, without assuming independence.
We demonstrate that our approach can be used on unprecedentedly large datasets, such as a real-world 1,000,000-cell scRNA-seq dataset.
arXiv Detail & Related papers (2024-07-29T11:15:25Z)
- Diffusion Models as Data Mining Tools [87.77999285241219]
This paper demonstrates how to use generative models trained for image synthesis as tools for visual data mining.
We show that after finetuning conditional diffusion models to synthesize images from a specific dataset, we can use these models to define a typicality measure.
This measure assesses how typical visual elements are for different data labels, such as geographic location, time stamps, semantic labels, or even the presence of a disease.
arXiv Detail & Related papers (2024-07-20T17:14:31Z)
- DAMEX: Dataset-aware Mixture-of-Experts for visual understanding of mixture-of-datasets [34.780870585656395]
We propose dataset-Aware Mixture-of-Experts, DAMEX.
We train the experts to become an 'expert' of a dataset by learning to route each dataset's tokens to its mapped expert.
Experiments on Universal Object-Detection Benchmark show that we outperform the existing state-of-the-art.
arXiv Detail & Related papers (2023-11-08T18:55:24Z)
- DexGraspNet: A Large-Scale Robotic Dexterous Grasp Dataset for General Objects Based on Simulation [10.783992625475081]
We present a large-scale simulated dataset, DexGraspNet, for robotic dexterous grasping.
We use ShadowHand, a dexterous gripper commonly seen in robotics, to generate 1.32 million grasps for 5355 objects.
Compared to the previous dataset generated by GraspIt!, our dataset has not only more objects and grasps, but also higher diversity and quality.
arXiv Detail & Related papers (2022-10-06T06:09:16Z)
- MSeg: A Composite Dataset for Multi-domain Semantic Segmentation [100.17755160696939]
We present MSeg, a composite dataset that unifies semantic segmentation datasets from different domains.
We reconcile the taxonomies of the constituent datasets and bring the pixel-level annotations into alignment by relabeling more than 220,000 object masks in more than 80,000 images.
A model trained on MSeg ranks first on the WildDash-v1 leaderboard for robust semantic segmentation, with no exposure to WildDash data during training.
arXiv Detail & Related papers (2021-12-27T16:16:35Z)
- ACRONYM: A Large-Scale Grasp Dataset Based on Simulation [64.37675024289857]
ACRONYM is a dataset for robot grasp planning based on physics simulation.
The dataset contains 17.7M parallel-jaw grasps, spanning 8872 objects from 262 different categories.
We show the value of this large and diverse dataset by using it to train two state-of-the-art learning-based grasp planning algorithms.
arXiv Detail & Related papers (2020-11-18T23:24:00Z)
- AutoSimulate: (Quickly) Learning Synthetic Data Generation [70.82315853981838]
We propose an efficient alternative for optimal synthetic data generation based on a novel differentiable approximation of the objective.
We demonstrate that the proposed method finds the optimal data distribution faster (up to $50\times$), with significantly reduced training data generation (up to $30\times$) and better accuracy ($+8.7\%$) on real-world test datasets than previous methods.
arXiv Detail & Related papers (2020-08-16T11:36:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.