REGRAD: A Large-Scale Relational Grasp Dataset for Safe and
Object-Specific Robotic Grasping in Clutter
- URL: http://arxiv.org/abs/2104.14118v1
- Date: Thu, 29 Apr 2021 05:31:21 GMT
- Title: REGRAD: A Large-Scale Relational Grasp Dataset for Safe and
Object-Specific Robotic Grasping in Clutter
- Authors: Hanbo Zhang, Deyu Yang, Han Wang, Binglei Zhao, Xuguang Lan, Nanning
Zheng
- Abstract summary: We present a new dataset named REGRAD to support the modeling of relationships among objects and grasps.
Our dataset is collected in the form of both 2D images and 3D point clouds.
Users are free to import their own object models to generate as much data as they want.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the impressive progress achieved in robust grasp detection,
robots are still not skilled at sophisticated grasping tasks (e.g. searching for
and grasping a specific object in clutter). Such tasks involve not only
grasping, but also comprehensive perception of the visual world (e.g. the
relationships between objects). Recently, advanced deep learning techniques
have provided a promising way to understand high-level visual concepts,
encouraging robotics researchers to explore solutions to such hard and
complicated problems. However, deep learning is usually data-hungry, and the
lack of data severely limits the performance of deep-learning-based algorithms.
In this paper, we present a new dataset named REGRAD to support the modeling of
relationships among objects and grasps. We collect annotations of object poses,
segmentations, grasps, and relationships in each image for comprehensive
perception of grasping. Our dataset is collected in the form of both 2D images
and 3D point clouds. Moreover, since all the data are generated automatically,
users are free to import their own object models to generate as much data as
they want. We have released our dataset and code. A video that demonstrates the
data generation process is also available.
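To make the annotation structure concrete, here is a minimal sketch of how one
sample with such annotations might be loaded. The directory layout, file names,
and JSON keys (poses, segmentation, grasps, relationships) are hypothetical
illustrations, not the dataset's actual format or API; consult the released
code for the real interface.

```python
# Minimal sketch of loading one REGRAD-style sample.
# The directory layout and annotation keys below are assumptions for
# illustration; the released dataset defines the real format.
import json
from pathlib import Path

import numpy as np
from PIL import Image


def load_sample(root: str, scene_id: str) -> dict:
    scene = Path(root) / scene_id
    # 2D data: an RGB image of the cluttered scene (assumed file name).
    rgb = np.asarray(Image.open(scene / "rgb.png"))
    # 3D data: an N x 3 point cloud (assumed to be stored as .npy).
    points = np.load(scene / "cloud.npy")
    # Per-scene annotations: object poses, segmentation masks, grasps,
    # and pairwise object relationships (assumed JSON schema).
    with open(scene / "annotations.json") as f:
        ann = json.load(f)
    return {
        "rgb": rgb,
        "points": points,
        "poses": ann["poses"],                  # per-object 6D poses
        "segmentation": ann["segmentation"],    # per-object masks
        "grasps": ann["grasps"],                # grasp candidates
        "relationships": ann["relationships"],  # e.g. (parent, child) pairs
    }


if __name__ == "__main__":
    sample = load_sample("REGRAD/train", "scene_0001")
    print(len(sample["grasps"]), "grasp candidates")
```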
Related papers
- Enhancing Generalizability of Representation Learning for Data-Efficient 3D Scene Understanding [50.448520056844885]
We propose a generative Bayesian network to produce diverse synthetic scenes with real-world patterns.
A series of experiments consistently demonstrates our method's superiority over existing state-of-the-art pre-training approaches.
arXiv Detail & Related papers (2024-06-17T07:43:53Z)
- MetaGraspNet: A Large-Scale Benchmark Dataset for Scene-Aware Ambidextrous Bin Picking via Physics-based Metaverse Synthesis [72.85526892440251]
We introduce MetaGraspNet, a large-scale photo-realistic bin picking dataset constructed via physics-based metaverse synthesis.
The proposed dataset contains 217k RGBD images across 82 different article types, with full annotations for object detection, amodal perception, keypoint detection, manipulation order and ambidextrous grasp labels for a parallel-jaw and vacuum gripper.
We also provide a real dataset consisting of over 2.3k fully annotated high-quality RGBD images, divided into 5 difficulty levels plus an unseen-object set for evaluating different object and layout properties.
arXiv Detail & Related papers (2022-08-08T08:15:34Z)
- A Review of Deep Learning Techniques for Markerless Human Motion on Synthetic Datasets [0.0]
Estimating human posture has recently gained increasing attention in the computer vision community.
We present a model that can predict the skeleton of an animation based solely on 2D images.
The implementation applies DeepLabCut to its own dataset to perform the necessary processing steps.
arXiv Detail & Related papers (2022-01-07T15:42:50Z)
- MetaGraspNet: A Large-Scale Benchmark Dataset for Vision-driven Robotic Grasping via Physics-based Metaverse Synthesis [78.26022688167133]
We present a large-scale benchmark dataset for vision-driven robotic grasping via physics-based metaverse synthesis.
The proposed dataset contains 100,000 images and 25 different object types.
We also propose a new layout-weighted performance metric alongside the dataset for evaluating object detection and segmentation performance (see the sketch after this list).
arXiv Detail & Related papers (2021-12-29T17:23:24Z)
- A Spacecraft Dataset for Detection, Segmentation and Parts Recognition [42.27081423489484]
In this paper, we release a dataset for spacecraft detection, instance segmentation and part recognition.
The main contribution of this work is the development of the dataset using images of space stations and satellites.
We also provide evaluations with state-of-the-art methods in object detection and instance segmentation as a benchmark for the dataset.
arXiv Detail & Related papers (2021-06-15T14:36:56Z)
- Salient Objects in Clutter [130.63976772770368]
This paper identifies and addresses a serious design bias of existing salient object detection (SOD) datasets.
This design bias has led to a saturation in performance for state-of-the-art SOD models when evaluated on existing datasets.
We propose a new high-quality dataset and update the previous saliency benchmark.
arXiv Detail & Related papers (2021-05-07T03:49:26Z)
- Unsupervised Learning of 3D Object Categories from Videos in the Wild [75.09720013151247]
We focus on learning a model from multiple views of a large collection of object instances.
We propose a new neural network design, called warp-conditioned ray embedding (WCR), which significantly improves reconstruction.
Our evaluation demonstrates performance improvements over several deep monocular reconstruction baselines on existing benchmarks.
arXiv Detail & Related papers (2021-03-30T17:57:01Z)
- Same Object, Different Grasps: Data and Semantic Knowledge for Task-Oriented Grasping [40.95315009714416]
The TaskGrasp dataset is more diverse in terms of both objects and tasks, and is an order of magnitude larger than previous datasets.
We present the GCNGrasp framework which uses the semantic knowledge of objects and tasks encoded in a knowledge graph to generalize to new object instances, classes and even new tasks.
We demonstrate that our dataset and model are applicable to the real world by executing task-oriented grasps with a real robot on unknown objects.
arXiv Detail & Related papers (2020-11-12T15:08:15Z)
- Generating synthetic photogrammetric data for training deep learning based 3D point cloud segmentation models [0.0]
At I/ITSEC 2019, the authors presented a fully-automated workflow to segment 3D photogrammetric point-clouds/meshes and extract object information.
The ultimate goal is to create realistic virtual environments and provide the necessary information for simulation.
arXiv Detail & Related papers (2020-08-21T18:50:42Z)
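For the layout-weighted metric mentioned in the MetaGraspNet (vision-driven)
entry above, here is a minimal sketch of one plausible form: a weighted mean of
per-layout scores in which harder scene layouts count more. The layout names,
weights, and function below are assumptions for illustration; the actual metric
is defined in that paper.

```python
# Minimal sketch of a layout-weighted metric: average per-layout scores
# (e.g. detection/segmentation mAP) with larger weights on harder layouts.
# The layout names and weights are illustrative assumptions, not the
# metric actually defined in the MetaGraspNet paper.
from typing import Dict


def layout_weighted_score(per_layout_scores: Dict[str, float],
                          layout_weights: Dict[str, float]) -> float:
    """Weighted mean of per-layout scores; weights need not sum to 1."""
    total_weight = sum(layout_weights[layout] for layout in per_layout_scores)
    weighted_sum = sum(score * layout_weights[layout]
                       for layout, score in per_layout_scores.items())
    return weighted_sum / total_weight


if __name__ == "__main__":
    # Hypothetical per-layout mAP values for one detector.
    scores = {"isolated": 0.92, "level_stack": 0.81, "dense_clutter": 0.64}
    # Harder layouts get higher weight (assumed scheme).
    weights = {"isolated": 1.0, "level_stack": 2.0, "dense_clutter": 3.0}
    print(f"layout-weighted mAP: {layout_weighted_score(scores, weights):.3f}")
```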
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.