SuctionNet-1Billion: A Large-Scale Benchmark for Suction Grasping
- URL: http://arxiv.org/abs/2103.12311v1
- Date: Tue, 23 Mar 2021 05:02:52 GMT
- Title: SuctionNet-1Billion: A Large-Scale Benchmark for Suction Grasping
- Authors: Hanwen Cao, Hao-Shu Fang, Wenhai Liu, Cewu Lu
- Abstract summary: We propose a new physical model to analytically evaluate the seal formation and wrench resistance of a suction grasp.
A two-step methodology is adopted to generate annotations on a large-scale dataset collected in real-world cluttered scenarios.
A standard online evaluation system is proposed to evaluate suction poses in continuous operation space.
- Score: 47.221326169627666
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Suction is an important solution for the longstanding robotic grasping
problem. Compared with other kinds of grasping, suction grasping is easier to
represent and often more reliable in practice. Though preferred in many
scenarios, it is not fully investigated and lacks sufficient training data and
evaluation benchmarks. To address this, firstly, we propose a new physical
model to analytically evaluate the seal formation and wrench resistance of a
suction grasp, which are two key aspects of grasp success. Secondly, a
two-step methodology is adopted to generate annotations on a large-scale
dataset collected in real-world cluttered scenarios. Thirdly, a standard online
evaluation system is proposed to evaluate suction poses in continuous operation
space, which can benchmark different algorithms fairly without the need for
exhaustive labeling. Real-robot experiments are conducted to show that our
annotations align well with the real world. Meanwhile, we propose a method to
predict numerous suction poses from an RGB-D image of a cluttered scene and
demonstrate its superiority over several previous methods. Result analyses
are further provided to help readers better understand the challenges in this
area. Data and source code are publicly available at www.graspnet.net.
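The analytic model itself is not reproduced in this listing, but a minimal sketch of the two checks it performs, seal formation and wrench resistance, might look as follows; the function name, the planarity proxy for seal quality, and every threshold are illustrative assumptions rather than the benchmark's actual formulation.

```python
import numpy as np

def score_suction_pose(points, center, normal, cup_radius=0.01,
                       seal_tol=0.002, suction_force=15.0,
                       max_torque=0.5, mass=0.5, g=9.81):
    """Toy pass/fail scoring of a suction pose on a local point-cloud patch.

    Illustrative only: the benchmark's analytic seal/wrench model is more
    detailed; every threshold here is a made-up assumption.
    points: (N, 3) scene points near the pose; center, normal: the pose.
    """
    normal = normal / np.linalg.norm(normal)

    # Seal-formation proxy: points under the cup rim should lie close to
    # the tangent plane at `center`, otherwise air leaks break the seal.
    offsets = points - center
    heights = offsets @ normal
    radial = np.linalg.norm(offsets - np.outer(heights, normal), axis=1)
    rim = (radial > 0.8 * cup_radius) & (radial < 1.2 * cup_radius)
    if rim.sum() < 8:
        return 0.0  # too little surface under the rim to seal at all
    seal_ok = np.abs(heights[rim]).max() < seal_tol

    # Wrench-resistance proxy: suction must support the object's weight,
    # and the gravity torque about the contact must stay within a limit.
    gravity = np.array([0.0, 0.0, -mass * g])
    lift = suction_force * max(0.0, float(normal[2]))  # upward component
    lever = points.mean(axis=0) - center  # crude center-of-mass proxy
    torque = np.linalg.norm(np.cross(lever, gravity))
    wrench_ok = lift >= mass * g and torque <= max_torque
    return float(seal_ok and wrench_ok)
```

The benchmark's real evaluation is analytic and per-pose over dense candidate sets; see the code at www.graspnet.net for the authors' implementation.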
Related papers
- Graspness Discovery in Clutters for Fast and Accurate Grasp Detection [57.81325062171676]
"graspness" is a quality based on geometry cues that distinguishes graspable areas in cluttered scenes.
We develop a neural network, the cascaded graspness model, to approximate the search process.
Experiments on a large-scale benchmark, GraspNet-1Billion, show that our method outperforms prior art by a large margin (a toy geometry-cue sketch follows this entry).
arXiv Detail & Related papers (2024-06-17T02:06:47Z)
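As a toy illustration of scoring by geometry cues (not the paper's learned cascaded graspness model), one can rate each point of a cloud by how flat its neighborhood is; the neighborhood size `k` and the flatness-to-score mapping are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def flatness_graspness(points, k=16):
    """Score each point by local planarity via PCA of its k-neighborhood.

    A toy geometry cue, not the paper's cascaded graspness model:
    flat, well-supported patches score near 1, edges and clutter near 0.
    points: (N, 3) array; returns (N,) scores in [0, 1].
    """
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)           # (N, k) neighbor indices
    scores = np.empty(len(points))
    for i, nb in enumerate(idx):
        patch = points[nb] - points[nb].mean(axis=0)
        # Smallest-eigenvalue share of the covariance ~ surface curvature.
        eigvals = np.linalg.eigvalsh(patch.T @ patch / len(nb))
        curvature = eigvals[0] / max(eigvals.sum(), 1e-12)
        scores[i] = 1.0 - 3.0 * curvature      # curvature <= 1/3 by construction
    return np.clip(scores, 0.0, 1.0)
```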
- Embarrassingly Simple Scribble Supervision for 3D Medical Segmentation [0.8391490466934672]
Scribble-supervised learning emerges as a possible solution to this challenge, promising a reduction in annotation efforts when creating large-scale datasets.
We propose a benchmark consisting of seven datasets covering a diverse set of anatomies and pathologies imaged with varying modalities.
Our evaluation using nnU-Net reveals that while most existing methods suffer from a lack of generalization, the proposed approach consistently delivers state-of-the-art performance.
arXiv Detail & Related papers (2024-03-19T15:41:16Z)
- Domain Adaptive Synapse Detection with Weak Point Annotations [63.97144211520869]
We present AdaSyn, a framework for domain adaptive synapse detection with weak point annotations.
In the WASPSYN challenge at ISBI 2023, our method ranked first.
arXiv Detail & Related papers (2023-08-31T05:05:53Z)
- The Trade-off between Universality and Label Efficiency of Representations from Contrastive Learning [32.15608637930748]
We show that there exists a trade-off between the two desiderata so that one may not be able to achieve both simultaneously.
We provide analysis using a theoretical data model and show that, while more diverse pre-training data yield more diverse features for different tasks, they put less emphasis on task-specific features.
arXiv Detail & Related papers (2023-02-28T22:14:33Z)
- Benchmarking Deep Models for Salient Object Detection [67.07247772280212]
We construct a general SALient Object Detection (SALOD) benchmark to conduct a comprehensive comparison among several representative SOD methods.
In the above experiments, we find that existing loss functions are usually specialized for some metrics but report inferior results on others.
We propose a novel Edge-Aware (EA) loss that promotes deep networks to learn more discriminative features by integrating both pixel- and image-level supervision signals (a hedged sketch follows this entry).
arXiv Detail & Related papers (2022-02-07T03:43:16Z)
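The Edge-Aware loss above is described as integrating pixel- and image-level supervision. A hedged sketch in that spirit (not the paper's exact formulation; the edge up-weighting and the foreground-ratio image term are assumptions) could be:

```python
import torch
import torch.nn.functional as F

def edge_aware_loss(pred_logits, target, edge_weight=4.0, image_weight=0.5):
    """Toy loss mixing pixel- and image-level supervision signals.

    pred_logits, target: (B, 1, H, W) tensors; target is a {0, 1} mask.
    """
    # Pixel-level term: BCE, up-weighted near ground-truth mask edges.
    pad = F.pad(target, (1, 1, 1, 1), mode="replicate")
    # A pixel is an "edge" if any 4-neighbor differs from it.
    edge = ((pad[..., 1:-1, :-2] != target) |
            (pad[..., 1:-1, 2:]  != target) |
            (pad[..., :-2, 1:-1] != target) |
            (pad[..., 2:, 1:-1]  != target)).float()
    weights = 1.0 + edge_weight * edge
    pixel_loss = F.binary_cross_entropy_with_logits(
        pred_logits, target, weight=weights)
    # Image-level term: match predicted vs. true foreground ratio per image.
    pred_ratio = torch.sigmoid(pred_logits).mean(dim=(1, 2, 3))
    true_ratio = target.mean(dim=(1, 2, 3))
    image_loss = F.l1_loss(pred_ratio, true_ratio)
    return pixel_loss + image_weight * image_loss
```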
- Occlusion-Robust Object Pose Estimation with Holistic Representation [42.27081423489484]
State-of-the-art (SOTA) object pose estimators take a two-stage approach.
We develop a novel occlude-and-blackout batch augmentation technique (a minimal sketch follows this entry).
We also develop a multi-precision supervision architecture to encourage holistic pose representation learning.
arXiv Detail & Related papers (2021-10-22T08:00:26Z)
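A minimal sketch of an occlude-and-blackout-style batch augmentation (the rectangle-blackout policy here is an assumption; the paper's exact occluder source and schedule are not reproduced):

```python
import torch

def occlude_and_blackout(batch, max_frac=0.3, p=0.5, generator=None):
    """Randomly black out a rectangle in each image of a batch.

    A stand-in for the paper's occlude-and-blackout augmentation, kept
    deliberately simple. batch: (B, C, H, W) float tensor.
    """
    B, C, H, W = batch.shape
    out = batch.clone()
    for i in range(B):
        if torch.rand(1, generator=generator).item() > p:
            continue  # leave this image untouched
        h = int(torch.randint(1, max(2, int(H * max_frac)), (1,), generator=generator))
        w = int(torch.randint(1, max(2, int(W * max_frac)), (1,), generator=generator))
        y = int(torch.randint(0, H - h + 1, (1,), generator=generator))
        x = int(torch.randint(0, W - w + 1, (1,), generator=generator))
        out[i, :, y:y + h, x:x + w] = 0.0  # black out the region
    return out
```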
- Towards Good Practices for Efficiently Annotating Large-Scale Image Classification Datasets [90.61266099147053]
We investigate efficient annotation strategies for collecting multi-class classification labels for a large collection of images.
We propose modifications and best practices aimed at minimizing human labeling effort.
Simulated experiments on a 125k-image subset of ImageNet100 show that it can be annotated to 80% top-1 accuracy with 0.35 annotations per image on average (a back-of-envelope check follows this entry).
arXiv Detail & Related papers (2021-04-26T16:29:32Z)
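A back-of-envelope check of the labeling cost implied by the entry above, using only the numbers in the summary:

```python
# Reported efficiency: 0.35 annotations per image on a 125k-image subset.
num_images = 125_000
annotations_per_image = 0.35
print(num_images * annotations_per_image)  # 43750.0 human labels
print(num_images * 1.0)                    # 125000.0 for one label per image
```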
- DGSAC: Density Guided Sampling and Consensus [4.808421423598809]
Kernel Residual Density is a key differentiator between inliers and outliers.
We propose two model-selection algorithms: one based on an optimal quadratic program and one greedy.
We evaluate our method on a wide variety of tasks, such as planar segmentation, motion segmentation, vanishing point estimation, plane fitting to 3D point clouds, and line and circle fitting (a vanilla consensus baseline is sketched below).
arXiv Detail & Related papers (2020-06-03T17:42:53Z)
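For context on consensus-based model fitting, a vanilla RANSAC line fit is sketched below; DGSAC itself replaces this uniform hypothesis sampling with density-guided sampling and adds a model-selection step, neither of which is reproduced here.

```python
import numpy as np

def ransac_line(points, iters=500, tol=0.05, seed=None):
    """Vanilla RANSAC line fit on 2D points; returns the best inlier mask."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        n = np.array([-d[1], d[0]])        # normal to the candidate line
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                       # degenerate sample, resample
        n /= norm
        residuals = np.abs((points - p) @ n)
        inliers = residuals < tol          # consensus set of this hypothesis
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```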
- Learning multiview 3D point cloud registration [74.39499501822682]
We present a novel, end-to-end learnable, multiview 3D point cloud registration algorithm.
Our approach outperforms the state of the art by a significant margin, while being end-to-end trainable and computationally less costly (a classical rigid-alignment baseline is sketched after this entry).
arXiv Detail & Related papers (2020-01-15T03:42:14Z)
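For reference, the classical closed-form rigid alignment of two corresponding 3D point sets (the Kabsch solution) is sketched below; the paper learns correspondences and registers many views jointly, which this single-pair baseline does not attempt.

```python
import numpy as np

def kabsch(src, dst):
    """Closed-form rigid transform (R, t) aligning src to dst.

    src, dst: (N, 3) arrays of corresponding points.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```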
This list is automatically generated from the titles and abstracts of the papers on this site.