Hybrid Physical Metric For 6-DoF Grasp Pose Detection
- URL: http://arxiv.org/abs/2206.11141v1
- Date: Wed, 22 Jun 2022 14:35:48 GMT
- Title: Hybrid Physical Metric For 6-DoF Grasp Pose Detection
- Authors: Yuhao Lu, Beixing Deng, Zhenyu Wang, Peiyuan Zhi, Yali Li, Shengjin Wang
- Abstract summary: We propose a hybrid physical metric to generate elaborate confidence scores for 6-DoF grasp pose detection.
To learn the new confidence scores effectively, we design a multi-resolution network called Flatness Gravity Collision GraspNet.
Our method achieves 90.5% success rate in real-world cluttered scenes.
- Score: 46.84694505427047
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: 6-DoF grasp pose detection for multiple grasps and multiple objects
is a challenging task in the field of intelligent robotics. To imitate the human
reasoning ability for grasping objects, data-driven methods are widely studied. With the introduction
of large-scale datasets, we discover that a single physical metric usually
generates several discrete levels of grasp confidence scores, which cannot
finely distinguish millions of grasp poses and leads to inaccurate prediction
results. In this paper, we propose a hybrid physical metric to solve this
evaluation insufficiency. First, we define a novel metric based on the
force-closure metric, supplemented by the measurement of the object flatness,
gravity and collision. Second, we leverage this hybrid physical metric to
generate elaborate confidence scores. Third, to learn the new confidence scores
effectively, we design a multi-resolution network called Flatness Gravity
Collision GraspNet (FGC-GraspNet). FGC-GraspNet proposes a multi-resolution
features learning architecture for multiple tasks and introduces a new joint
loss function that enhances the average precision of the grasp detection. The
network evaluation and adequate real robot experiments demonstrate the
effectiveness of our hybrid physical metric and FGC-GraspNet. Our method
achieves a 90.5% success rate in real-world cluttered scenes. Our code is
available at https://github.com/luyh20/FGC-GraspNet.
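The abstract describes refining a coarse force-closure score with flatness, gravity, and collision measurements, but does not state the exact combination rule here. The following is a minimal illustrative sketch of how such a hybrid confidence score might be assembled; the weights, value ranges, and combination formula are hypothetical assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of a hybrid physical grasp-confidence score.
# The weights and combination rule are illustrative assumptions only,
# not the exact formulation used by FGC-GraspNet.
from dataclasses import dataclass


@dataclass
class GraspMetrics:
    force_closure: float  # base grasp quality in [0, 1], higher is better
    flatness: float       # contact-surface flatness in [0, 1]
    gravity: float        # resistance to gravity-induced slip in [0, 1]
    collision_free: bool  # True if the gripper pose avoids the scene


def hybrid_confidence(m: GraspMetrics,
                      w_flat: float = 0.3,
                      w_grav: float = 0.2) -> float:
    """Refine a coarse force-closure score with flatness and gravity
    terms, and zero out grasps that collide with the scene."""
    if not m.collision_free:
        return 0.0  # colliding grasps receive zero confidence
    # Flatness/gravity act as multiplicative corrections around 0.5,
    # producing a continuous score instead of a few discrete levels.
    score = m.force_closure * (1.0
                               + w_flat * (m.flatness - 0.5)
                               + w_grav * (m.gravity - 0.5))
    return max(0.0, min(1.0, score))


# Two grasps with identical force-closure quality are now separated by
# their physical plausibility (flat, gravity-stable contact vs. not).
good = hybrid_confidence(GraspMetrics(0.8, 0.9, 0.9, True))
poor = hybrid_confidence(GraspMetrics(0.8, 0.1, 0.1, True))
print(good > poor)  # True
```

The point of the sketch is the abstract's motivation: a single physical metric collapses millions of grasp poses into a few discrete confidence levels, while supplementary terms spread them over a finer-grained continuous scale.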
Related papers
- Graspness Discovery in Clutters for Fast and Accurate Grasp Detection [57.81325062171676]
"graspness" is a quality based on geometry cues that distinguishes graspable areas in cluttered scenes.
We develop a neural network named cascaded graspness model to approximate the searching process.
Experiments on a large-scale benchmark, GraspNet-1Billion, show that our method outperforms previous arts by a large margin.
arXiv Detail & Related papers (2024-06-17T02:06:47Z)
- A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks [52.09243852066406]
Adversarial Converging Time Score (ACTS) measures the converging time as an adversarial robustness metric.
We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2023-10-10T09:39:38Z)
- Probabilistic MIMO U-Net: Efficient and Accurate Uncertainty Estimation for Pixel-wise Regression [1.4528189330418977]
Uncertainty estimation in machine learning is paramount for enhancing the reliability and interpretability of predictive models.
We present an adaptation of the Multiple-Input Multiple-Output (MIMO) framework for pixel-wise regression tasks.
arXiv Detail & Related papers (2023-08-14T22:08:28Z)
- DMFC-GraspNet: Differentiable Multi-Fingered Robotic Grasp Generation in Cluttered Scenes [22.835683657191936]
Multi-fingered robotic grasping can potentially perform complex object manipulation.
Current techniques for multi-fingered robotic grasping frequently predict only a single grasp per inference.
This paper proposes a differentiable multi-fingered grasp generation network (DMFC-GraspNet) with three main contributions to address this challenge.
arXiv Detail & Related papers (2023-08-01T11:21:07Z)
- MetaGraspNet: A Large-Scale Benchmark Dataset for Vision-driven Robotic Grasping via Physics-based Metaverse Synthesis [78.26022688167133]
We present a large-scale benchmark dataset for vision-driven robotic grasping via physics-based metaverse synthesis.
The proposed dataset contains 100,000 images and 25 different object types.
We also propose a new layout-weighted performance metric alongside the dataset for evaluating object detection and segmentation performance.
arXiv Detail & Related papers (2021-12-29T17:23:24Z)
- Learning suction graspability considering grasp quality and robot reachability for bin-picking [4.317666242093779]
We propose an intuitive geometric analytic-based grasp quality evaluation metric.
We further incorporate a reachability evaluation metric.
Experiment results show that our intuitive grasp quality evaluation metric is competitive with a physically-inspired metric.
arXiv Detail & Related papers (2021-11-04T00:55:42Z)
- Multi-FinGAN: Generative Coarse-To-Fine Sampling of Multi-Finger Grasps [46.316638161863025]
We present Multi-FinGAN, a fast generative multi-finger grasp sampling method that synthesizes high quality grasps directly from RGB-D images in about a second.
We experimentally validate and benchmark our method against a standard grasp-sampling method on 790 grasps in simulation and 20 grasps on a real Franka Emika Panda.
Remarkably, our approach is up to 20-30 times faster than the baseline, a significant improvement that opens the door to feedback-based grasp re-planning and task informative grasping.
arXiv Detail & Related papers (2020-12-17T16:08:18Z)
- Fast Uncertainty Quantification for Deep Object Pose Estimation [91.09217713805337]
Deep learning-based object pose estimators are often unreliable and overconfident.
In this work, we propose a simple, efficient, and plug-and-play UQ method for 6-DoF object pose estimation.
arXiv Detail & Related papers (2020-11-16T06:51:55Z)
- Learning a Unified Sample Weighting Network for Object Detection [113.98404690619982]
Region sampling or weighting is critical to the success of modern region-based object detectors.
We argue that sample weighting should be data-dependent and task-dependent.
We propose a unified sample weighting network to predict a sample's task weights.
arXiv Detail & Related papers (2020-06-11T16:19:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.