Simultaneous Semantic and Collision Learning for 6-DoF Grasp Pose
Estimation
- URL: http://arxiv.org/abs/2108.02425v1
- Date: Thu, 5 Aug 2021 07:46:48 GMT
- Title: Simultaneous Semantic and Collision Learning for 6-DoF Grasp Pose
Estimation
- Authors: Yiming Li, Tao Kong, Ruihang Chu, Yifeng Li, Peng Wang and Lei Li
- Abstract summary: We formalize the 6-DoF grasp pose estimation as a simultaneous multi-task learning problem.
In a unified framework, we jointly predict the feasible 6-DoF grasp poses, instance semantic segmentation, and collision information.
Our model is evaluated on large-scale benchmarks as well as the real robot system.
- Score: 20.11811614166135
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Grasping in cluttered scenes has long been a major challenge for
robots, because it requires a thorough understanding of both the scene and the
objects in it. Previous works usually assume that the geometry of the
objects is available, or utilize a step-wise, multi-stage strategy to predict
the feasible 6-DoF grasp poses. In this work, we propose to formalize the 6-DoF
grasp pose estimation as a simultaneous multi-task learning problem. In a
unified framework, we jointly predict the feasible 6-DoF grasp poses, instance
semantic segmentation, and collision information. The whole framework is
jointly optimized and end-to-end differentiable. Our model is evaluated on
large-scale benchmarks as well as the real robot system. On the public dataset,
our method outperforms prior state-of-the-art methods by a large margin (+4.08
AP). We also demonstrate the implementation of our model on a real robotic
platform and show that the robot can accurately grasp target objects in
cluttered scenarios with a high success rate. Project link:
https://openbyterobotics.github.io/sscl
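The unified, end-to-end objective described in the abstract can be sketched as a weighted sum of per-task losses over shared backbone features. The function below is a minimal illustration, not the paper's actual loss; the task weights and scalar losses are hypothetical.

```python
def multitask_loss(grasp_loss, seg_loss, collision_loss,
                   w_grasp=1.0, w_seg=1.0, w_coll=1.0):
    """Combine the three task losses into one scalar objective.

    In a unified network the grasp-pose, instance-segmentation, and
    collision heads share a backbone, so a single weighted sum lets
    all heads be optimized jointly and end-to-end. The weights here
    are illustrative placeholders.
    """
    return w_grasp * grasp_loss + w_seg * seg_loss + w_coll * collision_loss

# Toy per-task losses for a single training batch.
total = multitask_loss(grasp_loss=0.5, seg_loss=0.2, collision_loss=0.1)
```

Because the combined objective is a plain weighted sum, gradients flow back into the shared backbone from all three heads at once, which is what makes the framework jointly optimizable.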
Related papers
- Diff9D: Diffusion-Based Domain-Generalized Category-Level 9-DoF Object Pose Estimation [68.81887041766373]
We introduce a diffusion-based paradigm for domain-generalized 9-DoF object pose estimation.
We propose an effective diffusion model to redefine 9-DoF object pose estimation from a generative perspective.
We show that our method achieves state-of-the-art domain generalization performance.
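Viewing pose estimation generatively, as Diff9D does, means refining a noisy pose by repeated denoising steps. The sketch below shows only the generic iterative structure; the toy step function that pulls the estimate toward a fixed target is a stand-in for a learned, observation-conditioned denoiser, and the 9-DoF layout is an assumption.

```python
import numpy as np

def denoise_pose(noisy_pose, step_fn, n_steps=50):
    """Diffusion-style refinement: start from a noisy pose vector and
    repeatedly apply a denoising step. `step_fn` stands in for a
    learned denoiser conditioned on the observation."""
    pose = noisy_pose
    for t in range(n_steps, 0, -1):
        pose = step_fn(pose, t)
    return pose

# Toy denoiser that pulls the estimate toward a fixed target pose;
# a trained network would predict this update from image features.
target = np.zeros(9)  # hypothetical 9-DoF: rotation (3) + translation (3) + size (3)
step = lambda p, t: p + 0.2 * (target - p)

rng = np.random.default_rng(3)
refined = denoise_pose(rng.normal(size=9), step)
```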
arXiv Detail & Related papers (2025-02-04T17:46:34Z)
- Good Grasps Only: A data engine for self-supervised fine-tuning of pose estimation using grasp poses for verification [0.0]
We present a novel method for self-supervised fine-tuning of pose estimation.
Our approach enables the robot to automatically obtain training data without manual labeling.
Our pipeline allows the system to fine-tune while the process is running, removing the need for a learning phase.
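The verification idea above, where a physically successful grasp certifies its underlying pose estimate as a training label, can be sketched as a small online loop. All names below are illustrative; the actual data engine and fine-tuning procedure are the paper's own.

```python
def self_supervised_update(pose_estimate, grasp_succeeded, buffer, model_update):
    """If the grasp derived from a pose estimate physically succeeds,
    treat the estimate as a verified label and fine-tune the pose
    estimator online -- no manual annotation needed. `model_update`
    is a hypothetical hook into the training step."""
    if grasp_succeeded:
        buffer.append(pose_estimate)
        model_update(buffer)
    return buffer

buffer = []
updates = []
buffer = self_supervised_update((0.1, 0.2, 0.3), True, buffer, updates.append)
```

Because labels arrive only from successful executions, the system can keep fine-tuning while the process is running instead of pausing for a dedicated learning phase.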
arXiv Detail & Related papers (2024-09-17T19:26:21Z)
- ICGNet: A Unified Approach for Instance-Centric Grasping [42.92991092305974]
We introduce an end-to-end architecture for object-centric grasping.
We show the effectiveness of the proposed method by extensively evaluating it against state-of-the-art methods on synthetic datasets.
arXiv Detail & Related papers (2024-01-18T12:41:41Z)
- Learning to Estimate 6DoF Pose from Limited Data: A Few-Shot, Generalizable Approach using RGB Images [60.0898989456276]
We present a new framework named Cas6D for few-shot 6DoF pose estimation that is generalizable and uses only RGB images.
To address the false positives of target object detection in the extreme few-shot setting, our framework utilizes a self-supervised pre-trained ViT to learn robust feature representations.
Experimental results on the LINEMOD and GenMOP datasets demonstrate that Cas6D outperforms state-of-the-art methods by 9.2% and 3.8% accuracy (Proj-5) under the 32-shot setting.
arXiv Detail & Related papers (2023-06-13T07:45:42Z)
- Unseen Object 6D Pose Estimation: A Benchmark and Baselines [62.8809734237213]
We propose a new task that enables and facilitates algorithms to estimate the 6D pose of novel objects during testing.
We collect a dataset with both real and synthetic images and up to 48 unseen objects in the test set.
By training an end-to-end 3D correspondences network, our method finds corresponding points between an unseen object and a partial view RGBD image accurately and efficiently.
arXiv Detail & Related papers (2022-06-23T16:29:53Z)
- DemoGrasp: Few-Shot Learning for Robotic Grasping with Human Demonstration [42.19014385637538]
We propose to teach a robot how to grasp an object with a simple and short human demonstration.
We first present a small sequence of RGB-D images displaying a human-object interaction.
This sequence is then leveraged to build associated hand and object meshes that represent the interaction.
arXiv Detail & Related papers (2021-12-06T08:17:12Z)
- Learning Models as Functionals of Signed-Distance Fields for Manipulation Planning [51.74463056899926]
This work proposes an optimization-based manipulation planning framework where the objectives are learned functionals of signed-distance fields that represent objects in the scene.
We show that representing objects as signed-distance fields enables learning and representing a variety of models with higher accuracy than point-cloud and occupancy-measure representations.
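What makes signed-distance fields convenient for manipulation planning is that they give a smooth, differentiable notion of penetration depth. The sketch below uses the analytic SDF of a sphere as a stand-in for the learned functionals in the paper; the hinge-style penetration cost is a common construction, not the paper's specific objective.

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Signed distance from each query point to a sphere's surface.

    Negative values mean the point is inside the object, positive
    outside -- the convention that makes SDFs usable as smooth
    collision costs in optimization-based planning.
    """
    return np.linalg.norm(points - center, axis=-1) - radius

def penetration_cost(points, center, radius):
    """Hinge-style cost that penalizes only penetrating points."""
    d = sphere_sdf(points, center, radius)
    return np.maximum(-d, 0.0).sum()

pts = np.array([[0.0, 0.0, 0.0],   # at the center: 1.0 deep
                [2.0, 0.0, 0.0]])  # outside: contributes no cost
cost = penetration_cost(pts, center=np.zeros(3), radius=1.0)
```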
arXiv Detail & Related papers (2021-10-02T12:36:58Z)
- ZePHyR: Zero-shot Pose Hypothesis Rating [36.52070583343388]
We introduce a novel method for zero-shot object pose estimation in clutter.
Our approach uses a hypothesis generation and scoring framework, with a focus on learning a scoring function that generalizes to objects not used for training.
We demonstrate how our system can be used by quickly scanning and building a model of a novel object, which can immediately be used by our method for pose estimation.
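The generate-and-score structure described above can be sketched in a few lines. The scoring function here is a crude geometric stand-in; in ZePHyR it is the learned component that is trained to generalize to objects never seen during training.

```python
import numpy as np

rng = np.random.default_rng(0)

def score_hypothesis(pose, observed):
    """Stand-in scorer: negative distance between a hypothesized
    translation and the observed point centroid. A learned scoring
    function would replace this and rate full pose hypotheses."""
    return -np.linalg.norm(pose - observed.mean(axis=0))

observed = rng.normal(size=(100, 3))           # mock observed points
hypotheses = rng.uniform(-1, 1, size=(50, 3))  # candidate translations
best = max(hypotheses, key=lambda h: score_hypothesis(h, observed))
```

Because only the scorer is learned, swapping in a freshly scanned object model changes the hypotheses, not the network, which is what makes the approach zero-shot.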
arXiv Detail & Related papers (2021-04-28T01:48:39Z)
- Attribute-Based Robotic Grasping with One-Grasp Adaptation [9.255994599301712]
We introduce an end-to-end learning method of attribute-based robotic grasping with one-grasp adaptation capability.
Our approach fuses the embeddings of a workspace image and a query text using a gated-attention mechanism and learns to predict instance grasping affordances.
Experimental results in both simulation and the real world demonstrate that our approach achieves over 80% instance grasping success rate on unknown objects.
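A gated-attention fusion of the kind described above typically projects the text embedding to a per-channel gate that modulates the visual features. The sketch below shows that pattern with random stand-in weights; the shapes and projection `W` are hypothetical, not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(image_feat, text_emb, W):
    """Fuse workspace-image features with a query-text embedding.

    The text embedding is projected to a per-channel gate in (0, 1)
    and multiplied element-wise into the image feature map, letting
    language select which visual channels matter for grasping.
    `W` stands in for a learned projection.
    """
    gate = sigmoid(W @ text_emb)  # (C,)
    return image_feat * gate      # broadcasts over spatial dims

rng = np.random.default_rng(1)
image_feat = rng.normal(size=(8, 8, 16))  # H x W x C workspace features
text_emb = rng.normal(size=32)            # query-text embedding
W = rng.normal(size=(16, 32))
fused = gated_fusion(image_feat, text_emb, W)
```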
arXiv Detail & Related papers (2020-11-21T05:36:06Z)
- Object Rearrangement Using Learned Implicit Collision Functions [61.90305371998561]
We propose a learned collision model that accepts scene and query object point clouds and predicts collisions for 6DOF object poses within the scene.
We leverage the learned collision model as part of a model predictive path integral (MPPI) policy in a tabletop rearrangement task.
The learned model outperforms both traditional pipelines and learned ablations by 9.8% in accuracy on a dataset of simulated collision queries.
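Using a collision predictor inside an MPPI-style policy, as above, amounts to sampling candidate placements, costing them with a task term plus predicted collision, and averaging with exponential weights. The sketch below simplifies heavily: the collision "model" is an analytic distance heuristic standing in for the learned network, the state is a 2-D placement, and all constants are illustrative.

```python
import numpy as np

def collision_prob(poses_xy, obstacles, radius=0.5):
    """Stand-in for a learned collision model: maps candidate 2-D
    placements to a pseudo-probability of collision from distance to
    the nearest obstacle point. A network taking scene and query
    object point clouds would replace this."""
    d = np.linalg.norm(poses_xy[:, None, :] - obstacles[None, :, :], axis=-1)
    nearest = d.min(axis=1)
    return np.clip(1.0 - nearest / radius, 0.0, 1.0)

def mppi_step(goal_xy, obstacles, n_samples=256, sigma=0.3, lam=0.1):
    """One MPPI-style update: sample placements around the goal, cost
    them with distance-to-goal plus predicted collision, and return
    the exponentially weighted average."""
    rng = np.random.default_rng(4)
    samples = goal_xy + rng.normal(scale=sigma, size=(n_samples, 2))
    cost = (np.linalg.norm(samples - goal_xy, axis=1)
            + 10.0 * collision_prob(samples, obstacles))
    w = np.exp(-(cost - cost.min()) / lam)  # shift by min for stability
    w /= w.sum()
    return w @ samples

pose = mppi_step(np.array([0.0, 0.0]), obstacles=np.array([[1.0, 0.0]]))
```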
arXiv Detail & Related papers (2020-03-12T15:28:13Z)
- CPS++: Improving Class-level 6D Pose and Shape Estimation From Monocular Images With Self-Supervised Learning [74.53664270194643]
Modern monocular 6D pose estimation methods can only cope with a handful of object instances.
We propose a novel method for class-level monocular 6D pose estimation, coupled with metric shape retrieval.
We experimentally demonstrate that we can retrieve precise 6D poses and metric shapes from a single RGB image.
arXiv Detail & Related papers (2020-03-12T15:28:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.