3DSGrasp: 3D Shape-Completion for Robotic Grasp
- URL: http://arxiv.org/abs/2301.00866v1
- Date: Mon, 2 Jan 2023 20:23:05 GMT
- Title: 3DSGrasp: 3D Shape-Completion for Robotic Grasp
- Authors: Seyed S. Mohammadi, Nuno F. Duarte, Dimitris Dimou, Yiming Wang,
Matteo Taiana, Pietro Morerio, Atabak Dehban, Plinio Moreno, Alexandre
Bernardino, Alessio Del Bue and Jose Santos-Victor
- Abstract summary: Real-world robotic grasping can be done robustly if a complete 3D Point Cloud Data (PCD) of an object is available.
In practice, PCDs are often incomplete when objects are viewed from few and sparse viewpoints before the grasping action.
We propose a novel grasping strategy, named 3DSGrasp, that predicts the missing geometry from the partial PCD to produce reliable grasp poses.
- Score: 81.1638620745356
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-world robotic grasping can be done robustly if a complete 3D Point Cloud
Data (PCD) of an object is available. However, in practice, PCDs are often
incomplete when objects are viewed from few and sparse viewpoints before the
grasping action, leading to the generation of wrong or inaccurate grasp poses.
We propose a novel grasping strategy, named 3DSGrasp, that predicts the missing
geometry from the partial PCD to produce reliable grasp poses. Our proposed PCD
completion network is a Transformer-based encoder-decoder network with an
Offset-Attention layer. Our network is inherently invariant to object pose and
point permutation, and generates PCDs that are geometrically consistent and
properly completed. Experiments on a wide range of partial PCDs show that
3DSGrasp outperforms the best state-of-the-art method on PCD completion tasks
and largely improves the grasping success rate in real-world scenarios. The
code and dataset will be made available upon acceptance.
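The abstract describes a Transformer-based completion network built on an Offset-Attention layer, which is inherently permutation-invariant over the input points. As a rough illustration of that property (not the authors' implementation, whose code is unreleased), the sketch below follows the offset-attention formulation popularized by the Point Cloud Transformer: standard self-attention is computed, and the *offset* between the input features and the attention output is passed through a learned projection and added back as a residual. The weight matrices `Wq`, `Wk`, `Wv`, `Wo` are placeholders standing in for learned parameters.

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def offset_attention(x, Wq, Wk, Wv, Wo):
    """Offset-Attention over point features x of shape (N, d).

    Attention weights are softmax-normalized along the query axis and then
    L1-normalized along the key axis (as in the Point Cloud Transformer);
    the offset (x - attention output) is projected and added back to x.
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    a = softmax(q @ k.T, axis=0)                    # softmax over queries
    a = a / (a.sum(axis=1, keepdims=True) + 1e-9)   # L1-normalize over keys
    sa = a @ v                                      # attention output
    return (x - sa) @ Wo + x                        # offset residual

# Permuting the input points permutes the per-point outputs identically,
# so any symmetric pooling over them yields a permutation-invariant feature.
rng = np.random.RandomState(0)
x = rng.randn(8, 4)
Wq, Wk, Wv, Wo = (rng.randn(4, 4) for _ in range(4))
perm = rng.permutation(8)
out = offset_attention(x, Wq, Wk, Wv, Wo)
out_perm = offset_attention(x[perm], Wq, Wk, Wv, Wo)
print(np.allclose(out_perm, out[perm]))  # True: per-point equivariance
```

Because the attention matrix is built purely from pairwise feature products, reordering the rows of `x` only reorders the output rows, which is what makes such architectures robust to the unordered nature of PCDs.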
Related papers
- POC-SLT: Partial Object Completion with SDF Latent Transformers [1.5999407512883512]
3D geometric shape completion hinges on representation learning and a deep understanding of geometric data.
We propose a transformer operating on the latent space representing Signed Distance Fields (SDFs).
Instead of a monolithic volume, the SDF of an object is partitioned into smaller high-resolution patches, leading to a sequence of latent codes.
arXiv Detail & Related papers (2024-11-08T09:13:20Z) - Beyond 3DMM: Learning to Capture High-fidelity 3D Face Shape [77.95154911528365]
3D Morphable Model (3DMM) fitting has widely benefited face analysis due to its strong 3D prior.
Previous reconstructed 3D faces suffer from degraded visual verisimilitude due to the loss of fine-grained geometry.
This paper proposes a complete solution to capture the personalized shape so that the reconstructed shape looks identical to the corresponding person.
arXiv Detail & Related papers (2022-04-09T03:46:18Z) - ConDor: Self-Supervised Canonicalization of 3D Pose for Partial Shapes [55.689763519293464]
ConDor is a self-supervised method that learns to canonicalize the 3D orientation and position for full and partial 3D point clouds.
During inference, our method takes an unseen full or partial 3D point cloud at an arbitrary pose and outputs an equivariant canonical pose.
arXiv Detail & Related papers (2022-01-19T18:57:21Z) - Geometry-Contrastive Transformer for Generalized 3D Pose Transfer [95.56457218144983]
The intuition of this work is to perceive the geometric inconsistency between the given meshes with the powerful self-attention mechanism.
We propose a novel geometry-contrastive Transformer that efficiently perceives global geometric inconsistencies in 3D structure.
We present a latent isometric regularization module together with a novel semi-synthesized dataset for the cross-dataset 3D pose transfer task.
arXiv Detail & Related papers (2021-12-14T13:14:24Z) - Unsupervised Geodesic-preserved Generative Adversarial Networks for
Unconstrained 3D Pose Transfer [84.04540436494011]
We present an unsupervised approach to conduct pose transfer between any arbitrary given 3D meshes.
Specifically, a novel Intrinsic-Extrinsic Preserved Generative Adversarial Network (IEP-GAN) is presented for both intrinsic (i.e., shape) and extrinsic (i.e., pose) information preservation.
Our proposed model produces better results and is substantially more efficient compared to recent state-of-the-art methods.
arXiv Detail & Related papers (2021-08-17T09:08:21Z) - Generative Sparse Detection Networks for 3D Single-shot Object Detection [43.91336826079574]
3D object detection has been widely studied due to its potential applicability to many promising areas such as robotics and augmented reality.
Yet, the sparse nature of the 3D data poses unique challenges to this task.
We propose Generative Sparse Detection Network (GSDN), a fully-convolutional single-shot sparse detection network.
arXiv Detail & Related papers (2020-06-22T15:54:24Z) - Implicit Functions in Feature Space for 3D Shape Reconstruction and
Completion [53.885984328273686]
Implicit Feature Networks (IF-Nets) deliver continuous outputs, can handle multiple topologies, and complete shapes for missing or sparse input data.
IF-Nets clearly outperform prior work in 3D object reconstruction in ShapeNet, and obtain significantly more accurate 3D human reconstructions.
arXiv Detail & Related papers (2020-03-03T11:14:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.