PCF-Grasp: Converting Point Completion to Geometry Feature to Enhance 6-DoF Grasp
- URL: http://arxiv.org/abs/2504.16320v1
- Date: Tue, 22 Apr 2025 23:37:05 GMT
- Title: PCF-Grasp: Converting Point Completion to Geometry Feature to Enhance 6-DoF Grasp
- Authors: Yaofeng Cheng, Fusheng Zha, Wei Guo, Pengfei Wang, Chao Zeng, Lining Sun, Chenguang Yang
- Abstract summary: We propose a novel 6-DoF grasping framework that converts point completion results into object shape features for training the 6-DoF grasp network. Our method achieves a success rate 17.8% higher than that of the state-of-the-art method in real-world experiments.
- Score: 14.909551737430473
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The 6-Degree of Freedom (DoF) grasp method based on point clouds has shown significant potential in enabling robots to grasp target objects. However, most existing methods rely on point clouds (2.5D points) generated from single-view depth images. Such point clouds cover only one side of the object's surface and therefore provide incomplete geometric information, which misleads the grasping algorithm's judgment of the target object's shape and results in low grasping accuracy. Humans can accurately grasp objects from a single view by leveraging their geometric experience to estimate object shapes. Inspired by this, we propose a novel 6-DoF grasping framework that converts point completion results into object shape features for training the 6-DoF grasp network. Here, point completion generates approximately complete points from the 2.5D points, analogous to human geometric experience, and converting those results into shape features is how we exploit them to improve grasp efficiency. Furthermore, because of the gap between network-generated proposals and actual execution, we integrate a score filter into our framework to select more executable grasp proposals for the real robot. This enables our method to maintain high grasp quality from any camera viewpoint. Extensive experiments demonstrate that utilizing complete point features yields significantly more accurate grasp proposals and that the score filter greatly improves the reliability of real-world robot grasping. Our method achieves a success rate 17.8% higher than that of the state-of-the-art method in real-world experiments.
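As a rough illustration of the pipeline the abstract describes (point completion, conversion of the completed points into shape features, grasp prediction, and score filtering), here is a minimal, hypothetical sketch; every function name and all dummy internals below are placeholders assumed for illustration, not the authors' implementation.

```python
import numpy as np

def complete_points(partial: np.ndarray) -> np.ndarray:
    # Stand-in for the point-completion network that maps observed 2.5D points (N, 3)
    # to an approximately complete object point cloud; a real system would run a
    # trained completion model here.
    return partial

def shape_features(completed: np.ndarray) -> np.ndarray:
    # Stand-in for converting the completed points into shape features;
    # a real system would use a learned point-cloud encoder.
    return completed - completed.mean(axis=0, keepdims=True)

def grasp_network(observed: np.ndarray, feats: np.ndarray):
    # Stand-in for the 6-DoF grasp network conditioned on shape features.
    # Returns (K, 4, 4) grasp poses and (K,) confidence scores.
    rng = np.random.default_rng(0)
    poses = np.tile(np.eye(4), (32, 1, 1))
    poses[:, :3, 3] = observed[rng.integers(len(observed), size=32)]
    return poses, rng.random(32)

def score_filter(poses, scores, min_score=0.5, top_k=10):
    # Keep only high-confidence proposals, mimicking the score filter that
    # selects more executable grasps for the real robot.
    keep = np.argsort(-scores)[:top_k]
    keep = keep[scores[keep] >= min_score]
    return poses[keep], scores[keep]

# Usage: observed 2.5D points from one depth view -> filtered grasp proposals.
observed = np.random.default_rng(1).random((2048, 3))
poses, scores = grasp_network(observed, shape_features(complete_points(observed)))
best_poses, best_scores = score_filter(poses, scores)
print(best_poses.shape, best_scores)
```

The intended takeaway is only the data flow: the grasp network consumes both the observed 2.5D points and features derived from the completed shape, and the score filter forwards only the most confident proposals to the robot.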
Related papers
- GenFlow: Generalizable Recurrent Flow for 6D Pose Refinement of Novel Objects [14.598853174946656]
We present GenFlow, an approach that enables both accuracy and generalization to novel objects.
Our method predicts optical flow between the rendered image and the observed image and refines the 6D pose iteratively.
It boosts performance through a 3D shape constraint and generalizable geometric knowledge learned from an end-to-end differentiable system.
arXiv Detail & Related papers (2024-03-18T06:32:23Z)
- ParaPoint: Learning Global Free-Boundary Surface Parameterization of 3D Point Clouds [52.03819676074455]
ParaPoint is an unsupervised neural learning pipeline for achieving global free-boundary surface parameterization.
This work makes the first attempt to investigate neural point cloud parameterization that pursues both global mappings and free boundaries.
arXiv Detail & Related papers (2024-03-15T14:35:05Z)
- Robust 3D Tracking with Quality-Aware Shape Completion [67.9748164949519]
We propose a synthetic target representation for robust 3D tracking, composed of dense and complete point clouds that precisely depict the target shape via shape completion.
Specifically, we design a voxelized 3D tracking framework with shape completion, in which we propose a quality-aware shape completion mechanism to alleviate the adverse effect of noisy historical predictions.
arXiv Detail & Related papers (2023-12-17T04:50:24Z)
- ImageManip: Image-based Robotic Manipulation with Affordance-guided Next View Selection [10.162882793554191]
3D articulated object manipulation is essential for enabling robots to interact with their environment.
Many existing studies make use of 3D point clouds as the primary input for manipulation policies.
RGB images offer high-resolution observations using cost-effective devices but lack spatial 3D geometric information.
This framework is designed to capture multiple perspectives of the target object and infer depth information to complement its geometry.
arXiv Detail & Related papers (2023-10-13T12:42:54Z)
- AdaPoinTr: Diverse Point Cloud Completion with Adaptive Geometry-Aware Transformers [94.11915008006483]
We present a new method that reformulates point cloud completion as a set-to-set translation problem.
We design a new model, called PoinTr, which adopts a Transformer encoder-decoder architecture for point cloud completion.
Our method attains 6.53 CD on PCN, 0.81 CD on ShapeNet-55 and 0.392 MMD on real-world KITTI.
arXiv Detail & Related papers (2023-01-11T16:14:12Z)
- Leveraging Single-View Images for Unsupervised 3D Point Cloud Completion [53.93172686610741]
Cross-PCC is an unsupervised point cloud completion method that does not require any complete 3D point clouds.
To take advantage of the complementary information from 2D images, we use a single-view RGB image to extract 2D features.
Our method even achieves comparable performance to some supervised methods.
arXiv Detail & Related papers (2022-12-01T15:11:21Z)
- Shape Completion with Points in the Shadow [13.608498759468024]
Single-view point cloud completion aims to recover the full geometry of an object based on only limited observation.
Inspired by the classic shadow volume technique in computer graphics, we propose a new method to reduce the solution space effectively.
arXiv Detail & Related papers (2022-09-17T14:58:56Z)
- FS6D: Few-Shot 6D Pose Estimation of Novel Objects [116.34922994123973]
6D object pose estimation networks are limited in their capability to scale to large numbers of object instances.
In this work, we study a new open-set problem, few-shot 6D object pose estimation: estimating the 6D pose of an unknown object from a few support views without extra training.
arXiv Detail & Related papers (2022-03-28T10:31:29Z)
- A Dynamic Keypoints Selection Network for 6DoF Pose Estimation [0.0]
The 6-DoF pose estimation problem aims to estimate the rotation and translation parameters between two coordinate frames.
We present a novel deep neural network based on dynamic keypoints selection designed for 6DoF pose estimation from a single RGBD image.
arXiv Detail & Related papers (2021-10-24T09:58:56Z)
- Contact-GraspNet: Efficient 6-DoF Grasp Generation in Cluttered Scenes [50.303361537562715]
We propose an end-to-end network that efficiently generates a distribution of 6-DoF parallel-jaw grasps.
By rooting the full 6-DoF grasp pose and width in the observed point cloud, we can reduce the dimensionality of our grasp representation to 4-DoF.
In a robotic grasping study of unseen objects in structured clutter, we achieve a success rate of over 90%, cutting the failure rate in half compared to a recent state-of-the-art method.
arXiv Detail & Related papers (2021-03-25T20:33:29Z)
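The Contact-GraspNet entry above describes reducing the grasp representation to 4-DoF by rooting the pose at a contact point observed in the point cloud. A minimal sketch of that idea follows: given an observed contact point plus a predicted approach direction, baseline direction, and gripper width, a full 6-DoF pose can be reconstructed. The axis convention and the base offset `d` below are assumptions made for illustration, not the paper's verified definitions.

```python
import numpy as np

def grasp_pose_from_contact(contact, approach, baseline, width, d=0.10):
    """Reconstruct a 4x4 gripper pose from a contact point observed in the
    point cloud plus a reduced set of predicted quantities.

    contact  : (3,) contact point taken from the observed point cloud
    approach : (3,) gripper approach direction (assumed to point toward the object)
    baseline : (3,) direction between the two finger contacts
    width    : predicted gripper opening
    d        : assumed offset from the finger contacts back to the gripper base
    """
    a = approach / np.linalg.norm(approach)
    # Re-orthogonalize the baseline against the approach direction.
    b = baseline - np.dot(baseline, a) * a
    b /= np.linalg.norm(b)
    # Rotation columns: baseline, closing-plane normal, approach (one common convention).
    rot = np.stack([b, np.cross(a, b), a], axis=1)
    # Gripper origin: move from the contact to the grasp center, then back along approach.
    t = contact + 0.5 * width * b - d * a
    pose = np.eye(4)
    pose[:3, :3] = rot
    pose[:3, 3] = t
    return pose

# Example: one contact point with a top-down grasp.
pose = grasp_pose_from_contact(
    contact=np.array([0.4, 0.0, 0.05]),
    approach=np.array([0.0, 0.0, -1.0]),
    baseline=np.array([1.0, 0.0, 0.0]),
    width=0.06,
)
print(pose)
```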