PointCFormer: a Relation-based Progressive Feature Extraction Network for Point Cloud Completion
- URL: http://arxiv.org/abs/2412.08421v2
- Date: Sat, 14 Dec 2024 15:14:24 GMT
- Title: PointCFormer: a Relation-based Progressive Feature Extraction Network for Point Cloud Completion
- Authors: Yi Zhong, Weize Quan, Dong-ming Yan, Jie Jiang, Yingmei Wei
- Abstract summary: Point cloud completion aims to reconstruct the complete 3D shape from incomplete point clouds.
We introduce PointCFormer, a transformer framework optimized for robust global retention and precise local detail capture.
PointCFormer demonstrates state-of-the-art performance on several widely used benchmarks.
- Score: 19.503392612245474
- License:
- Abstract: Point cloud completion aims to reconstruct the complete 3D shape from incomplete point clouds, and it is crucial for tasks such as 3D object detection and segmentation. Despite the continuous advances in point cloud analysis techniques, feature extraction methods are still confronted with apparent limitations. The sparse sampling of point clouds, used as inputs in most methods, often results in a certain loss of global structure information. Meanwhile, traditional local feature extraction methods usually struggle to capture the intricate geometric details. To overcome these drawbacks, we introduce PointCFormer, a transformer framework optimized for robust global retention and precise local detail capture in point cloud completion. This framework embraces several key advantages. First, we propose a relation-based local feature extraction method to perceive local delicate geometry characteristics. This approach establishes a fine-grained relationship metric between the target point and its k-nearest neighbors, quantifying each neighboring point's contribution to the target point's local features. Secondly, we introduce a progressive feature extractor that integrates our local feature perception method with self-attention. Starting with a denser sampling of points as input, it iteratively queries long-distance global dependencies and local neighborhood relationships. This extractor maintains enhanced global structure and refined local details, without generating substantial computational overhead. Additionally, we develop a correction module after generating point proxies in the latent space to reintroduce denser information from the input points, enhancing the representation capability of the point proxies. PointCFormer demonstrates state-of-the-art performance on several widely used benchmarks. Our code is available at https://github.com/Zyyyyy0926/PointCFormer_Plus_Pytorch.
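As a rough illustration of the relation-based local feature extraction described in the abstract, the sketch below computes a fine-grained relation score between each target point and its k nearest neighbors (from relative positions and feature differences) and uses a softmax over those scores to weight each neighbor's contribution. The function names, the two-layer scoring MLP, and the random weights are hypothetical stand-ins, not the paper's actual implementation.

```python
import numpy as np

def knn_indices(points, k):
    """Indices of the k nearest neighbors (excluding self) for each point."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)  # (N, N) squared dists
    np.fill_diagonal(d2, np.inf)                                   # exclude the point itself
    return np.argsort(d2, axis=1)[:, :k]                           # (N, k)

def relation_local_features(points, feats, k=8, rng=None):
    """Aggregate each point's k-NN features, weighted by a relation score
    between the target and each neighbor (random stand-in for a learned MLP)."""
    rng = np.random.default_rng(0) if rng is None else rng
    n, c = feats.shape
    idx = knn_indices(points, k)                        # (N, k) neighbor indices
    rel_pos = points[idx] - points[:, None, :]          # (N, k, 3) relative offsets
    rel_feat = feats[idx] - feats[:, None, :]           # (N, k, C) feature differences
    rel = np.concatenate([rel_pos, rel_feat], axis=-1)  # (N, k, 3+C) relation encoding
    w1 = rng.standard_normal((3 + c, 16)) * 0.1         # stand-in for a learned MLP
    w2 = rng.standard_normal((16, 1)) * 0.1
    score = np.maximum(rel @ w1, 0.0) @ w2              # (N, k, 1) relation scores
    score = score - score.max(axis=1, keepdims=True)    # numerically stable softmax
    attn = np.exp(score) / np.exp(score).sum(axis=1, keepdims=True)
    return (attn * feats[idx]).sum(axis=1)              # (N, C) aggregated local features
```

Each output row is a convex combination of the target's neighbors' features, so the aggregation stays bounded while the learned scores decide which neighbors matter.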
Related papers
- Point Tree Transformer for Point Cloud Registration [33.00645881490638]
Point cloud registration is a fundamental task in the fields of computer vision and robotics.
We propose a novel transformer-based approach for point cloud registration that efficiently extracts comprehensive local and global features.
Our method achieves superior performance over the state-of-the-art methods.
arXiv Detail & Related papers (2024-06-25T13:14:26Z) - PotholeGuard: A Pothole Detection Approach by Point Cloud Semantic Segmentation [0.0]
Research on 3D semantic pothole segmentation often overlooks point cloud sparsity, leading to suboptimal local feature capture and segmentation accuracy.
Our model efficiently identifies hidden features and uses a feedback mechanism to enhance local characteristics.
Our approach offers a promising solution for robust and accurate 3D pothole segmentation, with applications in road maintenance and safety.
arXiv Detail & Related papers (2023-11-05T12:57:05Z) - Variational Relational Point Completion Network for Robust 3D Classification [59.80993960827833]
Existing point cloud completion methods tend to generate global shape skeletons and hence lack fine local details.
This paper proposes a variational framework, Variational Relational point Completion Network (VRCNet), with two appealing properties.
VRCNet shows great generalizability and robustness on real-world point cloud scans.
arXiv Detail & Related papers (2023-04-18T17:03:20Z) - PointAttN: You Only Need Attention for Point Cloud Completion [89.88766317412052]
Point cloud completion refers to completing 3D shapes from partial 3D point clouds.
We propose a novel neural network for processing point clouds in a per-point manner, eliminating the need for kNN operations.
The proposed framework, namely PointAttN, is simple, neat and effective, which can precisely capture the structural information of 3D shapes.
arXiv Detail & Related papers (2022-03-16T09:20:01Z) - Point cloud completion on structured feature map with feedback network [28.710494879042002]
We propose FSNet, a feature structuring module that can adaptively aggregate point-wise features into a 2D structured feature map.
A 2D convolutional neural network is adopted to decode feature maps from FSNet into a coarse and complete point cloud.
A point cloud upsampling network is used to generate dense point cloud from the partial input and the coarse intermediate output.
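The FSNet idea of adaptively aggregating point-wise features into a 2D structured feature map can be loosely sketched as a soft assignment of N point features to grid cells, after which an ordinary 2D CNN can decode the map. The scoring projection below is a hypothetical stand-in for the learned module, not FSNet's actual design.

```python
import numpy as np

def structure_to_feature_map(feats, h=8, w=8, rng=None):
    """Softly assign N point-wise features to the cells of an h x w grid,
    producing a 2D feature map that a conventional CNN could decode.
    The random scoring projection stands in for a learned module."""
    rng = np.random.default_rng(0) if rng is None else rng
    n, c = feats.shape
    proj = rng.standard_normal((c, h * w)) * 0.1   # stand-in for learned cell scoring
    logits = feats @ proj                          # (N, h*w) point-to-cell affinities
    logits -= logits.max(axis=1, keepdims=True)    # numerically stable softmax
    assign = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    fmap = assign.T @ feats                        # (h*w, C) weighted pooling per cell
    return fmap.T.reshape(c, h, w)                 # (C, h, w) structured feature map
```

Because the assignment is a softmax over cells, the map stays differentiable end to end, which is what lets the structuring be trained jointly with the downstream 2D decoder.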
arXiv Detail & Related papers (2022-02-17T10:59:40Z) - UPDesc: Unsupervised Point Descriptor Learning for Robust Registration [54.95201961399334]
UPDesc is an unsupervised method to learn point descriptors for robust point cloud registration.
We show that our learned descriptors yield superior performance over existing unsupervised methods.
arXiv Detail & Related papers (2021-08-05T17:11:08Z) - Robust Kernel-based Feature Representation for 3D Point Cloud Analysis via Circular Graph Convolutional Network [2.42919716430661]
We present a new local feature description method that is robust to rotation, density, and scale variations.
To improve representations of the local descriptors, we propose a global aggregation method.
Our method shows superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-12-22T18:02:57Z) - SPU-Net: Self-Supervised Point Cloud Upsampling by Coarse-to-Fine Reconstruction with Self-Projection Optimization [52.20602782690776]
It is expensive and tedious to obtain large-scale paired sparse-dense point sets for training from real scanned sparse data.
We propose a self-supervised point cloud upsampling network, named SPU-Net, to capture the inherent upsampling patterns of points lying on the underlying object surface.
We conduct various experiments on both synthetic and real-scanned datasets, and the results demonstrate that we achieve comparable performance to the state-of-the-art supervised methods.
arXiv Detail & Related papers (2020-12-08T14:14:09Z) - DH3D: Deep Hierarchical 3D Descriptors for Robust Large-Scale 6DoF Relocalization [56.15308829924527]
We propose a Siamese network that jointly learns 3D local feature detection and description directly from raw 3D points.
For detecting 3D keypoints we predict the discriminativeness of the local descriptors in an unsupervised manner.
Experiments on various benchmarks demonstrate that our method achieves competitive results for both global point cloud retrieval and local point cloud registration.
arXiv Detail & Related papers (2020-07-17T20:21:22Z) - Cascaded Refinement Network for Point Cloud Completion [74.80746431691938]
We propose a cascaded refinement network together with a coarse-to-fine strategy to synthesize the detailed object shapes.
Considering the local details of partial input with the global shape information together, we can preserve the existing details in the incomplete point set.
We also design a patch discriminator that guarantees every local area has the same pattern as the ground truth, in order to learn the complicated point distribution.
arXiv Detail & Related papers (2020-04-07T13:03:29Z) - SK-Net: Deep Learning on Point Cloud via End-to-end Discovery of Spatial Keypoints [7.223394571022494]
This paper presents an end-to-end framework, SK-Net, to jointly optimize the inference of spatial keypoint with the learning of feature representation of a point cloud.
Our proposed method performs better than or comparably to the state-of-the-art approaches on point cloud tasks.
arXiv Detail & Related papers (2020-03-31T08:15:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.