SeedFormer: Patch Seeds based Point Cloud Completion with Upsample
Transformer
- URL: http://arxiv.org/abs/2207.10315v1
- Date: Thu, 21 Jul 2022 06:15:59 GMT
- Title: SeedFormer: Patch Seeds based Point Cloud Completion with Upsample
Transformer
- Authors: Haoran Zhou, Yun Cao, Wenqing Chu, Junwei Zhu, Tong Lu, Ying Tai and
Chengjie Wang
- Abstract summary: We propose a novel SeedFormer to improve the ability of detail preservation and recovery in point cloud completion.
We introduce a new shape representation, namely Patch Seeds, which not only captures general structures from partial inputs but also preserves regional information of local patterns.
Our method outperforms state-of-the-art completion networks on several benchmark datasets.
- Score: 46.800630776714016
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point cloud completion has become increasingly popular among generation tasks
of 3D point clouds, as it is a challenging yet indispensable problem to recover
the complete shape of a 3D object from its partial observation. In this paper,
we propose a novel SeedFormer to improve the ability of detail preservation and
recovery in point cloud completion. Unlike previous methods based on a global
feature vector, we introduce a new shape representation, namely Patch Seeds,
which not only captures general structures from partial inputs but also
preserves regional information of local patterns. Then, by integrating seed
features into the generation process, we can recover faithful details for
complete point clouds in a coarse-to-fine manner. Moreover, we devise an
Upsample Transformer by extending the transformer structure into basic
operations of point generators, which effectively incorporates spatial and
semantic relationships between neighboring points. Qualitative and quantitative
evaluations demonstrate that our method outperforms state-of-the-art completion
networks on several benchmark datasets. Our code is available at
https://github.com/hrzhou2/seedformer.
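To make the coarse-to-fine generation described above more concrete, below is a minimal PyTorch sketch of the general mechanism: a sparse set of seed coordinates, each carrying a feature vector that summarizes its local region, is expanded into a denser cloud by an attention-based point generator. This is only an illustration of the idea in the abstract; the layer sizes, the plain global self-attention (standing in for the neighborhood-restricted Upsample Transformer), and the offset decoding are assumptions, not the actual SeedFormer implementation (see the repository above for that).

```python
# Hedged sketch (not the authors' code) of two ideas from the abstract:
# (1) "Patch Seeds": sparse seed coordinates plus per-seed regional features.
# (2) An attention-style upsampling step that generates child points per seed.
import torch
import torch.nn as nn


class SeedUpsampleSketch(nn.Module):
    """Generate `up_factor` child points around every seed point."""

    def __init__(self, feat_dim=128, up_factor=4):
        super().__init__()
        self.up_factor = up_factor
        self.to_q = nn.Linear(feat_dim, feat_dim)
        self.to_k = nn.Linear(feat_dim, feat_dim)
        self.to_v = nn.Linear(feat_dim, feat_dim)
        # Each attended feature is decoded into small xyz offsets from its seed.
        self.offset_head = nn.Linear(feat_dim, 3 * up_factor)

    def forward(self, seed_xyz, seed_feat):
        # seed_xyz:  (B, N, 3)  coordinates of the sparse seed points
        # seed_feat: (B, N, C)  per-seed features preserving regional information
        q = self.to_q(seed_feat)
        k = self.to_k(seed_feat)
        v = self.to_v(seed_feat)
        # Plain global self-attention over seeds; the paper instead restricts
        # attention to neighboring points inside the generator.
        attn = torch.softmax(q @ k.transpose(1, 2) / k.shape[-1] ** 0.5, dim=-1)
        ctx = attn @ v                                           # (B, N, C)
        # Predict up_factor offsets per seed and add them to the seed coordinates.
        offsets = self.offset_head(ctx).view(*seed_xyz.shape[:2], self.up_factor, 3)
        children = seed_xyz.unsqueeze(2) + 0.1 * torch.tanh(offsets)
        return children.reshape(seed_xyz.shape[0], -1, 3)        # (B, N*up_factor, 3)


if __name__ == "__main__":
    xyz = torch.rand(2, 256, 3)      # 256 coarse seed points per shape
    feat = torch.rand(2, 256, 128)   # their regional features
    dense = SeedUpsampleSketch()(xyz, feat)
    print(dense.shape)               # torch.Size([2, 1024, 3])
```

Stacking such an upsampling step several times, each stage refining and densifying the output of the previous one, gives the coarse-to-fine pipeline the abstract refers to.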
Related papers
- 3DMambaComplete: Exploring Structured State Space Model for Point Cloud Completion [19.60626235337542]
3DMambaComplete is a point cloud completion network built on the novel Mamba framework.
It encodes point cloud features using Mamba's selection mechanism and predicts a set of Hyperpoints.
A deformation method transforms the 2D mesh representation of Hyperpoints into a fine-grained 3D structure for point cloud reconstruction.
arXiv Detail & Related papers (2024-04-10T15:45:03Z)
- Patch-Wise Point Cloud Generation: A Divide-and-Conquer Approach [83.05340155068721]
We devise a new 3D point cloud generation framework using a divide-and-conquer approach.
All patch generators are based on learnable priors, which aim to capture the information of geometry primitives.
Experimental results on a variety of object categories from the most popular point cloud dataset, ShapeNet, show the effectiveness of the proposed patch-wise point cloud generation.
arXiv Detail & Related papers (2023-07-22T11:10:39Z)
- AdaPoinTr: Diverse Point Cloud Completion with Adaptive Geometry-Aware Transformers [94.11915008006483]
We present a new method that reformulates point cloud completion as a set-to-set translation problem.
We design a new model, called PoinTr, which adopts a Transformer encoder-decoder architecture for point cloud completion.
Our method attains 6.53 Chamfer Distance (CD) on PCN, 0.81 CD on ShapeNet-55 and 0.392 MMD on real-world KITTI (a CD reference sketch follows this list).
arXiv Detail & Related papers (2023-01-11T16:14:12Z)
- CpT: Convolutional Point Transformer for 3D Point Cloud Processing [10.389972581905]
We present CpT: Convolutional point Transformer - a novel deep learning architecture for dealing with the unstructured nature of 3D point cloud data.
CpT is an improvement over existing attention-based Convolutional Neural Networks as well as previous 3D point cloud processing transformers.
Our model can serve as an effective backbone for various point cloud processing tasks when compared to the existing state-of-the-art approaches.
arXiv Detail & Related papers (2021-11-21T17:45:55Z)
- PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers [81.71904691925428]
We present a new method that reformulates point cloud completion as a set-to-set translation problem.
We also design a new model, called PoinTr, that adopts a transformer encoder-decoder architecture for point cloud completion.
Our method outperforms state-of-the-art methods by a large margin on both the new benchmarks and the existing ones.
arXiv Detail & Related papers (2021-08-19T17:58:56Z)
- Cascaded Refinement Network for Point Cloud Completion with Self-supervision [74.80746431691938]
We introduce a two-branch network for shape completion.
The first branch is a cascaded shape completion sub-network to synthesize complete objects.
The second branch is an auto-encoder to reconstruct the original partial input.
arXiv Detail & Related papers (2020-10-17T04:56:22Z)
- Global Context Aware Convolutions for 3D Point Cloud Understanding [32.953907994511376]
We propose a novel convolution operator that enhances feature distinction by integrating global context information from the input point cloud to the convolution.
A convolution can then be performed to transform the points and anchor features into final rotation-invariant features.
arXiv Detail & Related papers (2020-08-07T04:33:27Z)
- Cascaded Refinement Network for Point Cloud Completion [74.80746431691938]
We propose a cascaded refinement network together with a coarse-to-fine strategy to synthesize the detailed object shapes.
By considering the local details of the partial input together with the global shape information, we can preserve the existing details in the incomplete point set.
We also design a patch discriminator that guarantees every local area has the same pattern as the ground truth, in order to learn the complicated point distribution.
arXiv Detail & Related papers (2020-04-07T13:03:29Z)
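Several of the entries above report completion quality as Chamfer Distance (CD) between the predicted and ground-truth point clouds. As a reference for those numbers, here is a small, self-contained sketch of the symmetric Chamfer Distance; benchmarks differ in whether they use squared (L2) or absolute (L1) point distances and in how they average or scale the result, so this is a generic illustration rather than the official evaluation code of any of the papers listed.

```python
# Generic sketch of the symmetric Chamfer Distance used to score point cloud
# completion (e.g. the "CD on PCN / ShapeNet-55" figures quoted above).
import torch


def chamfer_distance(pred, gt, squared=True):
    # pred: (B, N, 3) predicted complete cloud, gt: (B, M, 3) ground truth
    diff = pred.unsqueeze(2) - gt.unsqueeze(1)       # (B, N, M, 3) pairwise offsets
    dist = (diff ** 2).sum(-1)                       # squared pairwise distances
    if not squared:
        dist = dist.sqrt()
    # Nearest ground-truth point for every prediction, and vice versa.
    pred_to_gt = dist.min(dim=2).values.mean(dim=1)  # (B,)
    gt_to_pred = dist.min(dim=1).values.mean(dim=1)  # (B,)
    return (pred_to_gt + gt_to_pred).mean()


if __name__ == "__main__":
    pred = torch.rand(4, 2048, 3)
    gt = torch.rand(4, 2048, 3)
    print(chamfer_distance(pred, gt).item())
```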