Patch-Wise Point Cloud Generation: A Divide-and-Conquer Approach
- URL: http://arxiv.org/abs/2307.12049v1
- Date: Sat, 22 Jul 2023 11:10:39 GMT
- Title: Patch-Wise Point Cloud Generation: A Divide-and-Conquer Approach
- Authors: Cheng Wen, Baosheng Yu, Rao Fu, Dacheng Tao
- Abstract summary: We devise a new 3D point cloud generation framework using a divide-and-conquer approach.
All patch generators are based on learnable priors, which aim to capture the information of geometry primitives.
Experimental results on a variety of object categories from ShapeNet, the most widely used point cloud dataset, show the effectiveness of the proposed patch-wise point cloud generation.
- Score: 83.05340155068721
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A generative model for high-fidelity point clouds is of great importance in
synthesizing 3D environments for applications such as autonomous driving and
robotics. Despite the recent success of deep generative models for 2D images,
it is non-trivial to generate 3D point clouds without a comprehensive
understanding of both local and global geometric structures. In this paper, we
devise a new 3D point cloud generation framework using a divide-and-conquer
approach, where the whole generation process is divided into a set of
patch-wise generation tasks. Specifically, all patch generators are based on
learnable priors, which aim to capture the information of geometry primitives.
We introduce point- and patch-wise transformers to enable interactions
between points and patches. The proposed divide-and-conquer approach thus
contributes to a new understanding of point cloud generation in terms of the
geometric constitution of 3D shapes. Experimental results on a variety of
object categories from ShapeNet, the most widely used point cloud dataset,
show the effectiveness of the proposed patch-wise point cloud generation,
which clearly outperforms recent state-of-the-art methods for high-fidelity
point cloud generation.
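The patch-wise idea in the abstract can be illustrated with a toy sketch: a set of per-patch learnable priors is decoded into local patches, and the patches are concatenated into one cloud. This is only a minimal stand-in under assumed sizes; the random priors and the linear decoder here are hypothetical placeholders for the paper's trained priors and point-/patch-wise transformers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: K patches of M points each in 3D, latent dim D.
K, M, D = 8, 128, 32

# "Learnable priors": one latent vector per patch (random stand-ins here
# for parameters that would be trained end to end).
patch_priors = rng.normal(size=(K, D))

# Toy per-patch generator: a shared linear map from (prior + shape latent)
# to an M x 3 patch, standing in for the paper's transformer generators.
W = rng.normal(size=(D, M * 3)) * 0.1

def generate_shape(shape_latent):
    """Generate one point cloud by concatenating K patch outputs."""
    patches = []
    for k in range(K):
        z = patch_priors[k] + shape_latent    # condition the prior on the shape
        patch = np.tanh(z @ W).reshape(M, 3)  # decode one local patch
        patches.append(patch)
    return np.concatenate(patches, axis=0)    # (K * M, 3) full cloud

cloud = generate_shape(rng.normal(size=D))
print(cloud.shape)  # (1024, 3)
```

The divide-and-conquer structure is visible in the loop: each patch is generated independently from its own prior, and only the concatenation step assembles the global shape; the actual method additionally lets points and patches interact through transformers.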
Related papers
- PIVOT-Net: Heterogeneous Point-Voxel-Tree-based Framework for Point Cloud Compression [8.778300313732027]
We propose a heterogeneous point cloud compression (PCC) framework.
We unify typical point cloud representations -- point-based, voxel-based, and tree-based representations -- and their associated backbones.
We augment the framework with a proposed context-aware upsampling for decoding and an enhanced voxel transformer for feature aggregation.
arXiv Detail & Related papers (2024-02-11T16:57:08Z)
- DualGenerator: Information Interaction-based Generative Network for Point Cloud Completion [25.194587599472147]
Point cloud completion estimates complete shapes from incomplete point clouds to obtain higher-quality point cloud data.
Most existing methods only consider global object features, ignoring spatial and semantic information of adjacent points.
We propose an information interaction-based generative network for point cloud completion.
arXiv Detail & Related papers (2023-05-16T03:25:38Z)
- Variational Relational Point Completion Network for Robust 3D Classification [59.80993960827833]
Existing point cloud completion methods tend to generate global shape skeletons and hence lack fine local details.
This paper proposes a variational framework, Variational Relational point Completion network (VRCNet), with two appealing properties.
VRCNet shows great generalizability and robustness on real-world point cloud scans.
arXiv Detail & Related papers (2023-04-18T17:03:20Z)
- StarNet: Style-Aware 3D Point Cloud Generation [82.30389817015877]
StarNet is able to reconstruct and generate high-fidelity 3D point clouds using a mapping network.
Our framework achieves comparable state-of-the-art performance on various metrics in the point cloud reconstruction and generation tasks.
arXiv Detail & Related papers (2023-03-28T08:21:44Z)
- AdaPoinTr: Diverse Point Cloud Completion with Adaptive Geometry-Aware Transformers [94.11915008006483]
We present a new method that reformulates point cloud completion as a set-to-set translation problem.
We design a new model, called PoinTr, which adopts a Transformer encoder-decoder architecture for point cloud completion.
Our method attains 6.53 CD on PCN, 0.81 CD on ShapeNet-55, and 0.392 MMD on real-world KITTI.
arXiv Detail & Related papers (2023-01-11T16:14:12Z)
- PointAttN: You Only Need Attention for Point Cloud Completion [89.88766317412052]
Point cloud completion refers to completing 3D shapes from partial 3D point clouds.
We propose a novel neural network that processes point clouds in a per-point manner, eliminating the need for kNN search.
The proposed framework, PointAttN, is simple, neat, and effective, and can precisely capture the structural information of 3D shapes.
arXiv Detail & Related papers (2022-03-16T09:20:01Z)
- PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers [81.71904691925428]
We present a new method that reformulates point cloud completion as a set-to-set translation problem.
We also design a new model, called PoinTr, that adopts a transformer encoder-decoder architecture for point cloud completion.
Our method outperforms state-of-the-art methods by a large margin on both the new benchmarks and the existing ones.
arXiv Detail & Related papers (2021-08-19T17:58:56Z)
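Several of the papers above report Chamfer Distance (CD) as their quality metric. As a reference, here is a minimal NumPy sketch of one common symmetric-CD convention: the mean nearest-neighbor squared distance in both directions. Conventions vary (squared vs. unsquared distances, sum vs. mean, extra scaling), so values from this sketch are not directly comparable to the numbers reported above.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer Distance between point sets a (N, 3) and b (M, 3):
    mean squared distance to the nearest neighbor, summed over both directions."""
    # Pairwise squared distances via broadcasting: (N, 1, 3) - (1, M, 3) -> (N, M)
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# Tiny sanity check: every point of a is at squared distance 3 from every
# point of b, so each direction contributes 3 and the symmetric CD is 6.
a = np.zeros((4, 3))
b = np.ones((5, 3))
print(chamfer_distance(a, b))  # 6.0
print(chamfer_distance(a, a))  # 0.0
```

The brute-force (N, M) distance matrix is fine for evaluation-sized clouds; practical pipelines typically use a KD-tree or GPU kernels for large N.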
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.