PointInverter: Point Cloud Reconstruction and Editing via a Generative
Model with Shape Priors
- URL: http://arxiv.org/abs/2211.08702v1
- Date: Wed, 16 Nov 2022 06:29:29 GMT
- Title: PointInverter: Point Cloud Reconstruction and Editing via a Generative
Model with Shape Priors
- Authors: Jaeyeon Kim, Binh-Son Hua, Duc Thanh Nguyen, Sai-Kit Yeung
- Abstract summary: We propose a new method for mapping a 3D point cloud to the latent space of a 3D generative adversarial network.
Our method outperforms previous GAN inversion methods for 3D point clouds.
- Score: 25.569519066857705
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a new method for mapping a 3D point cloud to the
latent space of a 3D generative adversarial network. Our generative model for
3D point clouds is based on SP-GAN, a state-of-the-art sphere-guided 3D point
cloud generator. We derive an efficient way to encode an input 3D point cloud
to the latent space of the SP-GAN. Our point cloud encoder can resolve the
point ordering issue during inversion, and thus can determine the
correspondences between points in the generated 3D point cloud and those in the
canonical sphere used by the generator. We show that our method outperforms
previous GAN inversion methods for 3D point clouds, achieving state-of-the-art
results both quantitatively and qualitatively. Our code is available at
https://github.com/hkust-vgd/point_inverter.
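The inversion idea can be illustrated with a deliberately toy, optimization-based sketch. This is not the authors' method (they train a feed-forward encoder, and SP-GAN's generator is a deep network); here a hypothetical anisotropic-scaling "generator" over a canonical point set stands in, and the latent code is recovered by finite-difference gradient descent on a Chamfer loss, whose permutation invariance sidesteps the point-ordering issue the paper addresses:

```python
import numpy as np

rng = np.random.default_rng(0)

# Canonical point set: random unit vectors, a stand-in for the sphere
# that guides SP-GAN (the real generator is a deep network).
sphere = rng.normal(size=(128, 3))
sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)

def generator(z):
    """Hypothetical toy generator: anisotropic scaling of the sphere by z."""
    return sphere * z  # z has shape (3,)

def chamfer(a, b):
    """Symmetric Chamfer distance. Being permutation-invariant, it does not
    depend on the (unknown) point ordering of the target cloud."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Target cloud produced by an unknown latent code, with its points shuffled.
z_true = np.array([1.5, 0.7, 2.0])
target = rng.permutation(generator(z_true), axis=0)

# Invert by finite-difference gradient descent on the Chamfer loss.
z, eps, lr = np.ones(3), 1e-4, 0.05
for _ in range(400):
    grad = np.zeros(3)
    for i in range(3):
        dz = np.zeros(3)
        dz[i] = eps
        grad[i] = (chamfer(generator(z + dz), target)
                   - chamfer(generator(z - dz), target)) / (2 * eps)
    z -= lr * grad

print(np.round(z, 1))  # should land near z_true = [1.5, 0.7, 2.0]
```

An encoder, as in the paper, amortizes this per-shape optimization into a single forward pass and additionally yields explicit point correspondences to the canonical sphere rather than only a latent code.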
Related papers
- PointRegGPT: Boosting 3D Point Cloud Registration using Generative Point-Cloud Pairs for Training [90.06520673092702]
We present PointRegGPT, which boosts 3D point cloud registration by generating realistic point-cloud pairs for training.
To our knowledge, this is the first generative approach that explores realistic data generation for indoor point cloud registration.
arXiv Detail & Related papers (2024-07-19T06:29:57Z)
- StarNet: Style-Aware 3D Point Cloud Generation [82.30389817015877]
StarNet is able to reconstruct and generate high-fidelity 3D point clouds using a mapping network.
Our framework achieves comparable state-of-the-art performance on various metrics in the point cloud reconstruction and generation tasks.
arXiv Detail & Related papers (2023-03-28T08:21:44Z)
- A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud Completion [69.32451612060214]
Real-scanned 3D point clouds are often incomplete, and it is important to recover complete point clouds for downstream applications.
Most existing point cloud completion methods use Chamfer Distance (CD) loss for training.
We propose a novel Point Diffusion-Refinement (PDR) paradigm for point cloud completion.
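Chamfer Distance (CD), the training loss mentioned above, is the average nearest-neighbor distance between two point sets, taken in both directions; conventions vary across papers (squared vs. unsquared distances, sum vs. mean). A minimal NumPy sketch, suitable only for small clouds since it builds the full pairwise distance matrix:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer Distance between point sets a (N,3) and b (M,3):
    mean nearest-neighbor distance from a to b plus from b to a."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()

a = np.array([[0.0, 0, 0], [1, 0, 0]])
b = np.array([[0.0, 0, 0], [1, 0, 0], [1, 1, 0]])
print(chamfer_distance(a, b))  # 0.333...: a matches b exactly; b's extra point is 1 away
```

Because each point only needs *some* nearest neighbor, CD is blind to uneven point density, which is one motivation for refinement-based paradigms such as the PDR approach above.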
arXiv Detail & Related papers (2021-12-07T06:59:06Z)
- CloudWalker: Random walks for 3D point cloud shape analysis [20.11028799145883]
We propose CloudWalker, a novel method for learning 3D shapes using random walks.
Our approach achieves state-of-the-art results for two 3D shape analysis tasks: classification and retrieval.
arXiv Detail & Related papers (2021-12-02T08:24:01Z)
- SSPU-Net: Self-Supervised Point Cloud Upsampling via Differentiable Rendering [21.563862632172363]
We propose a self-supervised point cloud upsampling network (SSPU-Net) to generate dense point clouds without using ground truth.
To achieve this, we exploit the consistency between the input sparse point cloud and the generated dense point cloud in terms of both shape and rendered images.
arXiv Detail & Related papers (2021-08-01T13:26:01Z)
- ParaNet: Deep Regular Representation for 3D Point Clouds [62.81379889095186]
ParaNet is a novel end-to-end deep learning framework for representing 3D point clouds.
It converts an irregular 3D point cloud into a regular 2D color image, named a point geometry image (PGI).
In contrast to conventional regular representation modalities based on multi-view projection and voxelization, the proposed representation is differentiable and reversible.
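Why such a representation can be lossless and reversible is visible even in a deliberately naive sketch: pack N = H×W points into an H×W "image" whose three channels store xyz. (ParaNet *learns* the point-to-pixel assignment so that the image is spatially smooth; the plain reshape below only demonstrates the reversibility, not the learned structure.)

```python
import numpy as np

rng = np.random.default_rng(1)
points = rng.normal(size=(1024, 3))   # irregular 3D point cloud, 1024 points

# Naive "point geometry image": each pixel stores one point's xyz coordinates.
pgi = points.reshape(32, 32, 3)       # regular 32x32 grid with 3 channels

# The mapping is trivially invertible -- no geometric information is lost,
# unlike multi-view projection (occlusion) or voxelization (quantization).
recovered = pgi.reshape(-1, 3)
print(np.allclose(points, recovered))  # True
```

Once the cloud lives on a regular grid, standard 2D convolutional machinery applies directly, which is the practical payoff of such representations.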
arXiv Detail & Related papers (2020-12-05T13:19:55Z)
- GRNet: Gridding Residual Network for Dense Point Cloud Completion [54.43648460932248]
Estimating the complete 3D point cloud from an incomplete one is a key problem in many vision and robotics applications.
We propose a novel Gridding Residual Network (GRNet) for point cloud completion.
Experimental results indicate that the proposed GRNet performs favorably against state-of-the-art methods on the ShapeNet, Completion3D, and KITTI benchmarks.
arXiv Detail & Related papers (2020-06-06T02:46:39Z)
- ShapeAdv: Generating Shape-Aware Adversarial 3D Point Clouds [78.25501874120489]
We develop shape-aware adversarial 3D point cloud attacks by leveraging the learned latent space of a point cloud auto-encoder.
Different from prior works, the resulting adversarial 3D point clouds reflect the shape variations in the 3D point cloud space while still being close to the original one.
arXiv Detail & Related papers (2020-05-24T00:03:27Z)
- Hypernetwork approach to generating point clouds [18.67883065951206]
We build a hypernetwork that returns the weights of a target neural network trained to map points onto a 3D shape.
A particular 3D shape can be generated using point-by-point sampling from the assumed prior distribution.
Since the hypernetwork is based on an auto-encoder architecture trained to reconstruct realistic 3D shapes, the target network weights can be considered a parametrization of the surface of a 3D shape.
arXiv Detail & Related papers (2020-02-10T11:09:58Z)
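The hypernetwork approach above can be sketched minimally (all layer sizes and the linear hypernetwork are hypothetical stand-ins; in the paper the hypernetwork is a trained auto-encoder): one network emits the flat weight vector of a small target MLP, which then maps individual 3D prior samples onto the shape, so the emitted weights act as a parametrization of that shape.

```python
import numpy as np

rng = np.random.default_rng(2)

# Target network: a tiny MLP mapping a 3D prior sample to a shape point.
# Layer sizes 3 -> 8 -> 3 give 3*8 + 8 + 8*3 + 3 = 59 parameters in total.
N_PARAMS = 3 * 8 + 8 + 8 * 3 + 3

# Toy hypernetwork: a fixed linear map from a shape code to target weights
# (a stand-in for the trained auto-encoder-based hypernetwork).
CODE_DIM = 16
H = rng.normal(size=(N_PARAMS, CODE_DIM)) * 0.1

def target_network(flat_weights, x):
    """Run the target MLP whose weights were emitted by the hypernetwork."""
    w1 = flat_weights[:24].reshape(3, 8)
    b1 = flat_weights[24:32]
    w2 = flat_weights[32:56].reshape(8, 3)
    b2 = flat_weights[56:]
    return np.tanh(x @ w1 + b1) @ w2 + b2

shape_code = rng.normal(size=CODE_DIM)   # one code = one 3D shape
weights = H @ shape_code                 # hypernetwork emits the weights

samples = rng.normal(size=(100, 3))      # point-by-point prior samples
shape_points = target_network(weights, samples)
print(shape_points.shape)                # (100, 3)
```

Because any number of prior samples can be pushed through the same emitted weights, the representation decouples shape identity (the weights) from sampling density.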
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (or of the information it contains) and is not responsible for any consequences of its use.