Discrete Point Flow Networks for Efficient Point Cloud Generation
- URL: http://arxiv.org/abs/2007.10170v1
- Date: Mon, 20 Jul 2020 14:48:00 GMT
- Title: Discrete Point Flow Networks for Efficient Point Cloud Generation
- Authors: Roman Klokov, Edmond Boyer, Jakob Verbeek
- Abstract summary: Generative models have proven effective at modeling 3D shapes and their statistical variations.
We introduce a latent variable model that builds on normalizing flows to generate 3D point clouds of an arbitrary size.
For single-view shape reconstruction we also obtain results on par with state-of-the-art voxel, point cloud, and mesh-based methods.
- Score: 36.03093265136374
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative models have proven effective at modeling 3D shapes and their
statistical variations. In this paper we investigate their application to point
clouds, a 3D shape representation widely used in computer vision for which,
however, only few generative models have yet been proposed. We introduce a
latent variable model that builds on normalizing flows with affine coupling
layers to generate 3D point clouds of an arbitrary size given a latent shape
representation. To evaluate its benefits for shape modeling we apply this model
for generation, autoencoding, and single-view shape reconstruction tasks. We
improve over recent GAN-based models in terms of most metrics that assess
generation and autoencoding. Compared to recent work based on continuous flows,
our model offers a significant speedup in both training and inference times for
similar or better performance. For single-view shape reconstruction we also
obtain results on par with state-of-the-art voxel, point cloud, and mesh-based
methods.
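As a rough illustration of the conditional affine coupling layers the abstract describes, the sketch below transforms per-point coordinates given a latent shape code. The class name, dimensions, and tanh-bounded scales are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """One affine coupling layer (sketch): half of the point coordinates are
    scaled/shifted by an MLP that sees the other half and a latent shape code."""
    def __init__(self, point_dim=3, latent_dim=128, hidden=256):
        super().__init__()
        self.split = point_dim // 2                      # e.g. 1 of 3 coords passes through unchanged
        self.net = nn.Sequential(
            nn.Linear(self.split + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (point_dim - self.split)),
        )

    def forward(self, x, z):
        # x: (B, N, 3) point coordinates, z: (B, latent_dim) latent shape code
        x_a, x_b = x[..., :self.split], x[..., self.split:]
        h = self.net(torch.cat([x_a, z.unsqueeze(1).expand(-1, x.size(1), -1)], dim=-1))
        log_s, t = h.chunk(2, dim=-1)
        log_s = torch.tanh(log_s)                        # keep scales well-behaved
        y_b = x_b * torch.exp(log_s) + t                 # affine transform of the other half
        log_det = log_s.sum(dim=(-1, -2))                # per-shape log-determinant for the flow objective
        return torch.cat([x_a, y_b], dim=-1), log_det

    def inverse(self, y, z):
        y_a, y_b = y[..., :self.split], y[..., self.split:]
        h = self.net(torch.cat([y_a, z.unsqueeze(1).expand(-1, y.size(1), -1)], dim=-1))
        log_s, t = h.chunk(2, dim=-1)
        log_s = torch.tanh(log_s)
        x_b = (y_b - t) * torch.exp(-log_s)
        return torch.cat([y_a, x_b], dim=-1)
```

Because each point is transformed independently given the latent code, the same stack of layers applies to any number of base samples, which is what allows point clouds of arbitrary size to be generated from one shape representation.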
Related papers
- Efficient and Scalable Point Cloud Generation with Sparse Point-Voxel Diffusion Models [6.795447206159906]
We propose a novel point cloud U-Net diffusion architecture for 3D generative modeling.
Our network employs a dual-branch architecture, combining the high-resolution representations of points with the computational efficiency of sparse voxels.
Our model excels in all tasks, establishing it as a state-of-the-art diffusion U-Net for point cloud generative modeling.
arXiv Detail & Related papers (2024-08-12T13:41:47Z)
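A minimal sketch of the dual-branch point-voxel idea summarized in the entry above: a per-point MLP keeps high-resolution detail while a coarse voxel branch adds inexpensive spatial context. For simplicity a dense grid stands in for the paper's sparse voxels; the class name, resolution, and channel widths are illustrative.

```python
import torch
import torch.nn as nn

class PointVoxelBlock(nn.Module):
    """Dual-branch feature block (sketch): per-point MLP + low-resolution voxel branch."""
    def __init__(self, channels=64, resolution=16):
        super().__init__()
        self.res = resolution
        self.point_mlp = nn.Sequential(nn.Linear(channels, channels), nn.ReLU(),
                                       nn.Linear(channels, channels))
        self.voxel_conv = nn.Sequential(nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(),
                                        nn.Conv3d(channels, channels, 3, padding=1))

    def forward(self, feats, coords):
        # feats: (B, N, C) per-point features, coords: (B, N, 3) normalized to [-1, 1]
        B, N, C = feats.shape
        idx = ((coords + 1) / 2 * (self.res - 1)).round().long().clamp(0, self.res - 1)
        flat = (idx[..., 0] * self.res + idx[..., 1]) * self.res + idx[..., 2]     # (B, N) voxel index
        grid = feats.new_zeros(B, self.res ** 3, C)
        grid.scatter_add_(1, flat.unsqueeze(-1).expand(-1, -1, C), feats)          # pool points into voxels
        count = feats.new_zeros(B, self.res ** 3, 1)
        count.scatter_add_(1, flat.unsqueeze(-1), torch.ones_like(feats[..., :1]))
        grid = grid / count.clamp(min=1)                                           # mean per occupied voxel
        grid = grid.transpose(1, 2).reshape(B, C, self.res, self.res, self.res)
        grid = self.voxel_conv(grid).reshape(B, C, self.res ** 3).transpose(1, 2)
        voxel_feats = torch.gather(grid, 1, flat.unsqueeze(-1).expand(-1, -1, C))  # back to points
        return self.point_mlp(feats) + voxel_feats
```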
- Make-A-Shape: a Ten-Million-scale 3D Shape Model [52.701745578415796]
This paper introduces Make-A-Shape, a new 3D generative model designed for efficient training on a vast scale.
We first introduce a wavelet-tree representation that compactly encodes shapes via a subband coefficient filtering scheme.
We then derive a subband-adaptive training strategy that trains the model to effectively generate both coarse and detail wavelet coefficients.
arXiv Detail & Related papers (2024-01-20T00:21:58Z)
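A hedged sketch of the general idea behind wavelet-based shape encoding mentioned in the entry above: decompose a 3D shape grid into subbands and keep only the coarse band plus the largest-magnitude detail coefficients. It uses PyWavelets with an arbitrary wavelet and keep ratio; it is not the paper's wavelet-tree representation or its exact filtering scheme.

```python
import numpy as np
import pywt

def wavelet_encode(grid, wavelet="haar", level=2, keep_ratio=0.05):
    """Compactly encode a 3D shape grid (e.g. an SDF): keep the coarse subband
    in full and only the largest detail coefficients in each detail subband."""
    coeffs = pywt.wavedecn(grid, wavelet, level=level)   # [coarse, {details}, ..., {details}]
    coarse, details = coeffs[0], coeffs[1:]
    kept = []
    for band in details:                                  # each level is a dict of 7 detail subbands in 3D
        filtered = {}
        for key, arr in band.items():
            thresh = np.quantile(np.abs(arr), 1.0 - keep_ratio)
            filtered[key] = np.where(np.abs(arr) >= thresh, arr, 0.0)  # drop small coefficients
        kept.append(filtered)
    return [coarse] + kept

def wavelet_decode(coeffs, wavelet="haar"):
    return pywt.waverecn(coeffs, wavelet)

# Example round-trip; a random grid stands in for a real signed distance field.
sdf = np.random.randn(64, 64, 64).astype(np.float32)
recon = wavelet_decode(wavelet_encode(sdf), "haar")
```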
- StarNet: Style-Aware 3D Point Cloud Generation [82.30389817015877]
StarNet is able to reconstruct and generate high-fidelity 3D point clouds using a mapping network.
Our framework achieves comparable state-of-the-art performance on various metrics in the point cloud reconstruction and generation tasks.
arXiv Detail & Related papers (2023-03-28T08:21:44Z)
- Learning Versatile 3D Shape Generation with Improved AR Models [91.87115744375052]
Auto-regressive (AR) models have achieved impressive results in 2D image generation by modeling joint distributions in the grid space.
We propose the Improved Auto-regressive Model (ImAM) for 3D shape generation, which applies discrete representation learning based on a latent vector instead of volumetric grids.
arXiv Detail & Related papers (2023-03-26T12:03:18Z)
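A minimal sketch of the discrete representation learning mentioned in the entry above, assuming a VQ-style bottleneck: a compact latent vector is split into chunks and each chunk is snapped to its nearest codebook entry, giving discrete tokens an autoregressive model can predict. The codebook size and module names are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class LatentQuantizer(nn.Module):
    """VQ bottleneck (sketch): map each chunk of a latent vector to its nearest
    codebook entry, yielding discrete tokens for an autoregressive prior."""
    def __init__(self, num_codes=512, code_dim=32):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)

    def forward(self, z):
        # z: (B, T, code_dim), a latent vector reshaped into T chunks ("tokens")
        w = self.codebook.weight                               # (num_codes, code_dim)
        dist = (z.pow(2).sum(-1, keepdim=True)
                - 2 * z @ w.t()
                + w.pow(2).sum(-1))                            # squared distances (B, T, num_codes)
        tokens = dist.argmin(dim=-1)                           # (B, T) discrete indices
        z_q = self.codebook(tokens)                            # (B, T, code_dim) quantized latent
        z_q = z + (z_q - z).detach()                           # straight-through gradient estimator
        return z_q, tokens
```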
- MeshDiffusion: Score-based Generative 3D Mesh Modeling [68.40770889259143]
We consider the task of generating realistic 3D shapes for automatic scene generation and physical simulation.
We take advantage of the graph structure of meshes and use a simple yet very effective generative modeling method to generate 3D meshes.
Specifically, we represent meshes with deformable tetrahedral grids, and then train a diffusion model on this direct parametrization.
arXiv Detail & Related papers (2023-03-14T17:59:01Z)
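A minimal sketch of a standard denoising-diffusion training step of the kind implied by the entry above, applied to a flat tensor of shape parameters. The flattened parameter vector, noise schedule, and `denoiser(x_t, t)` interface are placeholders standing in for the paper's deformable tetrahedral-grid parametrization and network.

```python
import torch
import torch.nn.functional as F

def diffusion_training_step(denoiser, x0, num_steps=1000):
    """One DDPM training step (sketch): noise the clean parameters x0 at a random
    timestep and train the denoiser to predict the added noise.
    x0: (B, D) flattened grid parameters (vertex offsets, SDF values, ...)."""
    betas = torch.linspace(1e-4, 0.02, num_steps, device=x0.device)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)              # cumulative signal level
    t = torch.randint(0, num_steps, (x0.size(0),), device=x0.device)
    noise = torch.randn_like(x0)
    a = alpha_bar[t].unsqueeze(-1)
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * noise             # forward noising q(x_t | x_0)
    pred = denoiser(x_t, t)                                     # network predicts the noise
    return F.mse_loss(pred, noise)
```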
- Generative Models for 3D Point Clouds [1.2043574473965317]
We aim to improve the performance of point cloud latent-space generative models by experimenting with transformer encoders, latent-space flow models, and autoregressive decoders.
We analyze and compare both generation and reconstruction performance of these models on various object types.
arXiv Detail & Related papers (2023-02-26T21:34:19Z)
- Flow-based GAN for 3D Point Cloud Generation from a Single Image [16.04710129379503]
We introduce a hybrid explicit-implicit generative modeling scheme, which inherits from flow-based explicit generative models the ability to sample point clouds at arbitrary resolutions.
We evaluate on the large-scale synthetic dataset ShapeNet, with the experimental results demonstrating the superior performance of the proposed method.
arXiv Detail & Related papers (2022-10-08T17:58:20Z)
- Pixel2Mesh++: 3D Mesh Generation and Refinement from Multi-View Images [82.32776379815712]
We study the problem of shape generation in 3D mesh representation from a small number of color images with or without camera poses.
We further improve the shape quality by leveraging cross-view information with a graph convolution network.
Our model is robust to the quality of the initial mesh and the error of camera pose, and can be combined with a differentiable function for test-time optimization.
arXiv Detail & Related papers (2022-04-21T03:42:31Z)
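A minimal sketch of a graph-convolution layer of the kind used for vertex refinement in the entry above, assuming per-vertex features that would include pooled cross-view image features. The layer form, adjacency format, and dimensions are illustrative, not the paper's exact operator.

```python
import torch
import torch.nn as nn

class MeshGraphConv(nn.Module):
    """One graph-convolution layer over mesh vertices (sketch): each vertex mixes
    its own features with the mean of its neighbours' features."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w_self = nn.Linear(in_dim, out_dim)
        self.w_neigh = nn.Linear(in_dim, out_dim)

    def forward(self, feats, adj):
        # feats: (V, in_dim) per-vertex features (e.g. coords + pooled multi-view features)
        # adj:   (V, V) binary adjacency built from mesh edges
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)
        neigh = adj @ feats / deg                               # mean over neighbouring vertices
        return torch.relu(self.w_self(feats) + self.w_neigh(neigh))
```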
- Autoregressive 3D Shape Generation via Canonical Mapping [92.91282602339398]
Transformers have shown remarkable performance in a variety of generative tasks such as image, audio, and text generation.
In this paper, we aim to further exploit the power of transformers and employ them for the task of 3D point cloud generation.
Our model can be easily extended to multi-modal shape completion as an application for conditional shape generation.
arXiv Detail & Related papers (2022-04-05T03:12:29Z)
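A minimal sketch of autoregressive point cloud generation with a transformer, in the spirit of the entry above: points are sorted into a canonical order and each position predicts the next point with causal self-attention. The continuous-regression head and plain `nn.TransformerEncoder` are stand-ins; the paper's tokenization and canonical mapping differ.

```python
import torch
import torch.nn as nn

class ARPointTransformer(nn.Module):
    """Autoregressive transformer over a canonically ordered point sequence (sketch):
    given points 1..i, predict the coordinates of point i+1."""
    def __init__(self, d_model=128, nhead=4, num_layers=4):
        super().__init__()
        self.embed = nn.Linear(3, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 3)

    def forward(self, pts):
        # pts: (B, N, 3) points already sorted by some canonical ordering
        n = pts.size(1)
        causal = torch.triu(torch.ones(n, n, device=pts.device, dtype=torch.bool), diagonal=1)
        h = self.backbone(self.embed(pts), mask=causal)        # each position attends only to earlier points
        return self.head(h)                                     # prediction for the next point at each step

# Training would regress model(pts[:, :-1]) onto pts[:, 1:]; generation feeds points back one at a time.
```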
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.