Adversarial Generation of Continuous Implicit Shape Representations
- URL: http://arxiv.org/abs/2002.00349v2
- Date: Thu, 26 Mar 2020 07:45:33 GMT
- Title: Adversarial Generation of Continuous Implicit Shape Representations
- Authors: Marian Kleineberg, Matthias Fey, Frank Weichert
- Abstract summary: This work presents a generative adversarial architecture for generating 3D shapes based on signed distance representations.
We train our approach on the ShapeNet benchmark dataset and validate, both quantitatively and qualitatively, its performance in generating realistic 3D shapes.
- Score: 9.478108870211365
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work presents a generative adversarial architecture for generating
three-dimensional shapes based on signed distance representations. While the
deep generation of shapes has been mostly tackled by voxel and surface point
cloud approaches, our generator learns to approximate the signed distance for
any point in space given prior latent information. Although structurally
similar to generative point cloud approaches, this formulation can be evaluated
with arbitrary point density during inference, leading to fine-grained details
in generated outputs. Furthermore, we study the effects of using either
progressively growing voxel- or point-processing networks as discriminators,
and propose a refinement scheme to strengthen the generator's capabilities in
modeling the zero iso-surface decision boundary of shapes. We train our
approach on the ShapeNet benchmark dataset and validate, both quantitatively
and qualitatively, its performance in generating realistic 3D shapes.
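To make the formulation concrete, here is a minimal sketch of a latent-conditioned signed-distance generator in the spirit of the abstract, written in PyTorch. The layer sizes and names (SDFGenerator, latent_dim) are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SDFGenerator(nn.Module):
    """Latent-conditioned SDF network: maps a latent code z and a 3D
    query point to an approximate signed distance (illustrative only)."""

    def __init__(self, latent_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Tanh(),  # bounded distance values
        )

    def forward(self, z, points):
        # z: (B, latent_dim); points: (B, N, 3) -> sdf values: (B, N)
        z = z.unsqueeze(1).expand(-1, points.shape[1], -1)
        return self.net(torch.cat([z, points], dim=-1)).squeeze(-1)

# Because the generator is queried per point, the sampling density at
# inference time is arbitrary: any grid resolution works unchanged.
gen = SDFGenerator()
z = torch.randn(1, 128)
res = 64  # could just as well be 32 or 256
axis = torch.linspace(-1.0, 1.0, res)
grid = torch.stack(torch.meshgrid(axis, axis, axis, indexing="ij"), dim=-1)
sdf = gen(z, grid.reshape(1, -1, 3)).reshape(res, res, res)
```

A mesh can then be extracted from the zero iso-surface of the sampled grid (e.g. via marching cubes); the refinement scheme described in the abstract additionally strengthens the generator near that decision boundary.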
Related papers
- StarNet: Style-Aware 3D Point Cloud Generation [82.30389817015877]
StarNet is able to reconstruct and generate high-fidelity 3D point clouds using a mapping network.
Our framework achieves performance comparable to the state of the art on various metrics in point cloud reconstruction and generation tasks.
arXiv Detail & Related papers (2023-03-28T08:21:44Z)
- FullFormer: Generating Shapes Inside Shapes [9.195909458772187]
We present the first implicit generative model that facilitates the generation of complex 3D shapes with rich internal geometric details.
Our model uses unsigned distance fields to represent nested 3D surfaces allowing learning from non-watertight mesh data.
We demonstrate that our model achieves state-of-the-art point cloud generation results on the popular 'Cars', 'Planes', and 'Chairs' classes of the ShapeNet dataset.
arXiv Detail & Related papers (2023-03-20T16:19:23Z)
- Controllable Mesh Generation Through Sparse Latent Point Diffusion Models [105.83595545314334]
We design a novel sparse latent point diffusion model for mesh generation.
Our key insight is to regard point clouds as an intermediate representation of meshes, and model the distribution of point clouds instead.
Our proposed sparse latent point diffusion model achieves superior performance in terms of generation quality and controllability.
arXiv Detail & Related papers (2023-03-14T14:25:29Z)
- Neural Wavelet-domain Diffusion for 3D Shape Generation, Inversion, and Manipulation [54.09274684734721]
We present a new approach for 3D shape generation, inversion, and manipulation, through a direct generative modeling on a continuous implicit representation in wavelet domain.
Specifically, we propose a compact wavelet representation with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets.
We jointly train an encoder network to learn a latent space for inverting shapes, enabling a rich variety of whole-shape and region-aware shape manipulations (a minimal sketch of the wavelet decomposition appears after this list).
arXiv Detail & Related papers (2023-02-01T02:47:53Z)
- Flow-based GAN for 3D Point Cloud Generation from a Single Image [16.04710129379503]
We introduce a hybrid explicit-implicit generative modeling scheme, which builds on flow-based explicit generative models to sample point clouds at arbitrary resolutions.
We evaluate our method on the large-scale synthetic ShapeNet dataset; the experimental results demonstrate its superior performance.
arXiv Detail & Related papers (2022-10-08T17:58:20Z)
- Neural Wavelet-domain Diffusion for 3D Shape Generation [52.038346313823524]
This paper presents a new approach for 3D shape generation, enabling direct generative modeling on a continuous implicit representation in wavelet domain.
Specifically, we propose a compact wavelet representation with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets.
arXiv Detail & Related papers (2022-09-19T02:51:48Z)
- Autoregressive 3D Shape Generation via Canonical Mapping [92.91282602339398]
Transformers have shown remarkable performance in a variety of generative tasks such as image, audio, and text generation.
In this paper, we aim to further exploit the power of transformers and employ them for the task of 3D point cloud generation.
Our model can be easily extended to multi-modal shape completion as an application for conditional shape generation.
arXiv Detail & Related papers (2022-04-05T03:12:29Z)
- Deep Implicit Surface Point Prediction Networks [49.286550880464866]
Deep neural representations of 3D shapes as implicit functions have been shown to produce high-fidelity models.
This paper presents a novel approach that models such surfaces using a new class of implicit representations called the closest surface-point (CSP) representation.
arXiv Detail & Related papers (2021-06-10T14:31:54Z)
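The two wavelet-domain entries above share one representation: a truncated signed distance volume decomposed into a pair of coarse and detail coefficient volumes with biorthogonal wavelets. Below is a minimal, illustrative sketch using PyWavelets; the resolution, truncation threshold, wavelet choice ('bior6.8'), and decomposition level are assumptions for demonstration rather than the papers' settings.

```python
import numpy as np
import pywt  # PyWavelets

# Toy TSDF: signed distance to a sphere, truncated to [-0.1, 0.1].
res = 64
ax = np.linspace(-1.0, 1.0, res)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.5
tsdf = np.clip(sdf, -0.1, 0.1)

# One-level 3D decomposition with a biorthogonal wavelet:
# coeffs[0] is the coarse (approximation) volume; coeffs[1] is a dict
# holding the seven detail-coefficient volumes ('aad', 'ada', ..., 'ddd').
coeffs = pywt.wavedecn(tsdf, wavelet="bior6.8", level=1)
coarse, details = coeffs[0], coeffs[1]

# The shape is recoverable from the coefficient volumes.
recon = pywt.waverecn(coeffs, wavelet="bior6.8")
print(coarse.shape, np.abs(recon[:res, :res, :res] - tsdf).max())
```

In the papers above, a diffusion model would then be trained on such coarse and detail coefficient volumes rather than on the raw TSDF grid.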
This list is automatically generated from the titles and abstracts of the papers on this site.