SurfGen: Adversarial 3D Shape Synthesis with Explicit Surface
Discriminators
- URL: http://arxiv.org/abs/2201.00112v1
- Date: Sat, 1 Jan 2022 04:44:42 GMT
- Title: SurfGen: Adversarial 3D Shape Synthesis with Explicit Surface
Discriminators
- Authors: Andrew Luo, Tianqin Li, Wen-Hao Zhang, Tai Sing Lee
- Abstract summary: We present a 3D shape synthesis framework (SurfGen) that directly applies adversarial training to the object surface.
Our approach uses a differentiable spherical projection layer to capture and represent the explicit zero isosurface of an implicit 3D generator as functions defined on the unit sphere.
We evaluate our model on large-scale shape datasets, and demonstrate that the end-to-end trained model is capable of generating high fidelity 3D shapes with diverse topology.
- Score: 5.575197901329888
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in deep generative models have led to immense progress in 3D
shape synthesis. While existing models are able to synthesize shapes
represented as voxels, point-clouds, or implicit functions, these methods only
indirectly enforce the plausibility of the final 3D shape surface. Here we
present a 3D shape synthesis framework (SurfGen) that directly applies
adversarial training to the object surface. Our approach uses a differentiable
spherical projection layer to capture and represent the explicit zero
isosurface of an implicit 3D generator as functions defined on the unit sphere.
By processing the spherical representation of 3D object surfaces with a
spherical CNN in an adversarial setting, our generator can better learn the
statistics of natural shape surfaces. We evaluate our model on large-scale
shape datasets, and demonstrate that the end-to-end trained model is capable of
generating high fidelity 3D shapes with diverse topology.
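The core idea of the spherical projection layer can be illustrated with a minimal sketch: sample directions on a latitude-longitude grid over the unit sphere, march each ray inward through an implicit signed distance field, and record the radius of the outermost zero crossing, yielding the surface as a function on the sphere. This is an illustrative reconstruction, not the authors' implementation; the function names and the demo sphere SDF are assumptions, and the paper's actual layer is differentiable end-to-end, which this simple root-finding sketch is not.

```python
# Hypothetical sketch of projecting an implicit shape's zero isosurface
# onto the unit sphere, in the spirit of SurfGen's spherical projection
# layer. All names and the demo SDF are illustrative assumptions.
import numpy as np

def sdf_sphere(p, r=0.5):
    """Signed distance to a sphere of radius r centered at the origin."""
    return np.linalg.norm(p, axis=-1) - r

def spherical_projection(sdf, n_theta=16, n_phi=32, r_max=1.0, steps=64):
    """For each direction on a lat-long grid over the unit sphere, find
    the radius of the outermost zero crossing of the SDF along that ray.
    Returns an (n_theta, n_phi) array: the surface as a function on S^2."""
    thetas = np.linspace(1e-3, np.pi - 1e-3, n_theta)    # polar angle
    phis = np.linspace(0.0, 2 * np.pi, n_phi, endpoint=False)
    ts = np.linspace(r_max, 1e-4, steps)                 # march inward
    radii = np.zeros((n_theta, n_phi))
    for i, th in enumerate(thetas):
        for j, ph in enumerate(phis):
            d = np.array([np.sin(th) * np.cos(ph),
                          np.sin(th) * np.sin(ph),
                          np.cos(th)])
            vals = sdf(ts[:, None] * d)                  # SDF along the ray
            # the first sign change marks the outermost surface crossing
            flips = np.where(np.diff(np.sign(vals)) != 0)[0]
            if flips.size:
                k = flips[0]
                t0, t1 = ts[k], ts[k + 1]
                v0, v1 = vals[k], vals[k + 1]
                # linear interpolation between the bracketing samples
                radii[i, j] = t0 - v0 * (t1 - t0) / (v1 - v0)
            # else: the ray misses the surface; radius stays 0
    return radii

radii = spherical_projection(sdf_sphere)
print(radii.mean())  # close to 0.5 for the demo sphere SDF
```

In the paper's setting, the resulting spherical function would then be fed to a spherical CNN discriminator, so that adversarial training acts directly on surface statistics rather than on the raw implicit field.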
Related papers
- 3D Geometry-aware Deformable Gaussian Splatting for Dynamic View Synthesis [49.352765055181436]
We propose a 3D geometry-aware deformable Gaussian Splatting method for dynamic view synthesis.
Our solution achieves 3D geometry-aware deformation modeling, which enables improved dynamic view synthesis and 3D dynamic reconstruction.
arXiv Detail & Related papers (2024-04-09T12:47:30Z)
- NeuSDFusion: A Spatial-Aware Generative Model for 3D Shape Completion, Reconstruction, and Generation [52.772319840580074]
3D shape generation aims to produce innovative 3D content adhering to specific conditions and constraints.
Existing methods often decompose 3D shapes into a sequence of localized components, treating each element in isolation.
We introduce a novel spatial-aware 3D shape generation framework that leverages 2D plane representations for enhanced 3D shape modeling.
arXiv Detail & Related papers (2024-03-27T04:09:34Z)
- LISR: Learning Linear 3D Implicit Surface Representation Using Compactly Supported Radial Basis Functions [5.056545768004376]
Implicit 3D surface reconstruction of an object from its partial and noisy 3D point cloud scan is a classical geometry processing and 3D computer vision problem.
We propose a neural network architecture for learning the linear implicit shape representation of the 3D surface of an object.
The proposed approach achieves a lower Chamfer distance and a comparable F-score relative to the state-of-the-art approach on the benchmark dataset.
arXiv Detail & Related papers (2024-02-11T20:42:49Z)
- Ghost on the Shell: An Expressive Representation of General 3D Shapes [97.76840585617907]
Meshes are appealing since they enable fast physics-based rendering with realistic material and lighting.
Recent work on reconstructing and statistically modeling 3D shapes has critiqued meshes as being topologically inflexible.
We parameterize open surfaces by defining a manifold signed distance field on watertight surfaces.
G-Shell achieves state-of-the-art performance on non-watertight mesh reconstruction and generation tasks.
arXiv Detail & Related papers (2023-10-23T17:59:52Z)
- FullFormer: Generating Shapes Inside Shapes [9.195909458772187]
We present the first implicit generative model that facilitates the generation of complex 3D shapes with rich internal geometric details.
Our model uses unsigned distance fields to represent nested 3D surfaces, allowing learning from non-watertight mesh data.
We demonstrate that our model achieves state-of-the-art point cloud generation results on the popular 'Cars', 'Planes', and 'Chairs' classes of the ShapeNet dataset.
arXiv Detail & Related papers (2023-03-20T16:19:23Z)
- 3D-LDM: Neural Implicit 3D Shape Generation with Latent Diffusion Models [8.583859530633417]
We propose a diffusion model for neural implicit representations of 3D shapes that operates in the latent space of an auto-decoder.
This allows us to generate diverse and high-quality 3D surfaces.
arXiv Detail & Related papers (2022-12-01T20:00:00Z)
- Learning to Generate 3D Shapes from a Single Example [28.707149807472685]
We present a multi-scale GAN-based model designed to capture the input shape's geometric features across a range of spatial scales.
We train our generative model on a voxel pyramid of the reference shape, without the need for any external supervision or manual annotation.
The resulting shapes present variations across different scales, and at the same time retain the global structure of the reference shape.
arXiv Detail & Related papers (2022-08-05T01:05:32Z)
- Disentangled3D: Learning a 3D Generative Model with Disentangled Geometry and Appearance from Monocular Images [94.49117671450531]
State-of-the-art 3D generative models are GANs which use neural 3D volumetric representations for synthesis.
In this paper, we design a 3D GAN which can learn a disentangled model of objects, just from monocular observations.
arXiv Detail & Related papers (2022-03-29T22:03:18Z)
- Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis [90.26556260531707]
DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
arXiv Detail & Related papers (2021-11-08T05:29:35Z)
- Generative VoxelNet: Learning Energy-Based Models for 3D Shape Synthesis and Analysis [143.22192229456306]
This paper proposes a deep 3D energy-based model to represent volumetric shapes.
The benefits of the proposed model are six-fold.
Experiments demonstrate that the proposed model can generate high-quality 3D shape patterns.
arXiv Detail & Related papers (2020-12-25T06:09:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.