Neural Wavelet-domain Diffusion for 3D Shape Generation, Inversion, and
Manipulation
- URL: http://arxiv.org/abs/2302.00190v1
- Date: Wed, 1 Feb 2023 02:47:53 GMT
- Title: Neural Wavelet-domain Diffusion for 3D Shape Generation, Inversion, and
Manipulation
- Authors: Jingyu Hu, Ka-Hei Hui, Zhengzhe Liu, Ruihui Li and Chi-Wing Fu
- Abstract summary: We present a new approach for 3D shape generation, inversion, and manipulation, through a direct generative modeling on a continuous implicit representation in wavelet domain.
Specifically, we propose a compact wavelet representation with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets.
Further, an encoder network can be jointly trained to learn a latent space for inverting shapes, enabling a rich variety of whole-shape and region-aware shape manipulations.
- Score: 54.09274684734721
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper presents a new approach for 3D shape generation, inversion, and
manipulation, through a direct generative modeling on a continuous implicit
representation in wavelet domain. Specifically, we propose a compact wavelet
representation with a pair of coarse and detail coefficient volumes to
implicitly represent 3D shapes via truncated signed distance functions and
multi-scale biorthogonal wavelets. Then, we design a pair of neural networks: a
diffusion-based generator to produce diverse shapes in the form of the coarse
coefficient volumes and a detail predictor to produce compatible detail
coefficient volumes for introducing fine structures and details. Further, we
may jointly train an encoder network to learn a latent space for inverting
shapes, enabling a rich variety of whole-shape and region-aware shape
manipulations. Both quantitative and qualitative experimental results
demonstrate the compelling shape generation, inversion, and manipulation
capabilities of our approach over state-of-the-art methods.
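The coarse/detail split at the heart of the representation can be sketched as follows. This is a minimal illustration using a single-level 3D Haar transform as a stand-in for the paper's multi-scale biorthogonal wavelets (the function and volume sizes here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def haar_decompose_3d(tsdf):
    """Single-level 3D Haar transform: split a TSDF volume into one
    coarse (low-pass) volume and seven detail (high-pass) volumes."""
    def split(x, axis):
        even = np.take(x, range(0, x.shape[axis], 2), axis=axis)
        odd = np.take(x, range(1, x.shape[axis], 2), axis=axis)
        return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

    bands = {'': tsdf}
    for axis in range(3):          # filter along each spatial axis in turn
        new = {}
        for key, vol in bands.items():
            lo, hi = split(vol, axis)
            new[key + 'L'] = lo
            new[key + 'H'] = hi
        bands = new
    coarse = bands.pop('LLL')      # the "coarse coefficient volume"
    return coarse, bands           # bands holds the 7 detail volumes

# Example: a 32^3 truncated SDF of a sphere, clamped to [-0.1, 0.1]
grid = np.linspace(-1, 1, 32)
x, y, z = np.meshgrid(grid, grid, grid, indexing='ij')
tsdf = np.clip(np.sqrt(x**2 + y**2 + z**2) - 0.5, -0.1, 0.1)

coarse, details = haar_decompose_3d(tsdf)
print(coarse.shape, len(details))  # (16, 16, 16) 7
```

In the paper's pipeline, a diffusion model would generate the coarse volume while a separate predictor fills in compatible detail coefficients; the orthonormal scaling here preserves signal energy across the decomposition.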
Related papers
- Part-aware Shape Generation with Latent 3D Diffusion of Neural Voxel Fields [50.12118098874321]
We introduce a latent 3D diffusion process for neural voxel fields, enabling generation at significantly higher resolutions.
A part-aware shape decoder is introduced to integrate the part codes into the neural voxel fields, guiding the accurate part decomposition.
The results demonstrate the superior generative capabilities of our proposed method in part-aware shape generation, outperforming existing state-of-the-art methods.
arXiv Detail & Related papers (2024-05-02T04:31:17Z)
- Topology-Aware Latent Diffusion for 3D Shape Generation [20.358373670117537]
We introduce a new generative model that combines latent diffusion with persistent homology to create 3D shapes with high diversity.
Our method involves representing 3D shapes as implicit fields, then employing persistent homology to extract topological features.
arXiv Detail & Related papers (2024-01-31T05:13:53Z)
- PolyDiff: Generating 3D Polygonal Meshes with Diffusion Models [15.846449180313778]
PolyDiff is the first diffusion-based approach capable of directly generating realistic and diverse 3D polygonal meshes.
Our model is capable of producing high-quality 3D polygonal meshes, ready for integration into downstream 3D applications.
arXiv Detail & Related papers (2023-12-18T18:19:26Z)
- Explorable Mesh Deformation Subspaces from Unstructured Generative Models [53.23510438769862]
Deep generative models of 3D shapes often feature continuous latent spaces that can be used to explore potential variations.
We present a method to explore variations among a given set of landmark shapes by constructing a mapping from an easily-navigable 2D exploration space to a subspace of a pre-trained generative model.
arXiv Detail & Related papers (2023-10-11T18:53:57Z)
- 3DQD: Generalized Deep 3D Shape Prior via Part-Discretized Diffusion Process [32.3773514247982]
We develop a generalized 3D shape generation prior model tailored for multiple 3D tasks.
These designs jointly equip the proposed 3D shape prior model with high-fidelity, diverse features, as well as the capability of cross-modality alignment.
arXiv Detail & Related papers (2023-03-18T12:50:29Z)
- Neural Wavelet-domain Diffusion for 3D Shape Generation [52.038346313823524]
This paper presents a new approach for 3D shape generation, enabling direct generative modeling on a continuous implicit representation in wavelet domain.
Specifically, we propose a compact wavelet representation with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets.
arXiv Detail & Related papers (2022-09-19T02:51:48Z)
- SP-GAN: Sphere-Guided 3D Shape Generation and Manipulation [50.53931728235875]
We present SP-GAN, a new unsupervised sphere-guided generative model for direct synthesis of 3D shapes in the form of point clouds.
Compared with existing models, SP-GAN is able to synthesize diverse and high-quality shapes with fine details.
arXiv Detail & Related papers (2021-08-10T06:49:45Z)
- Volume Rendering of Neural Implicit Surfaces [57.802056954935495]
This paper aims to improve geometry representation and reconstruction in neural volume rendering.
We achieve that by modeling the volume density as a function of the geometry.
Applying this new density representation to challenging scene multiview datasets produced high quality geometry reconstructions.
arXiv Detail & Related papers (2021-06-22T20:23:16Z)
- 3D Shape Generation and Completion through Point-Voxel Diffusion [24.824065748889048]
We propose a novel approach for probabilistic generative modeling of 3D shapes.
Point-Voxel Diffusion (PVD) is a unified, probabilistic formulation for unconditional shape generation and conditional, multimodal shape completion.
PVD can be viewed as a series of denoising steps, reversing the diffusion process from observed point cloud data to Gaussian noise, and is trained by optimizing a variational lower bound to the (conditional) likelihood function.
arXiv Detail & Related papers (2021-04-08T10:38:03Z)
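The denoising view described for PVD can be illustrated with the standard DDPM parameterization. This is a minimal numpy sketch (the schedule values and the use of the true noise as a stand-in for a trained model's prediction are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard linear DDPM noise schedule (illustrative values)
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def forward_noise(x0, t):
    """q(x_t | x_0): closed-form forward diffusion to step t."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps

def reverse_step(xt, t, eps_pred):
    """One denoising step p(x_{t-1} | x_t), given a noise estimate eps_pred.
    In training, eps_pred comes from a network optimized so that this
    reverse chain maximizes a variational lower bound on the likelihood."""
    coef = betas[t] / np.sqrt(1.0 - alpha_bar[t])
    mean = (xt - coef * eps_pred) / np.sqrt(alphas[t])
    if t > 0:  # no noise is added at the final step
        mean = mean + np.sqrt(betas[t]) * rng.standard_normal(xt.shape)
    return mean

# Toy "point cloud": 2048 points in R^3
x0 = rng.standard_normal((2048, 3))
xt, eps = forward_noise(x0, T - 1)      # nearly pure Gaussian noise
x_prev = reverse_step(xt, T - 1, eps)   # one reverse step, using the true
                                        # noise in place of a model output
```

Generation runs the reverse chain from pure Gaussian noise down to t = 0; conditional completion simply fixes the observed points while denoising the rest.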
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.