3DQD: Generalized Deep 3D Shape Prior via Part-Discretized Diffusion
Process
- URL: http://arxiv.org/abs/2303.10406v1
- Date: Sat, 18 Mar 2023 12:50:29 GMT
- Title: 3DQD: Generalized Deep 3D Shape Prior via Part-Discretized Diffusion
Process
- Authors: Yuhan Li, Yishun Dou, Xuanhong Chen, Bingbing Ni, Yilin Sun, Yutian
Liu, Fuzhen Wang
- Abstract summary: We develop a generalized 3D shape generation prior model tailored for multiple 3D tasks.
These designs jointly equip our proposed 3D shape prior model with high-fidelity, diverse features as well as the capability of cross-modality alignment.
- Score: 32.3773514247982
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop a generalized 3D shape generation prior model tailored for
multiple 3D tasks, including unconditional shape generation, point cloud
completion, and cross-modality shape generation. On one hand, to precisely
capture fine local shape details, a vector-quantized variational
autoencoder (VQ-VAE) is utilized to index local geometry from a compactly
learned codebook based on a broad set of task training data. On the other hand,
a discrete diffusion generator is introduced to model the inherent structural
dependencies among the resulting tokens. Meanwhile, a multi-frequency fusion
module (MFM) is developed to suppress high-frequency shape feature
fluctuations, guided by multi-frequency contextual information. These
designs jointly equip our proposed 3D shape prior model with high-fidelity,
diverse features as well as the capability of cross-modality alignment, and
extensive experiments demonstrate superior performance on various 3D
shape generation tasks.
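As an illustration of the two components the abstract describes, below is a minimal PyTorch sketch of (1) a VQ-VAE-style nearest-neighbor codebook lookup that discretizes local shape features into part tokens, and (2) one forward-corruption step of a discrete diffusion over those tokens, here using an absorbing [MASK] state. Tensor shapes, the corruption scheme, and all names are illustrative assumptions, not the paper's exact configuration.

```python
import torch

def quantize(features: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    """Map each local feature in (N, D) to the index of its nearest (K, D) codebook entry."""
    dists = torch.cdist(features, codebook)  # (N, K) pairwise Euclidean distances
    return dists.argmin(dim=-1)              # (N,) discrete part tokens

def corrupt_tokens(tokens: torch.Tensor, t: float, mask_id: int) -> torch.Tensor:
    """Absorbing-state forward step: replace each token by [MASK] with probability t."""
    replace = torch.rand(tokens.shape) < t
    return torch.where(replace, torch.full_like(tokens, mask_id), tokens)

# Toy usage: 512 local features of width 64, a 1024-entry codebook,
# with index 1024 reserved as the (assumed) [MASK] token.
feats = torch.randn(512, 64)
codebook = torch.randn(1024, 64)
tokens = quantize(feats, codebook)
noisy = corrupt_tokens(tokens, t=0.3, mask_id=1024)
```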
Related papers
- NeuSDFusion: A Spatial-Aware Generative Model for 3D Shape Completion, Reconstruction, and Generation [52.772319840580074]
3D shape generation aims to produce innovative 3D content adhering to specific conditions and constraints.
Existing methods often decompose 3D shapes into a sequence of localized components, treating each element in isolation.
We introduce a novel spatial-aware 3D shape generation framework that leverages 2D plane representations for enhanced 3D shape modeling.
arXiv Detail & Related papers (2024-03-27T04:09:34Z)
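The "2D plane representations" above are commonly realized as a triplane: a 3D query point is projected onto three axis-aligned feature planes and the bilinearly sampled features are fused. The sketch below illustrates that general idea under assumed shapes and a simple summation fusion; it is not NeuSDFusion's actual interface.

```python
import torch
import torch.nn.functional as F

def sample_triplane(planes: torch.Tensor, xyz: torch.Tensor) -> torch.Tensor:
    """planes: (3, C, H, W) feature planes for XY, XZ, YZ; xyz: (N, 3) in [-1, 1]."""
    coords = [xyz[:, [0, 1]], xyz[:, [0, 2]], xyz[:, [1, 2]]]  # per-plane projections
    feats = []
    for plane, uv in zip(planes, coords):
        grid = uv.view(1, 1, -1, 2)                    # (1, 1, N, 2) sampling grid
        f = F.grid_sample(plane.unsqueeze(0), grid,    # (1, C, 1, N) sampled features
                          align_corners=True)
        feats.append(f.view(plane.shape[0], -1).t())   # (N, C)
    return sum(feats)                                  # fuse by summation

points = torch.rand(64, 3) * 2 - 1                     # queries in [-1, 1]^3
tri = torch.randn(3, 32, 128, 128)
features = sample_triplane(tri, points)                # (64, 32)
```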
- Pushing Auto-regressive Models for 3D Shape Generation at Capacity and Scalability [118.26563926533517]
Auto-regressive models have achieved impressive results in 2D image generation by modeling joint distributions in grid space.
We extend auto-regressive models to 3D domains, seeking stronger 3D shape generation by simultaneously improving the capacity and scalability of auto-regressive models.
arXiv Detail & Related papers (2024-02-19T15:33:09Z)
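To make "modeling joint distributions in grid space" concrete, here is a toy auto-regressive sketch in which a flattened 3D token grid is generated one token at a time by a tiny causal transformer. The architecture, vocabulary size, and start-token convention are assumptions for illustration, not the paper's model.

```python
import torch
import torch.nn as nn

VOCAB, SEQ, DIM = 512, 8 * 8 * 8, 64   # vocabulary; flattened 8^3 grid; width

class TinyAR(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        self.pos = nn.Embedding(SEQ, DIM)
        layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.body = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:  # tokens: (B, T)
        T = tokens.shape[1]
        x = self.emb(tokens) + self.pos.weight[:T]
        mask = nn.Transformer.generate_square_subsequent_mask(T)  # causal mask
        return self.head(self.body(x, mask=mask))                 # (B, T, VOCAB)

# Greedy sampling of a shape-token grid, one token at a time.
model = TinyAR().eval()
seq = torch.zeros(1, 1, dtype=torch.long)   # start token (assumed id 0)
with torch.no_grad():
    for _ in range(SEQ - 1):
        nxt = model(seq)[:, -1].argmax(-1, keepdim=True)
        seq = torch.cat([seq, nxt], dim=1)
```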
- Pushing the Limits of 3D Shape Generation at Scale [65.24420181727615]
We present a significant breakthrough in 3D shape generation by scaling it to unprecedented dimensions.
We have developed Argus-3D, a model with 3.6 billion trainable parameters, making it the largest 3D shape generation model to date.
arXiv Detail & Related papers (2023-06-20T13:01:19Z)
- Neural Wavelet-domain Diffusion for 3D Shape Generation, Inversion, and Manipulation [54.09274684734721]
We present a new approach for 3D shape generation, inversion, and manipulation, through a direct generative modeling on a continuous implicit representation in wavelet domain.
Specifically, we propose a compact wavelet representation with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets.
We may jointly train an encoder network to learn a latent space for inverting shapes, enabling a rich variety of whole-shape and region-aware shape manipulations.
arXiv Detail & Related papers (2023-02-01T02:47:53Z)
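The coarse/detail wavelet representation described above can be sketched with PyWavelets: a single-level 3D biorthogonal wavelet transform of a truncated signed distance function (TSDF) volume yields one coarse approximation volume plus detail coefficient volumes, and the pair reconstructs the input. The toy sphere TSDF and the "bior2.2" filter choice are illustrative assumptions.

```python
import numpy as np
import pywt

# Toy TSDF of a sphere on a 64^3 grid, truncated to [-0.1, 0.1].
g = np.linspace(-1, 1, 64)
x, y, z = np.meshgrid(g, g, g, indexing="ij")
tsdf = np.clip(np.sqrt(x**2 + y**2 + z**2) - 0.5, -0.1, 0.1)

# Single-level 3D wavelet transform with a biorthogonal filter pair.
coeffs = pywt.dwtn(tsdf, "bior2.2")
coarse = coeffs["aaa"]                               # low-frequency structure, ~half resolution
details = {k: v for k, v in coeffs.items() if k != "aaa"}

# Reconstruction from coarse + detail volumes is lossless up to float error.
recon = pywt.idwtn(coeffs, "bior2.2")
print(coarse.shape, np.abs(recon - tsdf).max())
```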
- Neural Wavelet-domain Diffusion for 3D Shape Generation [52.038346313823524]
This paper presents a new approach for 3D shape generation, enabling direct generative modeling on a continuous implicit representation in wavelet domain.
Specifically, we propose a compact wavelet representation with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets.
arXiv Detail & Related papers (2022-09-19T02:51:48Z)
- Learning to Generate 3D Shapes from a Single Example [28.707149807472685]
We present a multi-scale GAN-based model designed to capture the input shape's geometric features across a range of spatial scales.
We train our generative model on a voxel pyramid of the reference shape, without the need for any external supervision or manual annotation.
The resulting shapes present variations across different scales, and at the same time retain the global structure of the reference shape.
arXiv Detail & Related papers (2022-08-05T01:05:32Z)
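A hedged sketch of the "voxel pyramid" input described above: the single reference shape is repeatedly downsampled so that a multi-scale GAN can learn statistics at each spatial scale. The pooling choice and number of levels are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def voxel_pyramid(vox: torch.Tensor, levels: int = 4) -> list[torch.Tensor]:
    """vox: (1, 1, D, H, W) occupancy in [0, 1]; returns a coarse-to-fine list."""
    pyr = [vox]
    for _ in range(levels - 1):
        # Average pooling keeps fractional occupancy at coarser scales.
        pyr.append(F.avg_pool3d(pyr[-1], kernel_size=2))
    return pyr[::-1]                       # coarsest first

shape = (torch.rand(1, 1, 64, 64, 64) > 0.7).float()
for level in voxel_pyramid(shape):
    print(tuple(level.shape[2:]))          # (8, 8, 8) ... (64, 64, 64)
```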
- Point Cloud Generation with Continuous Conditioning [2.9238500578557303]
We propose a novel generative adversarial network (GAN) setup that generates 3D point cloud shapes conditioned on a continuous parameter.
In an exemplary application, we use this to guide the generative process to create a 3D object with a custom-fit shape.
arXiv Detail & Related papers (2022-02-17T09:05:10Z)
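The continuous-conditioning idea above is typically realized by concatenating the scalar condition to the generator's latent code. The following sketch shows a minimal point-cloud generator built that way; layer sizes and names are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CondPointGenerator(nn.Module):
    def __init__(self, z_dim: int = 64, n_points: int = 1024):
        super().__init__()
        self.n_points = n_points
        self.net = nn.Sequential(
            nn.Linear(z_dim + 1, 256), nn.ReLU(),    # +1 for the continuous condition
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, n_points * 3),
        )

    def forward(self, z: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
        """z: (B, z_dim) latent; c: (B, 1) continuous condition -> (B, N, 3) points."""
        out = self.net(torch.cat([z, c], dim=-1))
        return out.view(-1, self.n_points, 3)

gen = CondPointGenerator()
points = gen(torch.randn(2, 64), torch.tensor([[0.3], [0.9]]))  # (2, 1024, 3)
```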
- SP-GAN: Sphere-Guided 3D Shape Generation and Manipulation [50.53931728235875]
We present SP-GAN, a new unsupervised sphere-guided generative model for direct synthesis of 3D shapes in the form of point clouds.
Compared with existing models, SP-GAN is able to synthesize diverse and high-quality shapes with fine details.
arXiv Detail & Related papers (2021-08-10T06:49:45Z)
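SP-GAN's sphere guidance can be sketched as a generator that displaces a fixed set of points sampled on the unit sphere, conditioned on a per-shape latent code, so every output point keeps a stable anchor on the sphere. The MLP below is a minimal illustration, not SP-GAN's actual network.

```python
import torch
import torch.nn as nn

def sphere_points(n: int) -> torch.Tensor:
    p = torch.randn(n, 3)
    return p / p.norm(dim=-1, keepdim=True)    # (n, 3) on the unit sphere

class SphereGuidedGenerator(nn.Module):
    def __init__(self, z_dim: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + z_dim, 128), nn.ReLU(),
            nn.Linear(128, 3),                  # per-point displacement
        )

    def forward(self, sphere: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        """sphere: (N, 3) anchors; z: (z_dim,) shared shape code -> deformed (N, 3)."""
        zz = z.expand(sphere.shape[0], -1)      # broadcast code to every point
        return sphere + self.mlp(torch.cat([sphere, zz], dim=-1))

gen = SphereGuidedGenerator()
cloud = gen(sphere_points(2048), torch.randn(64))   # (2048, 3)
```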