3DShape2VecSet: A 3D Shape Representation for Neural Fields and
Generative Diffusion Models
- URL: http://arxiv.org/abs/2301.11445v3
- Date: Mon, 1 May 2023 22:19:24 GMT
- Title: 3DShape2VecSet: A 3D Shape Representation for Neural Fields and
Generative Diffusion Models
- Authors: Biao Zhang, Jiapeng Tang, Matthias Niessner, Peter Wonka
- Abstract summary: We introduce 3DShape2VecSet, a novel shape representation for neural fields designed for generative diffusion models.
Our results show improved performance in 3D shape encoding and 3D shape generative modeling tasks.
- Score: 42.928400751670935
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce 3DShape2VecSet, a novel shape representation for neural fields
designed for generative diffusion models. Our shape representation can encode
3D shapes given as surface models or point clouds, and represents them as
neural fields. The concept of neural fields has previously been combined with a
global latent vector, a regular grid of latent vectors, or an irregular grid of
latent vectors. Our new representation encodes neural fields on top of a set of
vectors. We draw from multiple concepts, such as the radial basis function
representation and the cross attention and self-attention function, to design a
learnable representation that is especially suitable for processing with
transformers. Our results show improved performance in 3D shape encoding and 3D
shape generative modeling tasks. We demonstrate a wide variety of generative
applications: unconditioned generation, category-conditioned generation,
text-conditioned generation, point-cloud completion, and image-conditioned
generation.
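To make the idea concrete, below is a minimal, illustrative PyTorch sketch of a set-latent shape representation in the spirit of the abstract: a point cloud is cross-attended into a learnable set of latent vectors, the set is refined with self-attention, and occupancy is read out at arbitrary 3D coordinates by a cross-attention decoder. This is not the authors' released implementation; all module names, layer sizes, and the Fourier positional embedding (standing in for the radial-basis-style coordinate encoding) are assumptions made for illustration.

```python
# Illustrative sketch only (not the authors' code): encode a point cloud into a
# set of latent vectors with cross-attention, refine them with self-attention,
# and read out occupancy at query coordinates with a cross-attention decoder.
import torch
import torch.nn as nn


def fourier_embed(x, num_freqs=8):
    # Map (B, N, 3) coordinates to a (B, N, 6 * num_freqs) positional code.
    freqs = torch.pi * (2.0 ** torch.arange(num_freqs, device=x.device, dtype=x.dtype))
    angles = x.unsqueeze(-1) * freqs                          # (B, N, 3, F)
    return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(-2)


class SetLatentShapeModel(nn.Module):
    def __init__(self, dim=256, num_latents=512, num_freqs=8):
        super().__init__()
        self.point_proj = nn.Linear(6 * num_freqs, dim)
        self.latents = nn.Parameter(0.02 * torch.randn(num_latents, dim))
        self.enc_cross = nn.MultiheadAttention(dim, 8, batch_first=True)
        self.self_attn = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, 8, dim_feedforward=4 * dim,
                                       batch_first=True),
            num_layers=4)
        self.dec_cross = nn.MultiheadAttention(dim, 8, batch_first=True)
        self.occ_head = nn.Linear(dim, 1)

    def encode(self, points):
        # points: (B, N, 3) surface samples -> latent set (B, num_latents, dim)
        feats = self.point_proj(fourier_embed(points))
        queries = self.latents.unsqueeze(0).expand(points.shape[0], -1, -1)
        latents, _ = self.enc_cross(queries, feats, feats)    # cross-attention
        return self.self_attn(latents)                        # self-attention

    def decode(self, latents, xyz):
        # xyz: (B, Q, 3) query coordinates -> occupancy logits (B, Q)
        q = self.point_proj(fourier_embed(xyz))
        out, _ = self.dec_cross(q, latents, latents)          # field read-out
        return self.occ_head(out).squeeze(-1)


# Usage: reconstruct an occupancy field from a surface point cloud.
model = SetLatentShapeModel()
pts = torch.rand(2, 2048, 3)               # input surface point clouds
query_xyz = torch.rand(2, 4096, 3)         # points at which to query occupancy
logits = model.decode(model.encode(pts), query_xyz)   # (2, 4096)
```

In the full method, a diffusion model would then be trained over this set of latent vectors to enable the generative applications listed above; that stage is omitted from this sketch.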
Related papers
- MeshXL: Neural Coordinate Field for Generative 3D Foundation Models [51.1972329762843]
We present a family of generative pre-trained auto-regressive models, which addresses the process of 3D mesh generation with modern large language model approaches.
MeshXL can generate high-quality 3D meshes and can also serve as a foundation model for various downstream applications.
arXiv Detail & Related papers (2024-05-31T14:35:35Z)
- GaussianCube: A Structured and Explicit Radiance Representation for 3D Generative Modeling [55.05713977022407]
We introduce a radiance representation that is both structured and fully explicit and thus greatly facilitates 3D generative modeling.
We derive GaussianCube by first using a novel densification-constrained Gaussian fitting algorithm, which yields high-accuracy fitting.
Experiments on unconditional and class-conditioned object generation, digital avatar creation, and text-to-3D synthesis all show that our model achieves state-of-the-art generation results.
arXiv Detail & Related papers (2024-03-28T17:59:50Z)
- Pushing Auto-regressive Models for 3D Shape Generation at Capacity and Scalability [118.26563926533517]
Auto-regressive models have achieved impressive results in 2D image generation by modeling joint distributions in grid space.
We extend auto-regressive models to the 3D domain and aim for stronger 3D shape generation by improving auto-regressive models in both capacity and scalability simultaneously.
arXiv Detail & Related papers (2024-02-19T15:33:09Z)
- NeRF-GAN Distillation for Efficient 3D-Aware Generation with Convolutions [97.27105725738016]
The integration of Neural Radiance Fields (NeRFs) and generative models, such as Generative Adversarial Networks (GANs), has transformed 3D-aware generation from single-view images.
We propose a simple and effective method, based on re-using the well-disentangled latent space of a pre-trained NeRF-GAN in a pose-conditioned convolutional network to directly generate 3D-consistent images corresponding to the underlying 3D representations.
arXiv Detail & Related papers (2023-03-22T18:59:48Z)
- FullFormer: Generating Shapes Inside Shapes [9.195909458772187]
We present the first implicit generative model that facilitates the generation of complex 3D shapes with rich internal geometric details.
Our model uses unsigned distance fields to represent nested 3D surfaces, allowing it to learn from non-watertight mesh data.
We demonstrate that our model achieves state-of-the-art point cloud generation results on the popular 'Cars', 'Planes', and 'Chairs' classes of the ShapeNet dataset.
arXiv Detail & Related papers (2023-03-20T16:19:23Z)
- 3D-LDM: Neural Implicit 3D Shape Generation with Latent Diffusion Models [8.583859530633417]
We propose a diffusion model for neural implicit representations of 3D shapes that operates in the latent space of an auto-decoder.
This allows us to generate diverse and high quality 3D surfaces.
arXiv Detail & Related papers (2022-12-01T20:00:00Z)
- 3D Neural Field Generation using Triplane Diffusion [37.46688195622667]
We present an efficient diffusion-based model for 3D-aware generation of neural fields.
Our approach pre-processes training data, such as ShapeNet meshes, by converting them to continuous occupancy fields.
We demonstrate state-of-the-art results on 3D generation on several object classes from ShapeNet.
arXiv Detail & Related papers (2022-11-30T01:55:52Z) - Deep Generative Models on 3D Representations: A Survey [81.73385191402419]
Generative models aim to learn the distribution of observed data by generating new instances.
Recently, researchers have started to shift focus from 2D to 3D space; however, representing 3D data poses significantly greater challenges.
arXiv Detail & Related papers (2022-10-27T17:59:50Z)
- 3DILG: Irregular Latent Grids for 3D Generative Modeling [44.16807313707137]
We propose a new representation for encoding 3D shapes as neural fields.
The representation is designed to be compatible with the transformer architecture and to benefit both shape reconstruction and shape generation.
arXiv Detail & Related papers (2022-05-27T11:29:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.