Learning Effective NeRFs and SDFs Representations with 3D Generative
Adversarial Networks for 3D Object Generation: Technical Report for ICCV 2023
OmniObject3D Challenge
- URL: http://arxiv.org/abs/2309.16110v1
- Date: Thu, 28 Sep 2023 02:23:46 GMT
- Title: Learning Effective NeRFs and SDFs Representations with 3D Generative
Adversarial Networks for 3D Object Generation: Technical Report for ICCV 2023
OmniObject3D Challenge
- Authors: Zheyuan Yang, Yibo Liu, Guile Wu, Tongtong Cao, Yuan Ren, Yang Liu,
Bingbing Liu
- Abstract summary: We present a solution for 3D object generation of ICCV 2023 OmniObject3D Challenge.
We study learning effective NeRFs and SDFs representations with 3D Generative Adversarial Networks (GANs) for 3D object generation.
- Score: 28.425698777641482
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this technical report, we present a solution for the 3D object
generation track of the ICCV 2023 OmniObject3D Challenge. In recent years, 3D
object generation has made great progress and achieved promising results, but
it remains a challenging task due to the difficulty of generating complex,
textured and high-fidelity results. To address this problem, we study learning
effective NeRF and SDF representations with 3D Generative Adversarial Networks
(GANs) for 3D object generation. Specifically, inspired by recent works, we use
efficient geometry-aware 3D GANs as the backbone, incorporating label embedding
and color mapping, which enables the model to be trained on different
taxonomies simultaneously. Then, through a decoder, we aggregate the resulting
features to generate Neural Radiance Field (NeRF) based representations for
rendering high-fidelity synthetic images. Meanwhile, we optimize Signed
Distance Functions (SDFs) to effectively represent objects as 3D meshes. In
addition, we observe that this model can be effectively trained with only a few
images of each object from a variety of classes, rather than requiring a large
number of images per object or one model per class. With this pipeline, we can
optimize an effective model for 3D object generation. This solution placed
among the final top three in the ICCV 2023 OmniObject3D Challenge.
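As a concrete illustration of the pipeline described in the abstract, below is a minimal PyTorch-style sketch of the decoding stage: a class-label embedding conditions the generator, and a small MLP maps aggregated per-point features to color (via a color mapping), NeRF density, and an SDF value that can be meshed with marching cubes. All module names, dimensions, and the use of scikit-image are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch (assumed PyTorch implementation) of the label-conditioned
# NeRF/SDF decoding stage described in the abstract. Dimensions and module
# names are illustrative placeholders, not the authors' actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LabelConditionedDecoder(nn.Module):
    """Decodes per-point features into RGB, NeRF density, and an SDF value."""

    def __init__(self, feat_dim=32, z_dim=512, num_classes=100, hidden=64):
        # num_classes is a placeholder; set it to the number of taxonomies
        # in the training set so one model covers all classes jointly.
        super().__init__()
        self.label_embed = nn.Embedding(num_classes, z_dim)
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 + 1 + 1),  # RGB + sigma + sdf
        )

    def condition(self, z, labels):
        # Concatenate the latent code with the class embedding; the combined
        # code would drive an EG3D-style backbone that produces point features.
        return torch.cat([z, self.label_embed(labels)], dim=-1)

    def forward(self, point_features):
        out = self.mlp(point_features)
        rgb = torch.sigmoid(out[..., :3])   # color mapping to [0, 1]
        sigma = F.softplus(out[..., 3:4])   # density for NeRF volume rendering
        sdf = out[..., 4:5]                 # signed distance for mesh extraction
        return rgb, sigma, sdf


def extract_mesh_from_sdf(decoder, features_on_grid, grid_res=128):
    """Evaluate the SDF on a dense grid and mesh its zero level set."""
    from skimage import measure  # marching cubes (assumed available)

    with torch.no_grad():
        _, _, sdf = decoder(features_on_grid)            # (grid_res**3, 1)
    volume = sdf.view(grid_res, grid_res, grid_res).cpu().numpy()
    verts, faces, _, _ = measure.marching_cubes(volume, level=0.0)
    return verts, faces
```

In this sketch, rendering would integrate the predicted (rgb, sigma) values along camera rays as in standard NeRF volume rendering to produce the synthetic images, while the SDF branch yields explicit 3D meshes via marching cubes.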
Related papers
- DIRECT-3D: Learning Direct Text-to-3D Generation on Massive Noisy 3D Data [50.164670363633704]
We present DIRECT-3D, a diffusion-based 3D generative model for creating high-quality 3D assets from text prompts.
Our model is directly trained on extensive noisy and unaligned 'in-the-wild' 3D assets.
We achieve state-of-the-art performance in both single-class generation and text-to-3D generation.
arXiv Detail & Related papers (2024-06-06T17:58:15Z) - ComboVerse: Compositional 3D Assets Creation Using Spatially-Aware Diffusion Guidance [76.7746870349809]
We present ComboVerse, a 3D generation framework that produces high-quality 3D assets with complex compositions by learning to combine multiple models.
Our proposed framework emphasizes spatial alignment of objects, compared with standard score distillation sampling.
arXiv Detail & Related papers (2024-03-19T03:39:43Z) - Pushing the Limits of 3D Shape Generation at Scale [65.24420181727615]
We present a significant breakthrough in 3D shape generation by scaling it to unprecedented dimensions.
We have developed Argus-3D, a model with 3.6 billion trainable parameters, making it the largest 3D shape generation model to date.
arXiv Detail & Related papers (2023-06-20T13:01:19Z) - NeRF-GAN Distillation for Efficient 3D-Aware Generation with
Convolutions [97.27105725738016]
The integration of Neural Radiance Fields (NeRFs) and generative models, such as Generative Adversarial Networks (GANs), has transformed 3D-aware generation from single-view images.
We propose a simple and effective method, based on re-using the well-disentangled latent space of a pre-trained NeRF-GAN in a pose-conditioned convolutional network to directly generate 3D-consistent images corresponding to the underlying 3D representations.
arXiv Detail & Related papers (2023-03-22T18:59:48Z) - 3D Neural Field Generation using Triplane Diffusion [37.46688195622667]
We present an efficient diffusion-based model for 3D-aware generation of neural fields.
Our approach pre-processes training data, such as ShapeNet meshes, by converting them to continuous occupancy fields.
We demonstrate state-of-the-art results on 3D generation on several object classes from ShapeNet.
arXiv Detail & Related papers (2022-11-30T01:55:52Z) - Deep Generative Models on 3D Representations: A Survey [81.73385191402419]
Generative models aim to learn the distribution of observed data by generating new instances.
Recently, researchers have started to shift focus from 2D to 3D space.
However, representing 3D data poses significantly greater challenges.
arXiv Detail & Related papers (2022-10-27T17:59:50Z) - Efficient Geometry-aware 3D Generative Adversarial Networks [50.68436093869381]
Existing 3D GANs are either compute-intensive or make approximations that are not 3D-consistent.
In this work, we improve the computational efficiency and image quality of 3D GANs without overly relying on these approximations.
We introduce an expressive hybrid explicit-implicit network architecture that synthesizes not only high-resolution multi-view-consistent images in real time but also produces high-quality 3D geometry.
arXiv Detail & Related papers (2021-12-15T08:01:43Z)