SDF-3DGAN: A 3D Object Generative Method Based on Implicit Signed
Distance Function
- URL: http://arxiv.org/abs/2303.06821v1
- Date: Mon, 13 Mar 2023 02:48:54 GMT
- Title: SDF-3DGAN: A 3D Object Generative Method Based on Implicit Signed
Distance Function
- Authors: Lutao Jiang, Ruyi Ji, Libo Zhang
- Abstract summary: We develop a new method, SDF-3DGAN, for 3D object generation and 3D-aware image synthesis tasks.
We apply SDF for a higher-quality representation of 3D objects in space and design a new SDF neural renderer with higher efficiency and accuracy.
- Score: 10.199463450025391
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we develop a new method, termed SDF-3DGAN, for 3D
object generation and 3D-aware image synthesis tasks, which introduces the
implicit Signed Distance Function (SDF) as the 3D object representation in the
generative field. We apply SDF for a higher-quality representation of 3D
objects in space and design a new SDF neural renderer with higher efficiency
and accuracy. To train on 2D images only, we first generate objects,
represented by SDFs, from a Gaussian distribution, then render them to 2D
images and train them against the 2D images in the dataset with a GAN
objective. The new rendering method exploits the mathematical properties of
the SDF to relieve the computational pressure of previous SDF neural
renderers. Specifically, our renderer resolves the sampling ambiguity that
arises when too few sampling points are used, i.e., it needs fewer points to
accomplish a higher-quality sampling task in the rendering pipeline. Because
this pipeline locates the surface easily, we can apply a normal loss on it to
control the smoothness of the generated object surface, which gives our method
much higher generation quality. Quantitative and qualitative experiments
conducted on public benchmarks demonstrate favorable performance against
state-of-the-art methods on the 3D object generation and 3D-aware image
synthesis tasks. Our code will be released at
https://github.com/lutao2021/SDF-3DGAN.
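The claim that the renderer locates the surface with fewer sampling points follows from the defining property of an SDF: the value at a point lower-bounds the distance to the nearest surface, so a ray can safely advance by that amount at each step. The sketch below illustrates this general idea as plain sphere tracing in PyTorch; it is an illustrative assumption, not the paper's released renderer, and the names (`sphere_trace`, `sdf_fn`) are hypothetical.

```python
import torch

def sphere_trace(sdf_fn, ray_o, ray_d, n_steps=16, eps=1e-3):
    """Locate ray-surface intersections of an implicit SDF.

    Because a signed distance value lower-bounds the distance to the
    nearest surface, each ray can advance by sdf(x) at every step
    instead of taking many uniform volume samples.

    sdf_fn : callable mapping (N, 3) points to (N, 1) signed distances,
             e.g. a generator network conditioned on a latent code.
    ray_o, ray_d : (N, 3) ray origins and unit directions.
    Returns estimated surface points (N, 3) and a boolean hit mask (N,).
    """
    t = torch.zeros(ray_o.shape[0], 1, device=ray_o.device)
    hit = torch.zeros(ray_o.shape[0], dtype=torch.bool, device=ray_o.device)
    for _ in range(n_steps):
        x = ray_o + t * ray_d            # current sample on each ray
        d = sdf_fn(x)                    # signed distance at the sample
        hit = hit | (d.abs().squeeze(-1) < eps)
        t = t + d.clamp(min=0.0)         # march forward by the SDF value
    return ray_o + t * ray_d, hit
```

A handful of adaptive steps per ray can replace the dense uniform sampling a NeRF-style volume renderer would need, which is the source of the efficiency gain the abstract describes.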
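The normal loss mentioned in the abstract relies on another SDF property: the gradient of the SDF at a surface point is the surface normal, and it can be obtained with autograd. Below is a hedged sketch of one plausible smoothness formulation, comparing normals at a located surface point and at a slightly perturbed neighbour; the exact loss used in SDF-3DGAN may differ.

```python
import torch

def sdf_normals(sdf_fn, x):
    """Surface normals as the autograd gradient of the SDF at x."""
    x = x.detach().requires_grad_(True)
    d = sdf_fn(x)
    grad = torch.autograd.grad(d.sum(), x, create_graph=True)[0]
    return torch.nn.functional.normalize(grad, dim=-1)

def normal_smoothness_loss(sdf_fn, surf_pts, sigma=1e-2):
    """Penalise rapid changes of the normal around surface points,
    encouraging a smoother generated surface (an assumed formulation)."""
    n0 = sdf_normals(sdf_fn, surf_pts)
    n1 = sdf_normals(sdf_fn, surf_pts + sigma * torch.randn_like(surf_pts))
    return (n0 - n1).pow(2).sum(-1).mean()
```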
Related papers
- DIRECT-3D: Learning Direct Text-to-3D Generation on Massive Noisy 3D Data [50.164670363633704]
We present DIRECT-3D, a diffusion-based 3D generative model for creating high-quality 3D assets from text prompts.
Our model is trained directly on extensive noisy and unaligned 'in-the-wild' 3D assets.
We achieve state-of-the-art performance in both single-class generation and text-to-3D generation.
arXiv Detail & Related papers (2024-06-06T17:58:15Z) - What You See is What You GAN: Rendering Every Pixel for High-Fidelity
Geometry in 3D GANs [82.3936309001633]
3D-aware Generative Adversarial Networks (GANs) have shown remarkable progress in learning to generate multi-view-consistent images and 3D geometries.
Yet, the significant memory and computational costs of dense sampling in volume rendering have forced 3D GANs to adopt patch-based training or employ low-resolution rendering followed by 2D super-resolution post-processing.
We propose techniques to scale neural volume rendering to the much higher resolution of native 2D images, thereby resolving fine-grained 3D geometry with unprecedented detail.
arXiv Detail & Related papers (2024-01-04T18:50:38Z) - Text-to-3D Generation with Bidirectional Diffusion using both 2D and 3D
priors [16.93758384693786]
Bidirectional Diffusion (BiDiff) is a unified framework that incorporates both a 3D and a 2D diffusion process.
Our model achieves high-quality, diverse, and scalable 3D generation.
arXiv Detail & Related papers (2023-12-07T10:00:04Z) - NeRF-GAN Distillation for Efficient 3D-Aware Generation with
Convolutions [97.27105725738016]
The integration of Neural Radiance Fields (NeRFs) and generative models, such as Generative Adversarial Networks (GANs), has transformed 3D-aware generation from single-view images.
We propose a simple and effective method, based on re-using the well-disentangled latent space of a pre-trained NeRF-GAN in a pose-conditioned convolutional network to directly generate 3D-consistent images corresponding to the underlying 3D representations.
arXiv Detail & Related papers (2023-03-22T18:59:48Z) - GAN2X: Non-Lambertian Inverse Rendering of Image GANs [85.76426471872855]
We present GAN2X, a new method for unsupervised inverse rendering that only uses unpaired images for training.
Unlike previous Shape-from-GAN approaches that mainly focus on 3D shapes, we make the first attempt to also recover non-Lambertian material properties by exploiting the pseudo-paired data generated by a GAN.
Experiments demonstrate that GAN2X can accurately decompose 2D images into 3D shape, albedo, and specular properties for different object categories, and achieves state-of-the-art performance for unsupervised single-view 3D face reconstruction.
arXiv Detail & Related papers (2022-06-18T16:58:49Z) - GRAM-HD: 3D-Consistent Image Generation at High Resolution with
Generative Radiance Manifolds [28.660893916203747]
This paper proposes a novel 3D-aware GAN that can generate high-resolution images (up to 1024×1024) while keeping strict 3D consistency as in volume rendering.
Our motivation is to achieve super-resolution directly in the 3D space to preserve 3D consistency.
Experiments on FFHQ and AFHQv2 datasets show that our method can produce high-quality 3D-consistent results.
arXiv Detail & Related papers (2022-06-15T02:35:51Z) - DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance
Fields for Articulated Avatars [92.37436369781692]
We present DRaCoN, a framework for learning full-body volumetric avatars.
It exploits the advantages of both the 2D and 3D neural rendering techniques.
Experiments on the challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T17:59:15Z) - StyleSDF: High-Resolution 3D-Consistent Image and Geometry Generation [34.01352591390208]
We introduce a high resolution, 3D-consistent image and shape generation technique which we call StyleSDF.
Our method is trained on single-view RGB data only, and stands on the shoulders of StyleGAN2 for image generation.
arXiv Detail & Related papers (2021-12-21T18:45:45Z) - Improved Modeling of 3D Shapes with Multi-view Depth Maps [48.8309897766904]
We present a general-purpose framework for modeling 3D shapes using CNNs.
Using just a single depth image of the object, we can output a dense multi-view depth map representation of 3D objects.
arXiv Detail & Related papers (2020-09-07T17:58:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.