Class-Continuous Conditional Generative Neural Radiance Field
- URL: http://arxiv.org/abs/2301.00950v3
- Date: Tue, 9 Jan 2024 06:54:15 GMT
- Title: Class-Continuous Conditional Generative Neural Radiance Field
- Authors: Jiwook Kim and Minhyeok Lee
- Abstract summary: We introduce a novel model, called Class-Continuous Conditional Generative NeRF ($\text{C}^{3}$G-NeRF), which can synthesize conditionally manipulated 3D-consistent images.
Our model shows strong 3D-consistency with fine details and smooth interpolation in conditional feature manipulation.
- Score: 4.036530158875673
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The 3D-aware image synthesis focuses on conserving spatial consistency
besides generating high-resolution images with fine details. Recently, Neural
Radiance Field (NeRF) has been introduced for synthesizing novel views with low
computational cost and superior performance. While several works investigate a
generative NeRF and show remarkable achievement, they cannot handle conditional
and continuous feature manipulation in the generation procedure. In this work,
we introduce a novel model, called Class-Continuous Conditional Generative NeRF
($\text{C}^{3}$G-NeRF), which can synthesize conditionally manipulated
photorealistic 3D-consistent images by projecting conditional features to the
generator and the discriminator. The proposed $\text{C}^{3}$G-NeRF is evaluated
with three image datasets, AFHQ, CelebA, and Cars. As a result, our model shows
strong 3D-consistency with fine details and smooth interpolation in conditional
feature manipulation. For instance, $\text{C}^{3}$G-NeRF exhibits a Fréchet
Inception Distance (FID) of 7.64 in 3D-aware face image synthesis with a
$\text{128}^{2}$ resolution. Additionally, we provide FIDs of generated
3D-aware images of each class of the datasets as it is possible to synthesize
class-conditional images with $\text{C}^{3}$G-NeRF.
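Since the key mechanism is projecting conditional features into both networks, a minimal PyTorch sketch of the idea may help: the generator below concatenates a learned class embedding to the latent code fed to the radiance-field MLP, and the discriminator scores images with a projection term over the same embedding. This is a generic reconstruction under those assumptions, not the paper's actual architecture; all module names are hypothetical.

```python
import torch
import torch.nn as nn

class ConditionalNeRFGenerator(nn.Module):
    """Toy conditional radiance field: a class embedding is concatenated
    to the latent code, then an MLP maps (x, z, class) -> (rgb, sigma)."""
    def __init__(self, n_classes, z_dim=64, c_dim=16, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(n_classes, c_dim)
        self.mlp = nn.Sequential(
            nn.Linear(3 + z_dim + c_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 3 RGB channels + 1 density
        )

    def forward(self, x, z, y):
        # x: (B, N, 3) sample points, z: (B, z_dim) latents, y: (B,) class ids
        c = self.embed(y)                           # (B, c_dim)
        zc = torch.cat([z, c], dim=-1)              # conditional feature
        zc = zc.unsqueeze(1).expand(-1, x.shape[1], -1)  # broadcast to points
        out = self.mlp(torch.cat([x, zc], dim=-1))
        return out[..., :3].sigmoid(), out[..., 3:].relu()  # rgb, sigma

class ProjectionDiscriminator(nn.Module):
    """Projection discriminator: score = w.phi(img) + <embed(y), phi(img)>."""
    def __init__(self, n_classes, feat_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.LeakyReLU(0.2),
        )
        self.linear = nn.Linear(feat_dim, 1)
        self.embed = nn.Embedding(n_classes, feat_dim)

    def forward(self, img, y):
        phi = self.features(img)
        return self.linear(phi) + (self.embed(y) * phi).sum(dim=1, keepdim=True)
```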
Related papers
- ZIGNeRF: Zero-shot 3D Scene Representation with Invertible Generative
Neural Radiance Fields [2.458437232470188]
We introduce ZIGNeRF, an innovative model that performs zero-shot Generative Adversarial Network (GAN) inversion for the generation of multi-view images from a single out-of-domain image.
ZIGNeRF is capable of disentangling the object from the background and executing 3D operations such as 360-degree rotation or depth and horizontal translation.
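The inversion step can be pictured with a generic optimization-based GAN-inversion loop: freeze a pretrained 3D-aware generator and fit a latent code that reproduces the input image, after which the code can be re-rendered from novel poses. A minimal sketch under those assumptions (ZIGNeRF itself uses a learned invertible encoder rather than this plain loop):

```python
import torch
import torch.nn.functional as F

def invert_image(generator, target, pose, z_dim=128, steps=500, lr=0.05):
    """Optimize a latent code so a frozen 3D-aware generator reproduces
    `target` at camera `pose` (plain optimization-based GAN inversion)."""
    z = torch.zeros(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        recon = generator(z, pose)        # generator frozen; grads reach z only
        loss = F.mse_loss(recon, target)
        loss.backward()
        opt.step()
    return z.detach()                     # reusable for 360-degree re-rendering
```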
arXiv Detail & Related papers (2023-06-05T09:41:51Z)
- NeRF-GAN Distillation for Efficient 3D-Aware Generation with Convolutions [97.27105725738016]
The integration of Neural Radiance Fields (NeRFs) and generative models, such as Generative Adversarial Networks (GANs), has transformed 3D-aware generation from single-view images.
We propose a simple and effective method, based on re-using the well-disentangled latent space of a pre-trained NeRF-GAN in a pose-conditioned convolutional network to directly generate 3D-consistent images corresponding to the underlying 3D representations.
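The distillation recipe can be sketched as follows: a pose-conditioned convolutional student is trained to match the frozen NeRF-GAN teacher's render for the same latent code and pose, so 3D consistency is inherited without volume rendering at inference. The architecture and loss below are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PoseConditionedCNN(nn.Module):
    """Maps (latent, camera pose) to an image with plain convolutions."""
    def __init__(self, z_dim=128, pose_dim=6, size=32):
        super().__init__()
        self.size = size
        self.fc = nn.Linear(z_dim + pose_dim, 64 * (size // 4) ** 2)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z, pose):
        h = self.fc(torch.cat([z, pose], dim=-1))
        h = h.view(-1, 64, self.size // 4, self.size // 4)
        return self.net(h)

def distill_step(student, teacher_render, z, pose, opt):
    """One step: match the frozen NeRF-GAN teacher's render for (z, pose)."""
    opt.zero_grad()
    loss = F.mse_loss(student(z, pose), teacher_render)
    loss.backward()
    opt.step()
    return loss.item()
```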
arXiv Detail & Related papers (2023-03-22T18:59:48Z)
- NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion [107.67277084886929]
Novel view synthesis from a single image requires inferring occluded regions of objects and scenes whilst simultaneously maintaining semantic and physical consistency with the input.
We propose NerfDiff, which addresses this issue by distilling the knowledge of a 3D-aware conditional diffusion model (CDM) into NeRF through synthesizing and refining a set of virtual views at test time.
We further propose a novel NeRF-guided distillation algorithm that simultaneously generates 3D consistent virtual views from the CDM samples, and finetunes the NeRF based on the improved virtual views.
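A schematic of that test-time loop, assuming a `diffusion_refine` callable standing in for the 3D-aware CDM: render a virtual view with the current NeRF, refine it with the diffusion model, then fit the NeRF to the refined target. All names are placeholders.

```python
import torch
import torch.nn.functional as F

def nerf_guided_distillation(nerf, diffusion_refine, poses, input_view,
                             steps=100, lr=1e-3):
    """Alternate between (a) rendering virtual views with the current NeRF
    and refining them with a 3D-aware diffusion model, and (b) finetuning
    the NeRF on the refined views (schematic of a test-time loop)."""
    opt = torch.optim.Adam(nerf.parameters(), lr=lr)
    for _ in range(steps):
        pose = poses[torch.randint(len(poses), (1,)).item()]
        with torch.no_grad():
            virtual = nerf(pose)                               # current render
            target = diffusion_refine(virtual, input_view, pose)  # CDM sample
        opt.zero_grad()
        loss = F.mse_loss(nerf(pose), target)   # fit NeRF to refined view
        loss.backward()
        opt.step()
    return nerf
```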
arXiv Detail & Related papers (2023-02-20T17:12:00Z)
- NeRFInvertor: High Fidelity NeRF-GAN Inversion for Single-shot Real Image Animation [66.0838349951456]
NeRF-based generative models have shown impressive capacity in generating high-quality images with consistent 3D geometry.
We propose a universal method to surgically fine-tune these NeRF-GAN models in order to achieve high-fidelity animation of real subjects only by a single image.
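The single-image fine-tuning can be pictured as: invert the image to a latent code, then lightly update the generator weights with a reconstruction loss plus a regularizer that keeps renders close to the frozen original generator. A minimal sketch under those assumptions; the paper's actual regularization is more involved.

```python
import copy
import torch
import torch.nn.functional as F

def finetune_on_single_image(generator, z, pose, target,
                             steps=200, lr=1e-4, reg_weight=0.1):
    """Fine-tune generator weights around an inverted latent z so renders
    match `target`, while staying near the frozen original generator."""
    frozen = copy.deepcopy(generator).eval()
    for p in frozen.parameters():
        p.requires_grad_(False)
    with torch.no_grad():
        anchor = frozen(z, pose)          # original output for the same latent
    opt = torch.optim.Adam(generator.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        recon = generator(z, pose)
        # reconstruction on the real image + pull toward the original
        # generator's output, so geometry stays on the learned manifold
        loss = F.mse_loss(recon, target) + reg_weight * F.mse_loss(recon, anchor)
        loss.backward()
        opt.step()
    return generator
```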
arXiv Detail & Related papers (2022-11-30T18:36:45Z)
- Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures [72.44361273600207]
We adapt the score distillation to the publicly available, and computationally efficient, Latent Diffusion Models.
Latent Diffusion Models apply the entire diffusion process in a compact latent space of a pretrained autoencoder.
We show that latent score distillation can be successfully applied directly on 3D meshes.
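The core loss is score distillation computed in the autoencoder's latent space rather than in pixel space. A schematic of one gradient step, assuming a frozen latent-diffusion `unet` and an `encoder` from the pretrained autoencoder; this follows the standard SDS formulation, with every component a stand-in rather than the paper's code.

```python
import torch

def latent_sds_step(render, encoder, unet, text_emb, alphas_cumprod):
    """One score-distillation step in latent space (schematic).
    `render` must be produced differentiably by the 3D representation."""
    z = encoder(render)                                  # latent of the render
    t = torch.randint(1, len(alphas_cumprod), (1,)).item()
    a = alphas_cumprod[t]                                # noise-schedule term
    noise = torch.randn_like(z)
    z_noisy = a.sqrt() * z + (1.0 - a).sqrt() * noise    # forward diffusion
    with torch.no_grad():
        eps_pred = unet(z_noisy, t, text_emb)            # frozen denoiser
    # SDS gradient (eps_pred - noise), injected at z and backpropagated
    # through the encoder into the 3D model's parameters
    z.backward(gradient=eps_pred - noise)
```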
arXiv Detail & Related papers (2022-11-14T18:25:24Z)
- Pix2NeRF: Unsupervised Conditional $\pi$-GAN for Single Image to Neural Radiance Fields Translation [93.77693306391059]
We propose a pipeline to generate Neural Radiance Fields (NeRF) of an object or a scene of a specific class, conditioned on a single input image.
Our method is based on $\pi$-GAN, a generative model for unconditional 3D-aware image synthesis.
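The conditioning is typically achieved by pairing the unconditional generator with an encoder that maps an image to a latent code and camera pose. A minimal autoencoding sketch under that assumption; the modules are placeholders, not Pix2NeRF's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageEncoder(nn.Module):
    """Predicts a latent code and a camera pose from one input image."""
    def __init__(self, z_dim=128, pose_dim=6):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_z = nn.Linear(64, z_dim)
        self.to_pose = nn.Linear(64, pose_dim)

    def forward(self, img):
        h = self.conv(img)
        return self.to_z(h), self.to_pose(h)

def recon_step(encoder, generator, img, opt):
    """Encode an image, re-render it through the 3D-aware generator,
    and train on the reconstruction error."""
    opt.zero_grad()
    z, pose = encoder(img)
    loss = F.mse_loss(generator(z, pose), img)
    loss.backward()
    opt.step()
    return loss.item()
```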
arXiv Detail & Related papers (2022-02-26T15:28:05Z)
- 3D-aware Image Synthesis via Learning Structural and Textural Representations [39.681030539374994]
We propose VolumeGAN for high-fidelity 3D-aware image synthesis, which explicitly learns a structural representation and a textural representation.
Our approach achieves substantially higher image quality and better 3D control than previous methods.
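A toy sketch of separating structure from texture: sample features from a learned 3D feature volume at query points (structure), then decode them with a latent-conditioned MLP to color and density (texture). Shapes and names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructureTextureField(nn.Module):
    """Structural rep: a learned 3D feature volume queried via grid_sample.
    Textural rep: a latent-conditioned MLP decoding features to RGB+sigma."""
    def __init__(self, feat_dim=16, res=32, z_dim=64, hidden=128):
        super().__init__()
        self.volume = nn.Parameter(torch.randn(1, feat_dim, res, res, res) * 0.1)
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 3 RGB channels + 1 density
        )

    def forward(self, pts, z):
        # pts: (N, 3) in [-1, 1]; z: (1, z_dim) latent code
        grid = pts.view(1, -1, 1, 1, 3)                    # 5D sampling grid
        feat = F.grid_sample(self.volume, grid, align_corners=True)
        feat = feat.view(self.volume.shape[1], -1).t()     # (N, feat_dim)
        h = torch.cat([feat, z.expand(feat.shape[0], -1)], dim=-1)
        out = self.mlp(h)
        return out[..., :3].sigmoid(), out[..., 3:].relu()  # rgb, sigma
```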
arXiv Detail & Related papers (2021-12-20T18:59:40Z)
- CIPS-3D: A 3D-Aware Generator of GANs Based on Conditionally-Independent Pixel Synthesis [148.4104739574094]
This paper presents CIPS-3D, a style-based, 3D-aware generator that is composed of a shallow NeRF network and a deep implicit neural representation network.
The generator synthesizes each pixel value independently without any spatial convolution or upsampling operation.
It sets new records for 3D-aware image synthesis with an impressive FID of 6.97 for images at the $256\times256$ resolution on FFHQ.
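A sketch of the per-pixel idea: each pixel's color is computed independently from its ray alone, with no spatial convolution or upsampling; a shallow NeRF-style MLP produces a per-ray feature that a deeper implicit network decodes. Everything below is a simplified stand-in, not CIPS-3D's architecture.

```python
import torch
import torch.nn as nn

class PixelwiseSynthesizer(nn.Module):
    """Shallow NeRF-style MLP -> per-ray feature; a deep implicit network
    decodes each pixel independently (no conv, no upsampling)."""
    def __init__(self, z_dim=64, feat=64, hidden=128, depth=4):
        super().__init__()
        self.shallow = nn.Sequential(nn.Linear(3 + z_dim, feat), nn.ReLU())
        layers = []
        for _ in range(depth):
            layers += [nn.Linear(feat if not layers else hidden, hidden),
                       nn.ReLU()]
        self.deep = nn.Sequential(*layers, nn.Linear(hidden, 3))

    def forward(self, ray_dirs, z):
        # ray_dirs: (H*W, 3), one direction per pixel; z: (1, z_dim)
        h = torch.cat([ray_dirs, z.expand(ray_dirs.shape[0], -1)], dim=-1)
        return self.deep(self.shallow(h)).sigmoid()   # (H*W, 3) RGB
```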
arXiv Detail & Related papers (2021-10-19T08:02:16Z)
- pi-GAN: Periodic Implicit Generative Adversarial Networks for 3D-Aware Image Synthesis [45.51447644809714]
We propose a novel generative model, named Periodic Implicit Generative Adversarial Networks ($\pi$-GAN or pi-GAN), for high-quality 3D-aware image synthesis.
The proposed approach obtains state-of-the-art results for 3D-aware image synthesis with multiple real and synthetic datasets.
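The "periodic implicit" part refers to sine activations whose frequency and phase are modulated by the latent code (FiLM-conditioned SIREN layers). A minimal sketch of one such layer; the initialization and scales a real implementation needs are omitted, and the usage shapes are illustrative.

```python
import torch
import torch.nn as nn

class FiLMSineLayer(nn.Module):
    """Linear layer with sine activation; a latent-derived frequency (gamma)
    and phase (beta) modulate the activation, as in FiLM-conditioned SIRENs."""
    def __init__(self, in_dim, out_dim, z_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.to_gamma = nn.Linear(z_dim, out_dim)
        self.to_beta = nn.Linear(z_dim, out_dim)

    def forward(self, x, z):
        gamma = self.to_gamma(z)          # per-sample frequency modulation
        beta = self.to_beta(z)            # per-sample phase shift
        return torch.sin(gamma * self.linear(x) + beta)

# usage: stack such layers to map 3D points (plus latent z) to color/density
layer = FiLMSineLayer(3, 64, z_dim=32)
x = torch.rand(16, 3)                     # 16 sample points
z = torch.randn(16, 32)                   # matching latent codes
h = layer(x, z)                           # (16, 64)
```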
arXiv Detail & Related papers (2020-12-02T01:57:46Z)