Learning Detailed Radiance Manifolds for High-Fidelity and 3D-Consistent
Portrait Synthesis from Monocular Image
- URL: http://arxiv.org/abs/2211.13901v2
- Date: Mon, 20 Mar 2023 09:07:21 GMT
- Authors: Yu Deng, Baoyuan Wang, Heung-Yeung Shum
- Abstract summary: A key challenge for novel view synthesis of monocular portrait images is 3D consistency under continuous pose variations.
We present a 3D-consistent novel view synthesis approach for monocular portrait images based on a recently proposed 3D-aware GAN.
- Score: 17.742602375370407
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A key challenge for novel view synthesis of monocular portrait images is 3D
consistency under continuous pose variations. Most existing methods rely on 2D
generative models, which often lead to obvious 3D inconsistency artifacts. We
present a 3D-consistent novel view synthesis approach for monocular portrait
images based on a recently proposed 3D-aware GAN, namely Generative Radiance
Manifolds (GRAM), which has shown strong 3D consistency at multiview image
generation of virtual subjects via the radiance manifolds representation.
However, simply learning an encoder to map a real image into the latent space
of GRAM can only reconstruct coarse radiance manifolds without faithful fine
details, while improving the reconstruction fidelity via instance-specific
optimization is time-consuming. We introduce a novel detail manifolds
reconstructor to learn 3D-consistent fine details on the radiance manifolds
from monocular images, and combine them with the coarse radiance manifolds for
high-fidelity reconstruction. The 3D priors derived from the coarse radiance
manifolds are used to regulate the learned details to ensure reasonable
synthesized results at novel views. Trained on in-the-wild 2D images, our
method achieves high-fidelity and 3D-consistent portrait synthesis, largely
outperforming prior art.
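The coarse-plus-detail combination described in the abstract can be illustrated with a minimal sketch. All names and shapes here are hypothetical placeholders (the paper's actual architecture is not reproduced): we assume the coarse branch and the detail reconstructor each output per-point radiance features on the manifold surfaces, and that the 3D prior yields a per-point confidence weight used to regulate the details at novel views.

```python
import numpy as np

def combine_manifolds(coarse_radiance, detail_residual, prior_weight):
    """Combine coarse radiance manifolds with learned fine details.

    coarse_radiance : (N, C) per-point radiance decoded from the GRAM latent code
    detail_residual : (N, C) fine-detail residual from the detail reconstructor
    prior_weight    : (N, 1) confidence in [0, 1] derived from the 3D priors,
                      attenuating details where they are unreliable at novel views
    """
    # Details enter as a residual on top of the coarse reconstruction,
    # scaled down wherever the 3D prior deems them untrustworthy.
    return coarse_radiance + prior_weight * detail_residual

rng = np.random.default_rng(0)
coarse = rng.normal(size=(8, 4))
detail = rng.normal(size=(8, 4))
w = np.zeros((8, 1))  # prior fully suppresses details in this toy case
out = combine_manifolds(coarse, detail, w)
print(np.allclose(out, coarse))  # details vanish where the prior weight is 0
```

With the prior weight at 1 the full detail residual is kept; intermediate values blend the two branches, which is the regulating effect the abstract attributes to the 3D priors.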
Related papers
- 2D Gaussian Splatting for Geometrically Accurate Radiance Fields [50.056790168812114]
3D Gaussian Splatting (3DGS) has recently revolutionized radiance field reconstruction, achieving high quality novel view synthesis and fast rendering speed without baking.
We present 2D Gaussian Splatting (2DGS), a novel approach to model and reconstruct geometrically accurate radiance fields from multi-view images.
We demonstrate that our differentiable terms allow for noise-free and detailed geometry reconstruction while maintaining competitive appearance quality, fast training speed, and real-time rendering.
arXiv Detail & Related papers (2024-03-26T17:21:24Z)
- FDGaussian: Fast Gaussian Splatting from Single Image via Geometric-aware Diffusion Model [81.03553265684184]
We introduce FDGaussian, a novel two-stage framework for single-image 3D reconstruction.
Recent methods typically utilize pre-trained 2D diffusion models to generate plausible novel views from the input image.
We demonstrate that FDGaussian generates images with high consistency across different views and reconstructs high-quality 3D objects.
arXiv Detail & Related papers (2024-03-15T12:24:36Z)
- Denoising Diffusion via Image-Based Rendering [54.20828696348574]
We introduce the first diffusion model able to perform fast, detailed reconstruction and generation of real-world 3D scenes.
First, we introduce a new neural scene representation, IB-planes, that can efficiently and accurately represent large 3D scenes.
Second, we propose a denoising-diffusion framework to learn a prior over this novel 3D scene representation, using only 2D images.
arXiv Detail & Related papers (2024-02-05T19:00:45Z)
- High-fidelity 3D GAN Inversion by Pseudo-multi-view Optimization [51.878078860524795]
We present a high-fidelity 3D generative adversarial network (GAN) inversion framework that can synthesize photo-realistic novel views.
Our approach enables high-fidelity 3D rendering from a single image, which is promising for various applications of AI-generated 3D content.
arXiv Detail & Related papers (2022-11-28T18:59:52Z)
- GRAM-HD: 3D-Consistent Image Generation at High Resolution with Generative Radiance Manifolds [28.660893916203747]
This paper proposes a novel 3D-aware GAN that can generate high resolution images (up to 1024X1024) while keeping strict 3D consistency as in volume rendering.
Our motivation is to achieve super-resolution directly in the 3D space to preserve 3D consistency.
Experiments on FFHQ and AFHQv2 datasets show that our method can produce high-quality 3D-consistent results.
arXiv Detail & Related papers (2022-06-15T02:35:51Z)
- A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis [163.96778522283967]
We propose a shading-guided generative implicit model that is able to learn a starkly improved shape representation.
An accurate 3D shape should also yield a realistic rendering under different lighting conditions.
Our experiments on multiple datasets show that the proposed approach achieves photorealistic 3D-aware image synthesis.
arXiv Detail & Related papers (2021-10-29T10:53:12Z)
- Towards Realistic 3D Embedding via View Alignment [53.89445873577063]
This paper presents an innovative View Alignment GAN (VA-GAN) that composes new images by embedding 3D models into 2D background images realistically and automatically.
VA-GAN consists of a texture generator and a differential discriminator that are inter-connected and end-to-end trainable.
arXiv Detail & Related papers (2020-07-14T14:45:00Z)
- GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis [43.4859484191223]
We propose a generative model for radiance fields which have recently proven successful for novel view synthesis of a single scene.
By introducing a multi-scale patch-based discriminator, we demonstrate synthesis of high-resolution images while training our model from unposed 2D images alone.
arXiv Detail & Related papers (2020-07-05T20:37:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.