Novel View Synthesis for High-fidelity Headshot Scenes
- URL: http://arxiv.org/abs/2205.15595v1
- Date: Tue, 31 May 2022 08:14:15 GMT
- Title: Novel View Synthesis for High-fidelity Headshot Scenes
- Authors: Satoshi Tsutsui, Weijia Mao, Sijing Lin, Yunyi Zhu, Murong Ma, Mike
Zheng Shou
- Abstract summary: We find that NeRF can render new views while maintaining geometric consistency, but it does not properly maintain skin details, such as moles and pores.
We propose a method to use both NeRF and 3DMM to synthesize a high-fidelity novel view of a scene with a face.
- Score: 5.33510552066148
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rendering scenes with a high-quality human face from arbitrary viewpoints is
a practical and useful technique for many real-world applications. Recently,
Neural Radiance Fields (NeRF), a rendering technique that uses neural networks
to approximate classical ray tracing, has been considered one of the most
promising approaches for synthesizing novel views from a sparse set of images.
We find that NeRF can render new views while maintaining geometric consistency,
but it does not properly preserve skin details such as moles and pores. These
details are particularly important for faces: when we look at an image of a
face, we are far more sensitive to fine detail than when we look at other
objects. On the other hand, 3D Morphable Models (3DMMs), which are based on
traditional meshes and textures, reproduce skin detail well, even though they
have less precise geometry and cannot cover the whole head or the surrounding
scene and background. Based on these observations, we propose a method that
uses both NeRF
and 3DMM to synthesize a high-fidelity novel view of a scene with a face. Our
method trains a Generative Adversarial Network (GAN) to blend a NeRF-synthesized
image with a 3DMM-rendered image, producing a photorealistic view of the scene
in which the face preserves its skin details. Experiments on various real-world
scenes demonstrate the effectiveness of our approach. The code will be available
at
https://github.com/showlab/headshot .
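To make the blending step concrete, below is a minimal sketch under our own assumptions (it is not the released implementation; all module and variable names are hypothetical): a small convolutional generator takes a NeRF-rendered view and a 3DMM-rendered view of the same camera pose and predicts a per-pixel blending mask plus a residual. The adversarial training loop (discriminator and reconstruction losses) is omitted.

```python
import torch
import torch.nn as nn

class BlendGenerator(nn.Module):
    """Hypothetical generator that fuses a NeRF render with a 3DMM render."""
    def __init__(self, ch=64):
        super().__init__()
        # 6 input channels: NeRF RGB (3) concatenated with 3DMM RGB (3).
        self.net = nn.Sequential(
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 4, 3, padding=1),  # 3 residual RGB channels + 1 mask channel
        )

    def forward(self, nerf_rgb, mm_rgb):
        out = self.net(torch.cat([nerf_rgb, mm_rgb], dim=1))
        residual, mask = out[:, :3], torch.sigmoid(out[:, 3:])
        # Per-pixel blend: keep NeRF's geometry where the mask is low, paste in
        # the 3DMM's skin detail where it is high, and let the residual correct
        # the remaining artefacts.
        return mask * mm_rgb + (1.0 - mask) * nerf_rgb + residual

# Usage sketch: both renders are (B, 3, H, W) tensors of the same view.
gen = BlendGenerator()
fused = gen(torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256))
```

For the actual architecture, losses, and training schedule, refer to the linked repository.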
Related papers
- Cafca: High-quality Novel View Synthesis of Expressive Faces from Casual Few-shot Captures [33.463245327698]
We present a novel volumetric prior on human faces that allows for high-fidelity expressive face modeling.
We leverage a 3D Morphable Face Model to synthesize a large training set, rendering each identity with different expressions.
We then train a conditional Neural Radiance Field prior on this synthetic dataset and, at inference time, fine-tune the model on a very sparse set of real images of a single subject.
arXiv Detail & Related papers (2024-10-01T12:24:50Z) - Single-Shot Implicit Morphable Faces with Consistent Texture
- Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization [91.52882218901627]
We propose a novel method for constructing implicit 3D morphable face models that are both generalizable and intuitive for editing.
Our method improves upon photo-realism, geometry, and expression accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T17:58:40Z) - 3DMM-RF: Convolutional Radiance Fields for 3D Face Modeling [111.98096975078158]
We introduce a style-based generative network that synthesizes in one pass all and only the required rendering samples of a neural radiance field.
We show that this model can accurately be fit to "in-the-wild" facial images of arbitrary pose and illumination, extract the facial characteristics, and be used to re-render the face in controllable conditions.
arXiv Detail & Related papers (2022-09-15T15:28:45Z) - Learning Dynamic Facial Radiance Fields for Few-Shot Talking Head
Synthesis [90.43371339871105]
We propose Dynamic Facial Radiance Fields (DFRF) for few-shot talking head synthesis.
DFRF conditions the face radiance field on 2D appearance images to learn a face prior.
Experiments show DFRF can synthesize natural and high-quality audio-driven talking head videos for novel identities with only 40k iterations.
arXiv Detail & Related papers (2022-07-24T16:46:03Z) - Control-NeRF: Editable Feature Volumes for Scene Rendering and
Manipulation [58.16911861917018]
We present a novel method for performing flexible, 3D-aware image content manipulation while enabling high-quality novel view synthesis.
Our model couples learnt scene-specific feature volumes with a scene-agnostic neural rendering network.
We demonstrate various scene manipulations, including mixing scenes, deforming objects and inserting objects into scenes, while still producing photo-realistic results.
arXiv Detail & Related papers (2022-04-22T17:57:00Z) - MoFaNeRF: Morphable Facial Neural Radiance Field [12.443638713719357]
MoFaNeRF is a parametric model that maps free-view images into a vector space of coded facial shape, expression and appearance.
By introducing identity-specific modulation and a texture encoder, our model synthesizes accurate photometric details.
Our model shows strong ability in multiple applications, including image-based fitting, random generation, face rigging, face editing, and novel view synthesis.
arXiv Detail & Related papers (2021-12-04T11:25:28Z) - Fast-GANFIT: Generative Adversarial Network for High Fidelity 3D Face
Reconstruction [76.1612334630256]
We harness the power of Generative Adversarial Networks (GANs) and Deep Convolutional Neural Networks (DCNNs) to reconstruct the facial texture and shape from single images.
We demonstrate excellent results in photorealistic and identity-preserving 3D face reconstructions and achieve, for the first time, facial texture reconstruction with high-frequency details.
arXiv Detail & Related papers (2021-05-16T16:35:44Z) - Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
arXiv Detail & Related papers (2020-08-09T22:04:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.