Inverting Generative Adversarial Renderer for Face Reconstruction
- URL: http://arxiv.org/abs/2105.02431v2
- Date: Sat, 8 May 2021 04:44:34 GMT
- Title: Inverting Generative Adversarial Renderer for Face Reconstruction
- Authors: Jingtan Piao, Keqiang Sun, KwanYee Lin, Quan Wang, Hongsheng Li
- Abstract summary: In this work, we introduce a novel Generative Adversarial Renderer (GAR).
Because the GAR learns to model complicated real-world images instead of relying on graphics rules, it is capable of producing realistic images.
Our method achieves state-of-the-art performance on multiple face reconstruction datasets.
- Score: 58.45125455811038
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given a monocular face image as input, 3D face geometry reconstruction aims
to recover a corresponding 3D face mesh. Recently, both optimization-based and
learning-based face reconstruction methods have taken advantage of the emerging
differentiable renderer and shown promising results. However, the
differentiable renderer, mainly based on graphics rules, simplifies the
realistic mechanisms of illumination, reflection, etc., of the real world, and
thus cannot produce realistic images. This introduces substantial domain-shift noise
to the optimization or training process. In this work, we introduce a novel
Generative Adversarial Renderer (GAR) and propose to tailor its inverted
version to the general fitting pipeline, to tackle the above problem.
Specifically, the carefully designed neural renderer takes a face normal map
and a latent code representing other factors as inputs and renders a realistic
face image. Since the GAR learns to model the complicated real-world image,
instead of relying on the simplified graphics rules, it is capable of producing
realistic images, which essentially inhibits the domain-shift noise in training
and optimization. Equipped with the elaborated GAR, we further propose a novel
approach to predict 3D face parameters, in which we first obtain fine initial
parameters via Renderer Inverting and then refine them with gradient-based
optimizers. Extensive experiments have been conducted to demonstrate the
effectiveness of the proposed generative adversarial renderer and the novel
optimization-based face reconstruction framework. Our method achieves
state-of-the-art performances on multiple face reconstruction datasets.
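The two-stage pipeline described in the abstract (coarse parameters via Renderer Inverting, then gradient-based refinement) can be illustrated with a hedged toy sketch. The linear `toy_render`, the pseudo-inverse standing in for the inversion step, and all constants below are illustrative assumptions, not the paper's actual GAR, losses, or parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "renderer": a fixed linear map from 8 face parameters to 64 "pixels".
# The real GAR is a learned neural renderer conditioned on a normal map and
# a latent code; this linear stand-in exists only to show the fitting loop.
A = rng.normal(size=(64, 8))

def toy_render(params):
    return A @ params

def invert_renderer(image):
    """Stage 1 (stand-in for Renderer Inverting): coarse initialization.

    A least-squares pseudo-inverse plays the role of the inversion step,
    plus noise so the initialization is only approximate.
    """
    return np.linalg.pinv(A) @ image + rng.normal(scale=0.1, size=8)

def refine(image, params, lr=0.005, steps=300):
    """Stage 2: gradient-based refinement of the initial parameters.

    Plain gradient descent on the photometric L2 loss ||render(p) - image||^2.
    """
    for _ in range(steps):
        grad = 2.0 * A.T @ (toy_render(params) - image)
        params = params - lr * grad
    return params

true_params = rng.normal(size=8)      # ground-truth face parameters
observed = toy_render(true_params)    # the input "image" to fit

init = invert_renderer(observed)      # coarse fit from inversion
final = refine(observed, init)        # refined fit from optimization

init_err = np.linalg.norm(toy_render(init) - observed)
final_err = np.linalg.norm(toy_render(final) - observed)
```

In this toy setting, refinement drives the photometric error well below the coarse initialization, mirroring the role the optimizer plays after inversion in the paper's pipeline.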
Related papers
- Learning Topology Uniformed Face Mesh by Volume Rendering for Multi-view Reconstruction [40.45683488053611]
Face meshes in consistent topology serve as the foundation for many face-related applications.
We propose a mesh volume rendering method that enables directly optimizing mesh geometry while preserving topology.
The key innovation lies in spreading sparse mesh features into the surrounding space to simulate the radiance field required for volume rendering.
arXiv Detail & Related papers (2024-04-08T15:25:50Z) - 3D Facial Expressions through Analysis-by-Neural-Synthesis [30.2749903946587]
SMIRK (Spatial Modeling for Image-based Reconstruction of Kinesics) faithfully reconstructs expressive 3D faces from images.
We identify two key limitations in existing methods: shortcomings in their self-supervised training formulation, and a lack of expression diversity in the training images.
Our qualitative, quantitative, and particularly our perceptual evaluations demonstrate that SMIRK achieves new state-of-the-art performance on accurate expression reconstruction.
arXiv Detail & Related papers (2024-04-05T14:00:07Z) - Delicate Textured Mesh Recovery from NeRF via Adaptive Surface Refinement [78.48648360358193]
We present a novel framework that generates textured surface meshes from images.
Our approach begins by efficiently initializing the geometry and view-dependency appearance with a NeRF.
We jointly refine the appearance with geometry and bake it into texture images for real-time rendering.
arXiv Detail & Related papers (2023-03-03T17:14:44Z) - High-fidelity 3D GAN Inversion by Pseudo-multi-view Optimization [51.878078860524795]
We present a high-fidelity 3D generative adversarial network (GAN) inversion framework that can synthesize photo-realistic novel views.
Our approach enables high-fidelity 3D rendering from a single image, which is promising for various applications of AI-generated 3D content.
arXiv Detail & Related papers (2022-11-28T18:59:52Z) - AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs [119.23922747230193]
We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms existing methods by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z) - Implicit Neural Deformation for Multi-View Face Reconstruction [43.88676778013593]
We present a new method for 3D face reconstruction from multi-view RGB images.
Unlike previous methods which are built upon 3D morphable models, our method leverages an implicit representation to encode rich geometric features.
Our experimental results on several benchmark datasets demonstrate that our approach outperforms alternative baselines and achieves superior face reconstruction results compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-12-05T07:02:53Z) - Differentiable Rendering with Perturbed Optimizers [85.66675707599782]
Reasoning about 3D scenes from their 2D image projections is one of the core problems in computer vision.
Our work highlights the link between some well-known differentiable formulations and randomly smoothed renderings.
We apply our method to 3D scene reconstruction and demonstrate its advantages on the tasks of 6D pose estimation and 3D mesh reconstruction.
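The "randomly smoothed renderings" this entry refers to can be illustrated with a minimal sketch: a hard, non-differentiable visibility step acquires a usable gradient once it is averaged under Gaussian perturbations. The 1-D `hard_render` and the score-function estimator below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Non-differentiable toy "render": a pixel is lit (1.0) only when the edge
# parameter theta is positive. The hard step has zero gradient almost
# everywhere, which is the core difficulty in differentiable rendering.
def hard_render(theta):
    return (theta > 0.0).astype(float)

def smoothed_grad(theta, sigma=0.3, n=200_000):
    """Monte Carlo gradient of the Gaussian-smoothed render.

    Uses the score-function identity
      d/dtheta E[f(theta + sigma*Z)] = E[f(theta + sigma*Z) * Z] / sigma,
    so only evaluations of the non-differentiable f are needed.
    """
    z = rng.normal(size=n)
    return np.mean(hard_render(theta + sigma * z) * z) / sigma

g = smoothed_grad(0.0)
# The smoothed render is the Gaussian CDF Phi(theta / sigma); its derivative
# at theta = 0 is 1 / (sigma * sqrt(2 * pi)) ~ 1.33 for sigma = 0.3, which g
# estimates by Monte Carlo.
```

The estimator never differentiates the step itself, only reweights its evaluations, which is what makes smoothing applicable to black-box or discontinuous rendering operations.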
arXiv Detail & Related papers (2021-10-18T08:56:23Z) - Fast-GANFIT: Generative Adversarial Network for High Fidelity 3D Face Reconstruction [76.1612334630256]
We harness the power of Generative Adversarial Networks (GANs) and Deep Convolutional Neural Networks (DCNNs) to reconstruct the facial texture and shape from single images.
We demonstrate excellent results in photorealistic and identity-preserving 3D face reconstruction and achieve, for the first time, facial texture reconstruction with high-frequency details.
arXiv Detail & Related papers (2021-05-16T16:35:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.