Learning Inverse Rendering of Faces from Real-world Videos
- URL: http://arxiv.org/abs/2003.12047v1
- Date: Thu, 26 Mar 2020 17:26:40 GMT
- Title: Learning Inverse Rendering of Faces from Real-world Videos
- Authors: Yuda Qiu, Zhangyang Xiong, Kai Han, Zhongyuan Wang, Zixiang Xiong,
Xiaoguang Han
- Abstract summary: Existing methods decompose a face image into three components (albedo, normal, and illumination) by supervised training on synthetic data.
We propose a weakly supervised training approach to train our model on real face videos, based on the assumption of consistency of albedo and normal.
Our network is trained on both real and synthetic data, benefiting from both.
- Score: 52.313931830408386
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we examine the problem of inverse rendering of real face
images. Existing methods decompose a face image into three components (albedo,
normal, and illumination) by supervised training on synthetic face data.
However, due to the domain gap between real and synthetic face images, a model
trained on synthetic data often does not generalize well to real data.
Meanwhile, since no ground truth for any component is available for real
images, it is not feasible to conduct supervised learning on real face images.
To alleviate this problem, we propose a weakly supervised training approach to
train our model on real face videos, based on the assumption of consistency of
albedo and normal across different frames, thus bridging the gap between real
and synthetic face images. In addition, we introduce a learning framework,
called IlluRes-SfSNet, to further extract the residual map to capture the
global illumination effects that give the fine details that are largely ignored
in existing methods. Our network is trained on both real and synthetic data,
benefiting from both. We comprehensively evaluate our methods on various
benchmarks, obtaining better inverse rendering results than the
state-of-the-art.
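The cross-frame consistency assumption can be illustrated with a minimal sketch: albedo and surface normals predicted from two frames of the same face video are penalized for disagreeing, since both properties should be stable across frames while illumination varies. This is a hypothetical illustration (function name and L1 formulation are assumptions, not the paper's exact loss):

```python
import numpy as np

def consistency_loss(albedo_a, normal_a, albedo_b, normal_b):
    """Hypothetical cross-frame consistency penalty: mean L1 distance
    between the albedo maps and between the normal maps predicted
    from two frames of the same video. The paper's exact loss may
    differ in norm, weighting, or masking."""
    l_albedo = np.mean(np.abs(albedo_a - albedo_b))
    l_normal = np.mean(np.abs(normal_a - normal_b))
    return l_albedo + l_normal
```

Under this sketch, identical predictions yield zero loss, and any drift in albedo or normals between frames is penalized, which is the supervision signal available even without ground-truth decompositions for real images.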
Related papers
- Digi2Real: Bridging the Realism Gap in Synthetic Data Face Recognition via Foundation Models [4.910937238451485]
We introduce a novel framework for realism transfer aimed at enhancing the realism of synthetically generated face images.
By integrating the controllable aspects of the graphics pipeline with our realism enhancement technique, we generate a large amount of realistic variations.
arXiv Detail & Related papers (2024-11-04T15:42:22Z)
- Face Inverse Rendering via Hierarchical Decoupling [19.530753479268384]
Previous face inverse rendering methods often require synthetic data with ground truth and/or professional equipment like a lighting stage.
We propose a deep learning framework to disentangle face images in the wild into their corresponding albedo, normal, and lighting components.
arXiv Detail & Related papers (2023-01-17T07:24:47Z)
- From Face to Natural Image: Learning Real Degradation for Blind Image Super-Resolution [72.68156760273578]
We design training pairs for super-resolving the real-world low-quality (LQ) images.
We take paired HQ and LQ face images as inputs to explicitly predict degradation-aware and content-independent representations.
We then transfer these real degradation representations from face to natural images to synthesize the degraded LQ natural images.
arXiv Detail & Related papers (2022-10-03T08:09:21Z)
- FaceEraser: Removing Facial Parts for Augmented Reality [10.575917056215289]
Our task is to remove all facial parts and then impose visual elements onto the "blank" face for augmented reality.
We propose a novel data generation technique to produce paired training data that closely mimics the "blank" faces.
Our method has been integrated into commercial products and its effectiveness has been verified with unconstrained user inputs.
arXiv Detail & Related papers (2021-09-22T14:30:12Z)
- SynFace: Face Recognition with Synthetic Data [83.15838126703719]
We devise the SynFace with identity mixup (IM) and domain mixup (DM) to mitigate the performance gap.
We also perform a systematically empirical analysis on synthetic face images to provide some insights on how to effectively utilize synthetic data for face recognition.
arXiv Detail & Related papers (2021-08-18T03:41:54Z)
- Inverting Generative Adversarial Renderer for Face Reconstruction [58.45125455811038]
In this work, we introduce a novel Generative Adversarial Renderer (GAR).
GAR learns to model complicated real-world images; instead of relying on graphics rules, it is capable of producing realistic images.
Our method achieves state-of-the-art performance on multiple face reconstruction benchmarks.
arXiv Detail & Related papers (2021-05-06T04:16:06Z)
- Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
- Deep CG2Real: Synthetic-to-Real Translation via Image Disentanglement [78.58603635621591]
Training an unpaired synthetic-to-real translation network in image space is severely under-constrained.
We propose a semi-supervised approach that operates on the disentangled shading and albedo layers of the image.
Our two-stage pipeline first learns to predict accurate shading in a supervised fashion using physically-based renderings as targets.
arXiv Detail & Related papers (2020-03-27T21:45:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.