Learning Flow-based Feature Warping for Face Frontalization with
Illumination Inconsistent Supervision
- URL: http://arxiv.org/abs/2008.06843v2
- Date: Wed, 9 Sep 2020 12:51:26 GMT
- Title: Learning Flow-based Feature Warping for Face Frontalization with
Illumination Inconsistent Supervision
- Authors: Yuxiang Wei, Ming Liu, Haolin Wang, Ruifeng Zhu, Guosheng Hu, Wangmeng
Zuo
- Abstract summary: Flow-based Feature Warping Model (FFWM) learns to synthesize photo-realistic and illumination preserving frontal images.
An Illumination Preserving Module (IPM) is proposed to learn illumination preserving image synthesis.
A Warp Attention Module (WAM) is introduced to reduce the pose discrepancy at the feature level.
- Score: 73.18554605744842
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite recent advances in deep learning-based face frontalization methods,
photo-realistic and illumination preserving frontal face synthesis is still
challenging due to large pose and illumination discrepancy during training. We
propose a novel Flow-based Feature Warping Model (FFWM) which can learn to
synthesize photo-realistic and illumination preserving frontal images with
illumination inconsistent supervision. Specifically, an Illumination Preserving
Module (IPM) is proposed to learn illumination preserving image synthesis from
illumination inconsistent image pairs. The IPM comprises two pathways that
collaborate to ensure the synthesized frontal images preserve illumination and
fine details. Moreover, a Warp Attention Module (WAM) is introduced to reduce
the pose discrepancy at the feature level, enabling more effective frontal
synthesis that preserves more details of the profile images. The attention
mechanism in WAM helps reduce the artifacts caused by the
displacements between the profile and the frontal images. Quantitative and
qualitative experimental results show that our FFWM synthesizes
photo-realistic and illumination preserving frontal images and performs
favorably against state-of-the-art methods.
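To make the flow-based warping idea concrete, below is a minimal PyTorch sketch of flow-guided feature warping gated by an attention mask, in the spirit of WAM. The module name, the single-conv flow and attention heads, and the tensor shapes are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of flow-guided feature warping with an attention gate,
# in the spirit of WAM. Module name, heads, and shapes are illustrative
# assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WarpAttentionSketch(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Predicts a 2-channel flow field (pixel offsets) from profile features.
        self.flow_head = nn.Conv2d(channels, 2, kernel_size=3, padding=1)
        # Predicts a per-pixel attention mask over the warped features.
        self.attn_head = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, profile_feat: torch.Tensor) -> torch.Tensor:
        n, _, h, w = profile_feat.shape
        flow = self.flow_head(profile_feat)  # (n, 2, h, w)

        # Base sampling grid in [-1, 1], as expected by F.grid_sample.
        ys, xs = torch.meshgrid(
            torch.linspace(-1.0, 1.0, h, device=profile_feat.device),
            torch.linspace(-1.0, 1.0, w, device=profile_feat.device),
            indexing="ij",
        )
        base = torch.stack((xs, ys), dim=-1).expand(n, h, w, 2)

        # Normalize pixel offsets to grid units and warp the features.
        norm_flow = torch.stack(
            (flow[:, 0] * 2.0 / max(w - 1, 1), flow[:, 1] * 2.0 / max(h - 1, 1)),
            dim=-1,
        )
        warped = F.grid_sample(profile_feat, base + norm_flow, align_corners=True)

        # Attention suppresses artifacts from poorly displaced regions.
        attn = torch.sigmoid(self.attn_head(warped))
        return attn * warped
```

In FFWM the flow is estimated between the profile and frontal views; predicting it from the profile features alone here merely keeps the sketch self-contained.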
Related papers
- DifFRelight: Diffusion-Based Facial Performance Relighting [12.909429637057343]
We present a novel framework for free-viewpoint facial performance relighting using diffusion-based image-to-image translation.
We train a diffusion model for precise lighting control, enabling high-fidelity relit facial images from flat-lit inputs.
The model accurately reproduces complex lighting effects like eye reflections, subsurface scattering, self-shadowing, and translucency.
arXiv Detail & Related papers (2024-10-10T17:56:44Z)
- Relightful Harmonization: Lighting-aware Portrait Background Replacement [23.19641174787912]
We introduce Relightful Harmonization, a lighting-aware diffusion model designed to seamlessly harmonize sophisticated lighting effects on the foreground portrait using any background image.
Our approach unfolds in three stages. First, we introduce a lighting representation module that allows our diffusion model to encode lighting information from the target image background.
Second, we introduce an alignment network that aligns lighting features learned from the image background with lighting features learned from panorama environment maps.
arXiv Detail & Related papers (2023-12-11T23:20:31Z)
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation unfolding network (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result (see the unfolding sketch after this list).
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
- Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z)
- Designing An Illumination-Aware Network for Deep Image Relighting [69.750906769976]
We present an Illumination-Aware Network (IAN) which follows the guidance from hierarchical sampling to progressively relight a scene from a single image.
In addition, an Illumination-Aware Residual Block (IARB) is designed to approximate the physical rendering process.
Experimental results show that our proposed method produces better quantitative and qualitative relighting results than previous state-of-the-art methods.
arXiv Detail & Related papers (2022-07-21T16:21:24Z)
- Decoupled Low-light Image Enhancement [21.111831640136835]
We propose to decouple the enhancement model into two sequential stages.
The first stage focuses on improving the scene visibility based on a pixel-wise non-linear mapping.
The second stage focuses on improving the appearance fidelity by suppressing the remaining degradation factors.
arXiv Detail & Related papers (2021-11-29T11:15:38Z)
- Intrinsic Image Transfer for Illumination Manipulation [1.2387676601792899]
This paper presents a novel intrinsic image transfer (IIT) algorithm for illumination manipulation.
It creates a local image translation between two illumination surfaces.
We illustrate that all losses can be reduced without requiring an intrinsic image decomposition.
arXiv Detail & Related papers (2021-07-01T19:12:24Z)
- Crowdsampling the Plenoptic Function [56.10020793913216]
We present a new approach to novel view synthesis under time-varying illumination, learned from crowdsampled internet photos.
We introduce a new DeepMPI representation, motivated by observations on the sparsity structure of the plenoptic function.
Our method can synthesize the same compelling parallax and view-dependent effects as previous MPI methods, while simultaneously interpolating along changes in reflectance and illumination with time.
arXiv Detail & Related papers (2020-07-30T02:52:10Z)
- Recurrent Exposure Generation for Low-Light Face Detection [113.25331155337759]
We propose a novel Recurrent Exposure Generation (REG) module and a Multi-Exposure Detection (MED) module.
REG progressively and efficiently produces intermediate images corresponding to various exposure settings.
Such pseudo-exposures are then fused by MED to detect faces across different lighting conditions (see the toy sketch after this list).
arXiv Detail & Related papers (2020-07-21T17:30:51Z)
- Copy and Paste GAN: Face Hallucination from Shaded Thumbnails [45.98561483932554]
This paper proposes a Copy and Paste Generative Adversarial Network (CPGAN) to recover authentic high-resolution (HR) face images.
Our method manifests authentic HR face images in a uniform illumination condition and outperforms state-of-the-art methods qualitatively and quantitatively.
arXiv Detail & Related papers (2020-02-25T03:34:58Z)
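As referenced in the DCUNet entry above, here is a hypothetical sketch of a single unfolding iteration: estimate an illumination map from the intermediate enhanced result, then use it to produce a new enhanced result. The tiny CNN estimator and the Retinex-style division are assumptions for illustration, not the paper's exact formulation.

```python
# Hypothetical single unfolding iteration in the spirit of DCUNet:
# estimate illumination from the current enhanced result, then refine.
import torch
import torch.nn as nn

class UnfoldingStepSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Tiny CNN standing in for the learned illumination estimator.
        self.illum_net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, low_light: torch.Tensor, enhanced: torch.Tensor):
        # Estimate the illumination map from the intermediate result.
        illum = self.illum_net(enhanced)
        # Retinex-style refinement: divide the input by the illumination.
        new_enhanced = low_light / illum.clamp(min=1e-3)
        return new_enhanced.clamp(0.0, 1.0), illum
```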
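And for the Recurrent Exposure Generation entry, a toy sketch of the pseudo-exposure idea: brighten a low-light image with several gamma curves and fuse the results. The gamma values and the per-pixel max fusion are assumptions; in the paper, REG generates exposures recurrently and MED fuses the pseudo-exposures for detection, which this pixel-level fusion only loosely imitates.

```python
# Toy pseudo-exposure generation and fusion; gammas and max-fusion
# are illustrative assumptions, not the REG/MED modules themselves.
import torch

def pseudo_exposures(img: torch.Tensor, gammas=(0.4, 0.6, 0.8, 1.0)):
    """img in [0, 1]; returns progressively brightened copies."""
    return [img.clamp(min=1e-6) ** g for g in gammas]

def fuse_exposures(exposures):
    """Naive fusion: per-pixel maximum across the exposure stack."""
    return torch.stack(exposures, dim=0).max(dim=0).values
```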
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.