Copy and Paste GAN: Face Hallucination from Shaded Thumbnails
- URL: http://arxiv.org/abs/2002.10650v3
- Date: Thu, 19 Mar 2020 07:31:34 GMT
- Title: Copy and Paste GAN: Face Hallucination from Shaded Thumbnails
- Authors: Yang Zhang, Ivor Tsang, Yawei Luo, Changhui Hu, Xiaobo Lu, Xin Yu
- Abstract summary: This paper proposes a Copy and Paste Generative Adversarial Network (CPGAN) to recover authentic high-resolution (HR) face images.
Our method manifests authentic HR face images in a uniform illumination condition and outperforms state-of-the-art methods qualitatively and quantitatively.
- Score: 45.98561483932554
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing face hallucination methods based on convolutional neural networks
(CNN) have achieved impressive performance on low-resolution (LR) faces in a
normal illumination condition. However, their performance degrades dramatically
when LR faces are captured in low or non-uniform illumination conditions. This
paper proposes a Copy and Paste Generative Adversarial Network (CPGAN) to
recover authentic high-resolution (HR) face images while compensating for low
and non-uniform illumination. To this end, we develop two key components in our
CPGAN: internal and external Copy and Paste nets (CPnets). Specifically, our
internal CPnet exploits facial information residing in the input image to
enhance facial details; while our external CPnet leverages an external HR face
for illumination compensation. A new illumination compensation loss is thus
developed to capture illumination from the external guided face image
effectively. Furthermore, our method offsets illumination and upsamples facial
details alternately in a coarse-to-fine fashion, thus alleviating the
correspondence ambiguity between LR inputs and external HR inputs. Extensive
experiments demonstrate that our method manifests authentic HR face images in a
uniform illumination condition and outperforms state-of-the-art methods
qualitatively and quantitatively.
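To make the pipeline described in the abstract concrete, below is a minimal, hypothetical sketch of the alternating coarse-to-fine scheme: at each scale, the input's own features are refined (the role of the internal CPnet), features from an external HR guidance face are fused for illumination compensation (the role of the external CPnet), and the result is upsampled before the next stage. The module structure, layer choices, and 2x-per-stage schedule are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoarseToFineHallucinator(nn.Module):
    """Illustrative generator: alternately refines the input's own features,
    compensates illumination using an external HR guidance face, and upsamples,
    as the abstract describes. All layer choices here are assumptions."""

    def __init__(self, channels=64, num_scales=3):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        # One internal / external block pair per scale (hypothetical design).
        self.internal_blocks = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(num_scales)])
        self.external_blocks = nn.ModuleList(
            [nn.Conv2d(channels * 2, channels, 3, padding=1) for _ in range(num_scales)])
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, lr_face, hr_guide):
        feat = self.head(lr_face)
        for internal, external in zip(self.internal_blocks, self.external_blocks):
            # "Internal CPnet" role: reuse the input's own features to enhance details.
            feat = F.relu(internal(feat)) + feat
            # "External CPnet" role: fuse features of the HR guidance face,
            # resized to the current resolution, for illumination compensation.
            guide = F.interpolate(hr_guide, size=feat.shape[-2:], mode="bilinear",
                                  align_corners=False)
            feat = F.relu(external(torch.cat([feat, self.head(guide)], dim=1)))
            # Upsample by 2x before the next (finer) stage.
            feat = F.interpolate(feat, scale_factor=2.0, mode="bilinear",
                                 align_corners=False)
        return torch.sigmoid(self.tail(feat))
```

With three 2x stages, a 16x16 thumbnail maps to a 128x128 output. The illumination compensation loss mentioned in the abstract would additionally constrain the output's illumination to match the guidance face; its exact form is not reproduced here.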
Related papers
- Photometric Inverse Rendering: Shading Cues Modeling and Surface Reflectance Regularization [46.146783750386994]
We propose a new method for neural inverse rendering.
Our method jointly optimizes the light source position to account for self-shadows in images.
To enhance surface reflectance decomposition, we introduce a new regularization.
arXiv Detail & Related papers (2024-08-13T11:39:14Z)
- SIRe-IR: Inverse Rendering for BRDF Reconstruction with Shadow and Illumination Removal in High-Illuminance Scenes [51.50157919750782]
We present SIRe-IR, an inverse rendering approach based on implicit neural representations that decomposes the scene into an environment map, albedo, and roughness.
By accurately modeling the indirect radiance field, normal, visibility, and direct light simultaneously, we are able to remove both shadows and indirect illumination.
Even in the presence of intense illumination, our method recovers high-quality albedo and roughness with no shadow interference.
arXiv Detail & Related papers (2023-10-19T10:44:23Z)
- Invertible Network for Unpaired Low-light Image Enhancement [78.33382003460903]
We propose to leverage an invertible network that enhances low-light images in the forward pass and degrades normal-light images in the inverse pass, learned in an unpaired manner.
In addition to the adversarial loss, we design various loss functions to ensure the stability of training and preserve more image details.
We present a progressive self-guided enhancement process for low-light images and achieve favorable performance compared with state-of-the-art methods.
arXiv Detail & Related papers (2021-12-24T17:00:54Z)
- Pro-UIGAN: Progressive Face Hallucination from Occluded Thumbnails [53.080403912727604]
We propose a multi-stage Progressive Upsampling and Inpainting Generative Adversarial Network, dubbed Pro-UIGAN.
It exploits facial geometry priors to replenish and upsample (8x) occluded and tiny faces.
Pro-UIGAN produces visually pleasing HR faces and achieves superior performance in downstream tasks.
arXiv Detail & Related papers (2021-08-02T02:29:24Z)
- Intrinsic Image Transfer for Illumination Manipulation [1.2387676601792899]
This paper presents a novel intrinsic image transfer (IIT) algorithm for illumination manipulation.
It creates a local image translation between two illumination surfaces.
We show that all losses can be reduced without requiring an intrinsic image decomposition.
arXiv Detail & Related papers (2021-07-01T19:12:24Z)
- Learning Flow-based Feature Warping for Face Frontalization with Illumination Inconsistent Supervision [73.18554605744842]
The Flow-based Feature Warping Model (FFWM) learns to synthesize photo-realistic and illumination-preserving frontal images.
An Illumination Preserving Module (IPM) is proposed to learn illumination-preserving image synthesis.
A Warp Attention Module (WAM) is introduced to reduce the pose discrepancy in the feature level.
arXiv Detail & Related papers (2020-08-16T06:07:00Z)
- Recurrent Exposure Generation for Low-Light Face Detection [113.25331155337759]
We propose a novel Recurrent Exposure Generation (REG) module and a Multi-Exposure Detection (MED) module.
REG progressively and efficiently produces intermediate images corresponding to various exposure settings.
Such pseudo-exposures are then fused by MED to detect faces across different lighting conditions.
arXiv Detail & Related papers (2020-07-21T17:30:51Z)
- Adaptive Multiscale Illumination-Invariant Feature Representation for Undersampled Face Recognition [29.002873450422083]
This paper presents an illumination-invariant feature representation approach that eliminates the effect of varying illumination in undersampled face recognition.
A new illumination level classification technique based on Singular Value Decomposition (SVD) is proposed to judge the illumination level of the input image; a minimal sketch of such a classifier appears after this list.
The experimental results demonstrate that the JLEF-feature and AJLEF-face outperform other related approaches for undersampled face recognition under varying illumination.
arXiv Detail & Related papers (2020-04-07T06:48:44Z)
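The SVD-based illumination level classification mentioned in the last entry can be sketched as follows. The statistic used here (the leading singular value of the intensity matrix, normalized by image size, which behaves like a luminance proxy) and both thresholds are assumptions for illustration; the cited paper's actual criterion may differ.

```python
import numpy as np

def illumination_level(gray_face, low_thr=0.35, high_thr=0.65):
    """Classify a face crop as low / normal / high illumination from the
    leading singular value of its intensity matrix.

    The statistic and both thresholds are illustrative assumptions, not the
    exact criterion of the cited paper.
    """
    img = gray_face.astype(np.float64) / 255.0   # assumes an 8-bit grayscale crop
    s = np.linalg.svd(img, compute_uv=False)     # singular values, descending
    luminance = s[0] / np.sqrt(img.size)         # ~mean brightness for a near-flat image
    if luminance < low_thr:
        return "low"
    if luminance > high_thr:
        return "high"
    return "normal"
```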