Deep Portrait Delighting
- URL: http://arxiv.org/abs/2203.12088v2
- Date: Thu, 24 Mar 2022 02:05:48 GMT
- Title: Deep Portrait Delighting
- Authors: Joshua Weir, Junhong Zhao, Andrew Chalmers, Taehyun Rhee
- Abstract summary: We present a deep neural network for removing undesirable shading features from an unconstrained portrait image.
Our training scheme incorporates three regularization strategies: masked loss, to emphasize high-frequency shading features; soft-shadow loss, which improves sensitivity to subtle changes in lighting; and shading-offset estimation, to supervise the separation of shading and texture.
Our method demonstrates improved delighting quality and generalization when compared with the state-of-the-art.
- Score: 9.455161801306739
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a deep neural network for removing undesirable shading features
from an unconstrained portrait image, recovering the underlying texture. Our
training scheme incorporates three regularization strategies: masked loss, to
emphasize high-frequency shading features; soft-shadow loss, which improves
sensitivity to subtle changes in lighting; and shading-offset estimation, to
supervise separation of shading and texture. Our method demonstrates improved
delighting quality and generalization when compared with the state-of-the-art.
We further demonstrate how our delighting method can enhance the performance of
light-sensitive computer vision tasks such as face relighting and semantic
parsing, allowing them to handle extreme lighting conditions.
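The abstract names the three regularization terms but not their exact form. As a rough sketch only, a masked reconstruction loss of the kind described (up-weighting regions that contain high-frequency shading such as shadow boundaries) might look like the following; the L1 distance, the weighting scheme, and the mask source are assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def masked_delighting_loss(pred_texture, gt_texture, shading_mask, boost=4.0):
    """Hypothetical masked loss: plain L1 everywhere, up-weighted where an
    externally provided mask flags high-frequency shading (e.g. shadow edges)."""
    per_pixel = F.l1_loss(pred_texture, gt_texture, reduction="none")
    weights = 1.0 + boost * shading_mask          # mask in [0, 1]; 1 = strong shading
    return (weights * per_pixel).mean()

# toy usage with random tensors standing in for network output and ground truth
pred = torch.rand(1, 3, 256, 256)
gt = torch.rand(1, 3, 256, 256)
mask = (torch.rand(1, 1, 256, 256) > 0.8).float()
print(masked_delighting_loss(pred, gt, mask).item())
```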
Related papers
- Inhomogeneous illumination image enhancement under extremely low visibility condition [3.534798835599242]
Imaging through dense fog presents unique challenges: essential visual information needed for applications such as object detection and recognition is obscured, hindering conventional image processing methods.
We introduce in this paper a novel method that adaptively filters background illumination based on Structural Differential and Integral Filtering to enhance only vital signal information.
Our findings demonstrate that our proposed method significantly enhances signal clarity under extremely low visibility conditions and outperforms existing techniques, offering substantial improvements for deep fog imaging applications.
arXiv Detail & Related papers (2024-04-26T16:09:42Z)
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
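The summary only states that CodeEnhance leverages quantized (codebook) priors. As a generic illustration of what a codebook lookup involves, not the paper's actual architecture, a nearest-codeword quantization step can be sketched as follows.

```python
import torch

def quantize_to_codebook(features, codebook):
    """Generic vector-quantization step: replace each feature vector with its
    nearest codebook entry (Euclidean distance). Shapes: features (N, D), codebook (K, D)."""
    dists = torch.cdist(features, codebook)       # (N, K) pairwise distances
    indices = dists.argmin(dim=1)                 # nearest codeword per feature
    return codebook[indices], indices

feats = torch.randn(8, 64)
book = torch.randn(512, 64)
quantized, idx = quantize_to_codebook(feats, book)
print(quantized.shape, idx.shape)                 # torch.Size([8, 64]) torch.Size([8])
```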
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
- SIRe-IR: Inverse Rendering for BRDF Reconstruction with Shadow and Illumination Removal in High-Illuminance Scenes [51.50157919750782]
We present SIRe-IR, an implicit neural inverse rendering approach that decomposes the scene into an environment map, albedo, and roughness.
By accurately modeling the indirect radiance field, normal, visibility, and direct light simultaneously, we are able to remove both shadows and indirect illumination.
Even in the presence of intense illumination, our method recovers high-quality albedo and roughness with no shadow interference.
arXiv Detail & Related papers (2023-10-19T10:44:23Z)
- Controllable Light Diffusion for Portraits [8.931046902694984]
We introduce light diffusion, a novel method to improve lighting in portraits.
Inspired by professional photographers' diffusers and scrims, our method softens lighting given only a single portrait photo.
arXiv Detail & Related papers (2023-05-08T14:46:28Z)
- Low-light Image and Video Enhancement via Selective Manipulation of Chromaticity [1.4680035572775534]
We present a simple yet effective approach for low-light image and video enhancement.
This adaptive manipulation of chromaticity allows us to avoid the costly step of decomposing a low-light image into illumination and reflectance.
Our results on standard low-light image datasets show the efficacy of our algorithm and its qualitative and quantitative superiority over several state-of-the-art techniques.
arXiv Detail & Related papers (2022-03-09T17:01:28Z)
- Invertible Network for Unpaired Low-light Image Enhancement [78.33382003460903]
We propose to leverage an invertible network to enhance a low-light image in the forward process and degrade the normal-light one inversely, using unpaired learning.
In addition to the adversarial loss, we design various loss functions to ensure the stability of training and preserve more image details.
We present a progressive self-guided enhancement process for low-light images and achieve favorable performance against state-of-the-art methods.
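The abstract does not specify the invertible architecture. As a hedged sketch of the general idea, a network whose forward pass (e.g. low-light to enhanced) can be exactly inverted (enhanced back to degraded), an additive coupling block of the kind used in invertible networks could look like this; the block choice and the tiny conditioning sub-network are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    """Toy invertible block: the first half of the channels conditions a shift
    applied to the second half, so the transform can be undone exactly."""
    def __init__(self, channels):
        super().__init__()
        self.shift_net = nn.Sequential(
            nn.Conv2d(channels // 2, channels // 2, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels // 2, channels // 2, 3, padding=1))

    def forward(self, x):                          # e.g. low-light -> enhanced
        a, b = x.chunk(2, dim=1)
        return torch.cat([a, b + self.shift_net(a)], dim=1)

    def inverse(self, y):                          # e.g. enhanced -> degraded back
        a, b = y.chunk(2, dim=1)
        return torch.cat([a, b - self.shift_net(a)], dim=1)

layer = AdditiveCoupling(4)
x = torch.randn(1, 4, 32, 32)
print(torch.allclose(layer.inverse(layer(x)), x, atol=1e-5))   # True
```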
arXiv Detail & Related papers (2021-12-24T17:00:54Z)
- Towards High Fidelity Face Relighting with Realistic Shadows [21.09340135707926]
Our method learns to predict the ratio (quotient) image between a source image and the target image with the desired lighting.
During training, our model also learns to accurately modify shadows by using estimated shadow masks.
We demonstrate that our proposed method faithfully maintains the local facial details of the subject and can accurately handle hard shadows.
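The quotient-image formulation is concrete enough to illustrate: the network predicts a per-pixel ratio image that, multiplied with the source, yields the relit result. The sketch below shows only that arithmetic; the toy source/target pair stands in for real data and a learned prediction.

```python
import numpy as np

def ratio_image(source, target, eps=1e-6):
    """Quotient image used as the learning target: target ~= source * ratio."""
    return target / (source + eps)

def relight(source, ratio):
    """Apply a (predicted) quotient image by per-pixel multiplication."""
    return np.clip(source * ratio, 0.0, 1.0)

src = np.random.rand(128, 128, 3).astype(np.float32)       # stand-in source portrait
tgt = np.clip(src * 1.3, 0.0, 1.0)                          # stand-in relit target
r = ratio_image(src, tgt)
print(np.allclose(relight(src, r), tgt, atol=1e-3))         # True for this toy pair
```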
arXiv Detail & Related papers (2021-04-02T00:28:40Z)
- Towards High Fidelity Monocular Face Reconstruction with Rich Reflectance using Self-supervised Learning and Ray Tracing [49.759478460828504]
Methods combining deep neural network encoders with differentiable rendering have opened up the path for very fast monocular reconstruction of geometry, lighting and reflectance.
Ray tracing was introduced for monocular face reconstruction within a classic optimization-based framework.
We propose a new method that greatly improves reconstruction quality and robustness in general scenes.
arXiv Detail & Related papers (2021-03-29T08:58:10Z)
- Deep Bilateral Retinex for Low-Light Image Enhancement [96.15991198417552]
Low-light images suffer from poor visibility caused by low contrast, color distortion and measurement noise.
This paper proposes a deep learning method for low-light image enhancement with a particular focus on handling the measurement noise.
The proposed method is very competitive with state-of-the-art methods and has a significant advantage over others when processing images captured in extremely low lighting conditions.
arXiv Detail & Related papers (2020-07-04T06:26:44Z)
- Deep CG2Real: Synthetic-to-Real Translation via Image Disentanglement [78.58603635621591]
Training an unpaired synthetic-to-real translation network in image space is severely under-constrained.
We propose a semi-supervised approach that operates on the disentangled shading and albedo layers of the image.
Our two-stage pipeline first learns to predict accurate shading in a supervised fashion using physically-based renderings as targets.
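As a minimal illustration of the shading/albedo disentanglement the summary refers to (not the paper's two-stage pipeline), an image can be modeled as the per-pixel product of an albedo layer and a shading layer, so shading can be edited while the albedo stays fixed.

```python
import numpy as np

def compose(albedo, shading):
    """Intrinsic-image model: image = albedo * shading (per pixel, per channel)."""
    return np.clip(albedo * shading, 0.0, 1.0)

albedo = np.random.rand(64, 64, 3).astype(np.float32)        # stand-in reflectance layer
shading = np.random.rand(64, 64, 1).astype(np.float32)       # stand-in gray-scale shading
synthetic = compose(albedo, shading)
brightened = compose(albedo, np.clip(shading * 1.5, 0.0, 1.0))  # adjust shading only
print(synthetic.shape, brightened.shape)
```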
arXiv Detail & Related papers (2020-03-27T21:45:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.