Deep Portrait Lighting Enhancement with 3D Guidance
- URL: http://arxiv.org/abs/2108.02121v1
- Date: Wed, 4 Aug 2021 15:49:09 GMT
- Title: Deep Portrait Lighting Enhancement with 3D Guidance
- Authors: Fangzhou Han, Can Wang, Hao Du and Jing Liao
- Abstract summary: We present a novel deep learning framework for portrait lighting enhancement based on 3D facial guidance.
Experimental results on the FFHQ dataset and in-the-wild images show that the proposed method outperforms state-of-the-art methods in terms of both quantitative metrics and visual quality.
- Score: 24.01582513386902
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite recent breakthroughs in deep learning methods for image
lighting enhancement, these methods perform poorly on portraits because their
models ignore 3D facial information. To address this, we present a novel
deep learning framework for portrait lighting enhancement based on 3D facial
guidance. Our framework consists of two stages. In the first stage, a network
predicts corrected lighting parameters from the poorly lit input image, with
the assistance of a 3D morphable model and a differentiable renderer. Given
the predicted lighting parameters, the differentiable renderer renders a face
image with corrected shading and texture, which serves as the 3D guidance for
learning image lighting enhancement in the second stage. To better
exploit the long-range correlations between the input and the guidance, in the
second stage, we design an image-to-image translation network with a novel
transformer architecture, which automatically produces a lighting-enhanced
result. Experimental results on the FFHQ dataset and in-the-wild images show
that the proposed method outperforms state-of-the-art methods in terms of both
quantitative metrics and visual quality. We will publish our dataset along with
more results on https://cassiepython.github.io/egsr/index.html.
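The two-stage pipeline described in the abstract can be sketched in miniature. This is a minimal illustrative stub, not the authors' implementation: the function names, the fixed directional light, the Lambertian shading, and the blending in stage 2 are all our assumptions standing in for the paper's lighting-prediction network, differentiable renderer, and transformer-based translation network.

```python
# Hypothetical stand-ins for the paper's two stages; names are ours,
# not the authors'.

def predict_lighting(image_stats):
    """Stage 1 (stub): a network would map the badly lit input image to
    corrected lighting parameters. Here we return a fixed frontal
    directional light as a placeholder for the predicted parameters."""
    return {"direction": (0.0, 0.0, 1.0), "intensity": 1.0}

def render_guidance(normals, albedo, light):
    """Differentiable-renderer stand-in: Lambertian shading of the 3DMM
    face under the predicted light yields the 3D guidance image."""
    lx, ly, lz = light["direction"]
    out = []
    for (nx, ny, nz), a in zip(normals, albedo):
        shade = max(0.0, nx * lx + ny * ly + nz * lz) * light["intensity"]
        out.append(a * shade)
    return out

def enhance(input_pixels, guidance):
    """Stage 2 (stub): the transformer-based image-to-image network would
    fuse the input with the guidance; here we simply blend them."""
    return [0.5 * p + 0.5 * g for p, g in zip(input_pixels, guidance)]

# Toy per-pixel normals and albedo for a 3-"pixel" face.
normals = [(0.0, 0.0, 1.0), (0.6, 0.0, 0.8), (0.0, 0.6, 0.8)]
albedo = [0.9, 0.8, 0.7]
dark_input = [0.2, 0.1, 0.1]  # badly lit observation

light = predict_lighting(dark_input)                # stage 1
guidance = render_guidance(normals, albedo, light)  # 3D guidance
result = enhance(dark_input, guidance)              # stage 2
print([round(v, 3) for v in result])
```

In the real system, both stages are learned networks and the renderer is differentiable so that rendering losses can supervise the lighting predictor end to end; the stubs above only trace the data flow.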
Related papers
- Lite2Relight: 3D-aware Single Image Portrait Relighting [87.62069509622226]
Lite2Relight is a novel technique that can predict 3D consistent head poses of portraits.
By utilizing a pre-trained geometry-aware encoder and a feature alignment module, we map input images into a relightable 3D space.
This includes producing 3D-consistent results of the full head, including hair, eyes, and expressions.
arXiv Detail & Related papers (2024-07-15T07:16:11Z)
- IllumiNeRF: 3D Relighting without Inverse Rendering [25.642960820693947]
Current methods for relightable view synthesis are based on inverse rendering, and attempt to disentangle the object geometry, materials, and lighting that explain the input images.
We propose a simpler approach: we first relight each input image using an image diffusion model conditioned on lighting and then reconstruct a Neural Radiance Field (NeRF) with these relit images.
We demonstrate that this strategy is surprisingly competitive and achieves state-of-the-art results on multiple relighting benchmarks.
arXiv Detail & Related papers (2024-06-10T17:59:59Z)
- InstructPix2NeRF: Instructed 3D Portrait Editing from a Single Image [25.076270175205593]
InstructPix2NeRF enables instructed 3D-aware portrait editing from a single open-world image with human instructions.
At its core lies a conditional latent 3D diffusion process that lifts 2D editing to 3D space by learning the correlation between the paired images' difference and the instructions via triplet data.
arXiv Detail & Related papers (2023-11-06T02:21:11Z)
- ARTIC3D: Learning Robust Articulated 3D Shapes from Noisy Web Image Collections [71.46546520120162]
Estimating 3D articulated shapes like animal bodies from monocular images is inherently challenging.
We propose ARTIC3D, a self-supervised framework to reconstruct per-instance 3D shapes from a sparse image collection in-the-wild.
We produce realistic animations by fine-tuning the rendered shape and texture under rigid part transformations.
arXiv Detail & Related papers (2023-06-07T17:47:50Z)
- Real-Time Radiance Fields for Single-Image Portrait View Synthesis [85.32826349697972]
We present a one-shot method to infer and render a 3D representation from a single unposed image in real-time.
Given a single RGB input, our image encoder directly predicts a canonical triplane representation of a neural radiance field for 3D-aware novel view synthesis via volume rendering.
Our method is fast (24 fps) on consumer hardware, and produces higher quality results than strong GAN-inversion baselines that require test-time optimization.
arXiv Detail & Related papers (2023-05-03T17:56:01Z)
- Learning to Relight Portrait Images via a Virtual Light Stage and Synthetic-to-Real Adaptation [76.96499178502759]
Relighting aims to re-illuminate the person in the image as if the person appeared in an environment with the target lighting.
Recent methods rely on deep learning to achieve high-quality results.
We propose a new approach that can perform on par with the state-of-the-art (SOTA) relighting methods without requiring a light stage.
arXiv Detail & Related papers (2022-09-21T17:15:58Z)
- A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis [163.96778522283967]
We propose a shading-guided generative implicit model that learns a significantly improved shape representation.
An accurate 3D shape should also yield a realistic rendering under different lighting conditions.
Our experiments on multiple datasets show that the proposed approach achieves photorealistic 3D-aware image synthesis.
arXiv Detail & Related papers (2021-10-29T10:53:12Z)
- Learning Indoor Inverse Rendering with 3D Spatially-Varying Lighting [149.1673041605155]
We address the problem of jointly estimating albedo, normals, depth and 3D spatially-varying lighting from a single image.
Most existing methods formulate the task as image-to-image translation, ignoring the 3D properties of the scene.
We propose a unified, learning-based inverse framework that formulates 3D spatially-varying lighting.
arXiv Detail & Related papers (2021-09-13T15:29:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.