COMPOSE: Comprehensive Portrait Shadow Editing
- URL: http://arxiv.org/abs/2408.13922v1
- Date: Sun, 25 Aug 2024 19:18:18 GMT
- Title: COMPOSE: Comprehensive Portrait Shadow Editing
- Authors: Andrew Hou, Zhixin Shu, Xuaner Zhang, He Zhang, Yannick Hold-Geoffroy, Jae Shin Yoon, Xiaoming Liu
- Abstract summary: COMPOSE is a novel shadow editing pipeline for human portraits.
It offers precise control over shadow attributes such as shape, intensity, and position.
We have trained models to: (1) predict this light source representation from images, and (2) generate realistic shadows using this representation.
- Score: 25.727386174616868
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing portrait relighting methods struggle with precise control over facial shadows, particularly when faced with challenges such as handling hard shadows from directional light sources or adjusting shadows while remaining in harmony with existing lighting conditions. In many situations, completely altering input lighting is undesirable for portrait retouching applications: one may want to preserve some authenticity in the captured environment. Existing shadow editing methods typically restrict their application to just the facial region and often offer limited lighting control options, such as shadow softening or rotation. In this paper, we introduce COMPOSE: a novel shadow editing pipeline for human portraits, offering precise control over shadow attributes such as shape, intensity, and position, all while preserving the original environmental illumination of the portrait. This level of disentanglement and controllability is obtained thanks to a novel decomposition of the environment map representation into ambient light and an editable Gaussian dominant light source. COMPOSE is a four-stage pipeline that consists of light estimation and editing, light diffusion, shadow synthesis, and finally shadow editing. We define facial shadows as the result of a dominant light source, encoded using our novel Gaussian environment map representation. Utilizing an OLAT dataset, we have trained models to: (1) predict this light source representation from images, and (2) generate realistic shadows using this representation. We also demonstrate comprehensive and intuitive shadow editing with our pipeline. Through extensive quantitative and qualitative evaluations, we have demonstrated the robust capability of our system in shadow editing.
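The key idea in the abstract is representing the dominant light as an editable Gaussian lobe on an environment map, added on top of an ambient term. A minimal sketch of such a representation is below; the function and parameter names (`gaussian_env_map`, `center_uv`, `sigma`, `intensity`, `ambient`) are illustrative assumptions, not the paper's actual parameterization, and the wrap-around of an equirectangular map is ignored for simplicity.

```python
import numpy as np

def gaussian_env_map(height, width, center_uv, sigma, intensity, ambient):
    """Build a hypothetical 'ambient + Gaussian dominant light' environment map.

    center_uv: (u, v) in [0, 1]^2, dominant light position on the map.
    sigma: angular spread of the light; intensity: its peak radiance.
    ambient: constant ambient radiance added everywhere.
    """
    v, u = np.meshgrid(np.linspace(0, 1, height),
                       np.linspace(0, 1, width), indexing="ij")
    d2 = (u - center_uv[0]) ** 2 + (v - center_uv[1]) ** 2
    dominant = intensity * np.exp(-d2 / (2 * sigma ** 2))
    return ambient + dominant  # (H, W) scalar radiance map

# Shadow edits then reduce to changing the Gaussian's parameters:
env = gaussian_env_map(32, 64, center_uv=(0.25, 0.4), sigma=0.05,
                       intensity=5.0, ambient=0.2)
moved = gaussian_env_map(32, 64, center_uv=(0.75, 0.4), sigma=0.05,
                         intensity=5.0, ambient=0.2)   # shadow position edit
softer = gaussian_env_map(32, 64, center_uv=(0.25, 0.4), sigma=0.15,
                          intensity=5.0, ambient=0.2)  # softer, wider shadow
```

In this sketch, moving the Gaussian's center relocates the shadow, widening `sigma` softens it, and scaling `intensity` controls its strength, while the ambient term (the rest of the environment illumination) is left untouched.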
Related papers
- All-frequency Full-body Human Image Relighting [1.529342790344802]
Relighting of human images enables post-photography editing of lighting effects in portraits.
The current mainstream approach uses neural networks to approximate lighting effects without explicitly accounting for the principle of physical shading.
We propose a two-stage relighting method that can reproduce physically-based shadows and shading from low to high frequencies.
arXiv Detail & Related papers (2024-11-01T04:45:48Z) - LightIt: Illumination Modeling and Control for Diffusion Models [61.80461416451116]
We introduce LightIt, a method for explicit illumination control for image generation.
Recent generative methods lack lighting control, which is crucial to numerous artistic aspects of image generation.
Our method is the first that enables the generation of images with controllable, consistent lighting.
arXiv Detail & Related papers (2024-03-15T18:26:33Z) - Recasting Regional Lighting for Shadow Removal [41.107191352835315]
In a shadow region, the degradation degree of object textures depends on the local illumination.
We propose a shadow-aware decomposition network to estimate the illumination and reflectance layers of shadow regions.
We then propose a novel bilateral correction network to recast the lighting of shadow regions in the illumination layer.
arXiv Detail & Related papers (2024-02-01T05:08:39Z) - SIRe-IR: Inverse Rendering for BRDF Reconstruction with Shadow and Illumination Removal in High-Illuminance Scenes [51.50157919750782]
We present SIRe-IR, an implicit neural inverse rendering approach that decomposes the scene into an environment map, albedo, and roughness.
By accurately modeling the indirect radiance field, normal, visibility, and direct light simultaneously, we are able to remove both shadows and indirect illumination.
Even in the presence of intense illumination, our method recovers high-quality albedo and roughness with no shadow interference.
arXiv Detail & Related papers (2023-10-19T10:44:23Z) - Controllable Light Diffusion for Portraits [8.931046902694984]
We introduce light diffusion, a novel method to improve lighting in portraits.
Inspired by professional photographers' diffusers and scrims, our method softens lighting given only a single portrait photo.
arXiv Detail & Related papers (2023-05-08T14:46:28Z) - LightPainter: Interactive Portrait Relighting with Freehand Scribble [79.95574780974103]
We introduce LightPainter, a scribble-based relighting system that allows users to interactively manipulate portrait lighting effects with ease.
To train the relighting module, we propose a novel scribble simulation procedure to mimic real user scribbles.
We demonstrate high-quality and flexible portrait lighting editing capability with both quantitative and qualitative experiments.
arXiv Detail & Related papers (2023-03-22T23:17:11Z) - Controllable Shadow Generation Using Pixel Height Maps [58.59256060452418]
Physics-based shadow rendering methods require 3D geometries, which are not always available.
Deep learning-based shadow synthesis methods learn a mapping from the light information to an object's shadow without explicitly modeling the shadow geometry.
We introduce pixel height, a novel geometry representation that encodes the correlations between objects, ground, and camera pose.
arXiv Detail & Related papers (2022-07-12T08:29:51Z) - Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z) - Self-supervised Outdoor Scene Relighting [92.20785788740407]
We propose a self-supervised approach for relighting.
Our approach is trained only on corpora of images collected from the internet without any user-supervision.
Results show the ability of our technique to produce photo-realistic and physically plausible results that generalize to unseen scenes.
arXiv Detail & Related papers (2021-07-07T09:46:19Z) - Towards High Fidelity Face Relighting with Realistic Shadows [21.09340135707926]
Our method learns to predict the ratio (quotient) image between a source image and the target image with the desired lighting.
During training, our model also learns to accurately modify shadows by using estimated shadow masks.
We demonstrate that our proposed method faithfully maintains the local facial details of the subject and can accurately handle hard shadows.
arXiv Detail & Related papers (2021-04-02T00:28:40Z) - Portrait Shadow Manipulation [37.414681268753526]
Casually-taken portrait photographs often suffer from unflattering lighting and shadowing because of suboptimal conditions in the environment.
We present a computational approach that gives casual photographers some of this control, thereby allowing poorly-lit portraits to be relit post-capture in a realistic and easily-controllable way.
Our approach relies on a pair of neural networks---one to remove foreign shadows cast by external objects, and another to soften facial shadows cast by the features of the subject and to add a synthetic fill light to improve the lighting ratio.
arXiv Detail & Related papers (2020-05-18T17:51:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.