LightPainter: Interactive Portrait Relighting with Freehand Scribble
- URL: http://arxiv.org/abs/2303.12950v1
- Date: Wed, 22 Mar 2023 23:17:11 GMT
- Authors: Yiqun Mei, He Zhang, Xuaner Zhang, Jianming Zhang, Zhixin Shu, Yilin
Wang, Zijun Wei, Shi Yan, HyunJoon Jung, Vishal M. Patel
- Abstract summary: We introduce LightPainter, a scribble-based relighting system that allows users to interactively manipulate portrait lighting effects with ease.
To train the relighting module, we propose a novel scribble simulation procedure to mimic real user scribbles.
We demonstrate high-quality and flexible portrait lighting editing capability with both quantitative and qualitative experiments.
- Score: 79.95574780974103
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent portrait relighting methods have achieved realistic results of
portrait lighting effects given a desired lighting representation such as an
environment map. However, these methods are not intuitive for user interaction
and lack precise lighting control. We introduce LightPainter, a scribble-based
relighting system that allows users to interactively manipulate portrait
lighting effects with ease. This is achieved by two conditional neural networks,
a delighting module that recovers geometry and albedo optionally conditioned on
skin tone, and a scribble-based module for relighting. To train the relighting
module, we propose a novel scribble simulation procedure to mimic real user
scribbles, which allows our pipeline to be trained without any human
annotations. We demonstrate high-quality and flexible portrait lighting editing
capability with both quantitative and qualitative experiments. User study
comparisons with commercial lighting editing tools also demonstrate consistent
user preference for our method.
Related papers
- SynthLight: Portrait Relighting with Diffusion Model by Learning to Re-render Synthetic Faces [16.65498750779018]
We introduce SynthLight, a diffusion model for portrait relighting.
Our approach frames image relighting as a re-rendering problem, where pixels are transformed in response to changes in environmental lighting conditions.
We synthesize a dataset to simulate this lighting-conditioned transformation with 3D head assets under varying lighting.
arXiv Detail & Related papers (2025-01-16T18:59:48Z)
- Materialist: Physically Based Editing Using Single-Image Inverse Rendering [50.39048790589746]
We present a method combining a learning-based approach with progressive differentiable rendering.
Our method achieves more realistic light-material interactions, accurate shadows, and global illumination.
We also propose a method for material transparency editing that operates effectively without requiring full scene geometry.
arXiv Detail & Related papers (2025-01-07T11:52:01Z)
- SpotLight: Shadow-Guided Object Relighting via Diffusion [13.187597686309225]
We show that precise lighting control can be achieved for object relighting simply by specifying the desired shadows of the object.
Our method, SpotLight, leverages existing neural rendering approaches and achieves controllable relighting results with no additional training.
arXiv Detail & Related papers (2024-11-27T16:06:08Z) - Relightable Neural Actor with Intrinsic Decomposition and Pose Control [80.06094206522668]
We propose Relightable Neural Actor, a new video-based method for learning a pose-driven neural human model that can be relighted.
For training, our method solely requires a multi-view recording of the human under a known, but static lighting condition.
To evaluate our approach in real-world scenarios, we collect a new dataset with four identities recorded under different light conditions, indoors and outdoors.
arXiv Detail & Related papers (2023-12-18T14:30:13Z) - EverLight: Indoor-Outdoor Editable HDR Lighting Estimation [9.443561684223514]
We propose a method which combines a parametric light model with 360° panoramas, ready to use as HDRI in rendering engines.
In our representation, users can easily edit light direction, intensity, number, etc. to adjust shading, while the model provides rich, complex reflections that blend seamlessly with the edits.
arXiv Detail & Related papers (2023-04-26T00:20:59Z) - WildLight: In-the-wild Inverse Rendering with a Flashlight [77.31815397135381]
We propose a practical photometric solution for in-the-wild inverse rendering under unknown ambient lighting.
Our system recovers scene geometry and reflectance using only multi-view images captured by a smartphone.
We demonstrate by extensive experiments that our method is easy to implement, casual to set up, and consistently outperforms existing in-the-wild inverse rendering techniques.
arXiv Detail & Related papers (2023-03-24T17:59:56Z) - Editable Indoor Lighting Estimation [6.531546527140474]
We propose a pipeline that estimates a parametric light that is easy to edit and allows renderings with strong shadows.
We show that our approach makes indoor lighting estimation easier to handle by a casual user, while still producing competitive results.
arXiv Detail & Related papers (2022-11-08T00:58:29Z) - Learning to Relight Portrait Images via a Virtual Light Stage and
Synthetic-to-Real Adaptation [76.96499178502759]
Relighting aims to re-illuminate the person in the image as if the person appeared in an environment with the target lighting.
Recent methods rely on deep learning to achieve high-quality results.
We propose a new approach that can perform on par with the state-of-the-art (SOTA) relighting methods without requiring a light stage.
arXiv Detail & Related papers (2022-09-21T17:15:58Z) - Neural Radiance Transfer Fields for Relightable Novel-view Synthesis
with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.