SynthLight: Portrait Relighting with Diffusion Model by Learning to Re-render Synthetic Faces
- URL: http://arxiv.org/abs/2501.09756v1
- Date: Thu, 16 Jan 2025 18:59:48 GMT
- Title: SynthLight: Portrait Relighting with Diffusion Model by Learning to Re-render Synthetic Faces
- Authors: Sumit Chaturvedi, Mengwei Ren, Yannick Hold-Geoffroy, Jingyuan Liu, Julie Dorsey, Zhixin Shu
- Abstract summary: We introduce SynthLight, a diffusion model for portrait relighting.
Our approach frames image relighting as a re-rendering problem, where pixels are transformed in response to changes in environmental lighting conditions.
We synthesize a dataset to simulate this lighting-conditioned transformation with 3D head assets under varying lighting.
- Abstract: We introduce SynthLight, a diffusion model for portrait relighting. Our approach frames image relighting as a re-rendering problem, where pixels are transformed in response to changes in environmental lighting conditions. Using a physically-based rendering engine, we synthesize a dataset to simulate this lighting-conditioned transformation with 3D head assets under varying lighting. We propose two training and inference strategies to bridge the gap between the synthetic and real image domains: (1) multi-task training that takes advantage of real human portraits without lighting labels; (2) an inference-time diffusion sampling procedure based on classifier-free guidance that leverages the input portrait to better preserve details. Our method generalizes to diverse real photographs and produces realistic illumination effects, including specular highlights and cast shadows, while preserving the subject's identity. Our quantitative experiments on Light Stage data demonstrate results comparable to state-of-the-art relighting methods. Our qualitative results on in-the-wild images showcase rich and unprecedented illumination effects. Project Page: https://vrroom.github.io/synthlight/
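The second inference strategy can be made concrete with a short sketch. The abstract does not give the exact formulation, so the following is a minimal, assumed version of image-conditioned classifier-free guidance at a single denoising step; `denoiser`, its keyword arguments, and `guidance_scale` are hypothetical names, not the paper's API:

```python
def guided_noise_prediction(denoiser, x_t, t, portrait, lighting, guidance_scale=2.0):
    """One denoising step with classifier-free guidance (sketch, not the paper's exact procedure).

    The conditional branch sees the input portrait and the target lighting;
    the reference branch keeps the portrait but drops the lighting. The
    guided estimate extrapolates between the two, steering the sample toward
    the target lighting while the always-present portrait conditioning helps
    preserve the subject's details.
    """
    eps_cond = denoiser(x_t, t, portrait=portrait, lighting=lighting)
    eps_ref = denoiser(x_t, t, portrait=portrait, lighting=None)
    return eps_ref + guidance_scale * (eps_cond - eps_ref)
```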
Related papers
- Real-time 3D-aware Portrait Video Relighting [89.41078798641732]
We present the first real-time 3D-aware method for relighting in-the-wild videos of talking faces based on Neural Radiance Fields (NeRF).
We infer an albedo tri-plane, as well as a shading tri-plane based on a desired lighting condition for each video frame with fast dual-encoders.
Our method runs at 32.98 fps on consumer-level hardware and achieves state-of-the-art results in terms of reconstruction quality, lighting error, lighting instability, temporal consistency and inference speed.
arXiv Detail & Related papers (2024-10-24T01:34:11Z)
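The entry above factors appearance into a lighting-independent albedo tri-plane and a lighting-dependent shading tri-plane. A minimal sketch of how such a factorization composes into a relit color follows; the decoder heads, feature dimensions, and activations are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class AlbedoShadingDecoder(nn.Module):
    """Hypothetical decoder over per-point tri-plane features (sketch only)."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.albedo_head = nn.Sequential(nn.Linear(feat_dim, 3), nn.Sigmoid())
        self.shading_head = nn.Sequential(nn.Linear(feat_dim, 1), nn.Softplus())

    def forward(self, albedo_feat, shading_feat):
        albedo = self.albedo_head(albedo_feat)     # (N, 3), lighting-independent
        shading = self.shading_head(shading_feat)  # (N, 1), driven by target lighting
        return albedo * shading                    # relit radiance per sample

decoder = AlbedoShadingDecoder()
relit = decoder(torch.randn(1024, 32), torch.randn(1024, 32))
```

Because only the shading tri-plane depends on the lighting condition, changing the target lighting leaves the albedo branch untouched, which is what makes per-frame relighting fast.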
- Lite2Relight: 3D-aware Single Image Portrait Relighting [87.62069509622226]
Lite2Relight is a novel technique that can predict 3D-consistent head poses of portraits.
By utilizing a pre-trained geometry-aware encoder and a feature alignment module, we map input images into a relightable 3D space.
This includes producing 3D-consistent results of the full head, including hair, eyes, and expressions.
arXiv Detail & Related papers (2024-07-15T07:16:11Z)
- Neural Gaffer: Relighting Any Object via Diffusion [43.87941408722868]
We propose a novel end-to-end 2D relighting diffusion model, called Neural Gaffer.
Our model takes a single image of any object and can synthesize an accurate, high-quality relit image under any novel lighting condition.
We evaluate our model on both synthetic and in-the-wild Internet imagery and demonstrate its advantages in terms of generalization and accuracy.
arXiv Detail & Related papers (2024-06-11T17:50:15Z)
- Relightful Harmonization: Lighting-aware Portrait Background Replacement [23.19641174787912]
We introduce Relightful Harmonization, a lighting-aware diffusion model designed to seamlessly harmonize sophisticated lighting effects for the foreground portrait using any background image.
Our approach unfolds in three stages. First, we introduce a lighting representation module that allows our diffusion model to encode lighting information from the target image background.
Second, we introduce an alignment network that aligns lighting features learned from image backgrounds with lighting features learned from panoramic environment maps.
arXiv Detail & Related papers (2023-12-11T23:20:31Z)
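The alignment stage described above maps background-derived lighting features into the feature space of panorama-derived ones. A minimal sketch of one way to train such an alignment network, assuming paired features are available; the architecture, feature size, and regression loss are assumptions for illustration, not the paper's design:

```python
import torch
import torch.nn as nn

class LightingAlignment(nn.Module):
    """Hypothetical alignment network: background lighting features -> panorama feature space."""
    def __init__(self, dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, background_features):
        return self.net(background_features)

# Training sketch: regress aligned background features onto panorama features,
# assuming pairs where the background crop comes from the same environment map.
align = LightingAlignment()
bg_feat = torch.randn(8, 512)    # from a background lighting encoder (assumed)
pano_feat = torch.randn(8, 512)  # from a panorama lighting encoder (assumed)
loss = nn.functional.mse_loss(align(bg_feat), pano_feat)
loss.backward()
```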
- DiFaReli++: Diffusion Face Relighting with Consistent Cast Shadows [11.566896201650056]
We introduce a novel approach to single-view face relighting in the wild, addressing challenges such as global illumination and cast shadows.
We propose a single-shot relighting framework that requires just one network pass, given pre-processed data, and even outperforms the teacher model across all metrics.
arXiv Detail & Related papers (2023-04-19T08:03:20Z)
- LightPainter: Interactive Portrait Relighting with Freehand Scribble [79.95574780974103]
We introduce LightPainter, a scribble-based relighting system that allows users to interactively manipulate portrait lighting effects with ease.
To train the relighting module, we propose a novel scribble simulation procedure to mimic real user scribbles.
We demonstrate high-quality and flexible portrait lighting editing capability with both quantitative and qualitative experiments.
arXiv Detail & Related papers (2023-03-22T23:17:11Z)
- Learning to Relight Portrait Images via a Virtual Light Stage and Synthetic-to-Real Adaptation [76.96499178502759]
Relighting aims to re-illuminate the person in the image as if the person appeared in an environment with the target lighting.
Recent methods rely on deep learning to achieve high-quality results.
We propose a new approach that can perform on par with the state-of-the-art (SOTA) relighting methods without requiring a light stage.
arXiv Detail & Related papers (2022-09-21T17:15:58Z)
- Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z)
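Classical precomputed radiance transfer (PRT), which the entry above makes neural, reduces relighting to a dot product between per-point transfer coefficients and the lighting expressed in the same basis, e.g. spherical harmonics. A minimal sketch of that composition step; the learned transfer function itself is elided:

```python
import torch

def prt_radiance(transfer, light_sh):
    """Outgoing radiance via precomputed radiance transfer (classical formulation, sketch only).

    transfer: (N, K) per-point transfer coefficients, which bake in visibility
              and global-illumination effects (learned by a network in the paper).
    light_sh: (K,) lighting coefficients in a K-term spherical-harmonics basis.
    Returns:  (N,) outgoing radiance per point under the given lighting.
    """
    return transfer @ light_sh

radiance = prt_radiance(torch.rand(2048, 9), torch.rand(9))  # 9 SH coefficients (bands l = 0..2)
```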
- Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
- Deep Portrait Lighting Enhancement with 3D Guidance [24.01582513386902]
We present a novel deep learning framework for portrait lighting enhancement based on 3D facial guidance.
Experimental results on the FFHQ dataset and in-the-wild images show that the proposed method outperforms state-of-the-art methods in terms of both quantitative metrics and visual quality.
arXiv Detail & Related papers (2021-08-04T15:49:09Z)