LightIt: Illumination Modeling and Control for Diffusion Models
- URL: http://arxiv.org/abs/2403.10615v2
- Date: Mon, 25 Mar 2024 09:42:13 GMT
- Title: LightIt: Illumination Modeling and Control for Diffusion Models
- Authors: Peter Kocsis, Julien Philip, Kalyan Sunkavalli, Matthias Nießner, Yannick Hold-Geoffroy
- Abstract summary: We introduce LightIt, a method for explicit illumination control for image generation.
Recent generative methods lack lighting control, which is crucial to numerous artistic aspects of image generation.
Our method is the first that enables the generation of images with controllable, consistent lighting.
- Score: 61.80461416451116
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We introduce LightIt, a method for explicit illumination control for image generation. Recent generative methods lack lighting control, which is crucial to numerous artistic aspects of image generation such as setting the overall mood or cinematic appearance. To overcome these limitations, we propose to condition the generation on shading and normal maps. We model the lighting with single bounce shading, which includes cast shadows. We first train a shading estimation module to generate a dataset of real-world images and shading pairs. Then, we train a control network using the estimated shading and normals as input. Our method demonstrates high-quality image generation and lighting control in numerous scenes. Additionally, we use our generated dataset to train an identity-preserving relighting model, conditioned on an image and a target shading. Our method is the first that enables the generation of images with controllable, consistent lighting and performs on par with specialized relighting state-of-the-art methods.
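The listing carries no code, but the setup the abstract describes, a control branch that injects shading and normal features into a frozen diffusion denoiser (in the spirit of ControlNet), can be sketched as follows. All module names, channel counts, and the injection point are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the conditioning described in the abstract: a
# ControlNet-style branch that injects shading + normal features into a
# frozen denoising UNet. Module and channel choices are illustrative only.
import torch
import torch.nn as nn

class LightingControlBranch(nn.Module):
    """Encodes a (shading, normal) condition into residual features."""
    def __init__(self, cond_channels: int = 4, feat_channels: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(cond_channels, feat_channels, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(feat_channels, feat_channels, 3, padding=1),
            nn.SiLU(),
        )
        # Zero-initialized projection so training starts as a no-op,
        # as in ControlNet.
        self.zero_proj = nn.Conv2d(feat_channels, feat_channels, 1)
        nn.init.zeros_(self.zero_proj.weight)
        nn.init.zeros_(self.zero_proj.bias)

    def forward(self, shading: torch.Tensor, normals: torch.Tensor) -> torch.Tensor:
        cond = torch.cat([shading, normals], dim=1)  # (B, 1+3, H, W)
        return self.zero_proj(self.encoder(cond))

# Usage: add the control residual to an intermediate UNet feature map.
branch = LightingControlBranch()
shading = torch.rand(1, 1, 64, 64)   # single-bounce shading map
normals = torch.rand(1, 3, 64, 64)   # per-pixel surface normals
unet_feat = torch.rand(1, 64, 64, 64)
conditioned = unet_feat + branch(shading, normals)
```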
Related papers
- Retinex-Diffusion: On Controlling Illumination Conditions in Diffusion Models via Retinex Theory [19.205929427075965]
We conceptualize the diffusion model as a black-box image renderer and strategically decompose its energy function in alignment with the image formation model.
It generates images with realistic illumination effects, including cast shadow, soft shadow, and inter-reflections.
arXiv Detail & Related papers (2024-07-29T03:15:07Z)
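Retinex theory models an image as the per-pixel product of reflectance and illumination, I = R · L; the entry above builds its energy decomposition around this image formation model. A minimal single-scale Retinex split (textbook Retinex, not the paper's energy-based method) looks like:

```python
# Generic single-scale Retinex decomposition, I = R * L. This is textbook
# Retinex, not the paper's energy-based formulation.
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_decompose(image: np.ndarray, sigma: float = 15.0):
    """Split a grayscale image in [0, 1] into reflectance and illumination."""
    eps = 1e-6
    illumination = gaussian_filter(image, sigma=sigma) + eps  # smooth estimate of L
    reflectance = image / illumination                        # R = I / L
    return reflectance, illumination

img = np.random.rand(128, 128)
R, L = retinex_decompose(img)
reconstructed = R * L  # recovers img up to eps
```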
- DiLightNet: Fine-grained Lighting Control for Diffusion-based Image Generation [16.080481761005203]
We present a novel method for exerting fine-grained lighting control during text-driven image generation.
Our key observation is that we only need to guide the diffusion process, hence exact radiance hints are not necessary.
We demonstrate and validate our lighting controlled diffusion model on a variety of text prompts and lighting conditions.
arXiv Detail & Related papers (2024-02-19T08:17:21Z)
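DiLightNet conditions the denoiser on coarse radiance hints rendered under the target lighting. One generic way to feed such hints to a diffusion model, shown here as an assumption rather than the paper's exact mechanism, is to concatenate them to the noisy input along the channel axis:

```python
# Hypothetical sketch: conditioning a denoiser on a coarse radiance hint by
# channel concatenation. The hint only guides the process, so it can be
# approximate, matching the entry's observation.
import torch
import torch.nn as nn

class HintConditionedDenoiser(nn.Module):
    def __init__(self, latent_ch: int = 4, hint_ch: int = 3, width: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(latent_ch + hint_ch, width, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(width, latent_ch, 3, padding=1),  # predict noise
        )

    def forward(self, noisy_latent, radiance_hint):
        x = torch.cat([noisy_latent, radiance_hint], dim=1)
        return self.net(x)

model = HintConditionedDenoiser()
eps_pred = model(torch.randn(1, 4, 32, 32), torch.rand(1, 3, 32, 32))
```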
- Designing An Illumination-Aware Network for Deep Image Relighting [69.750906769976]
We present an Illumination-Aware Network (IAN) which follows the guidance from hierarchical sampling to progressively relight a scene from a single image.
In addition, an Illumination-Aware Residual Block (IARB) is designed to approximate the physical rendering process.
Experimental results show that our proposed method produces better quantitative and qualitative relighting results than previous state-of-the-art methods.
arXiv Detail & Related papers (2022-07-21T16:21:24Z)
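The paper specifies the IARB's internals; as a hedged approximation of the stated idea, a residual block that mimics rendering with a multiplicative shading-like term plus an additive refinement, one might write:

```python
# Hypothetical residual block in the spirit of the IARB: features are
# modulated by a predicted shading-like gain and refined by an additive
# residual, loosely mirroring image = albedo * shading + residual.
import torch
import torch.nn as nn

class IlluminationAwareResidualBlock(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.gain = nn.Sequential(  # multiplicative, shading-like term
            nn.Conv2d(channels, channels, 3, padding=1), nn.Sigmoid()
        )
        self.residual = nn.Sequential(  # additive refinement
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x * self.gain(x) + self.residual(x)

block = IlluminationAwareResidualBlock()
out = block(torch.rand(1, 64, 32, 32))
```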
- Geometry-aware Single-image Full-body Human Relighting [37.381122678376805]
Single-image human relighting aims to relight a target human under new lighting conditions by decomposing the input image into albedo, shape and lighting.
Previous methods suffer from both the entanglement between albedo and lighting and the lack of hard shadows.
Our framework is able to generate photo-realistic high-frequency shadows such as cast shadows under challenging lighting conditions.
arXiv Detail & Related papers (2022-07-11T10:21:02Z)
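An albedo/shape/lighting decomposition implies a simple Lambertian re-rendering step once the factors are known. A minimal sketch follows (direct lighting only; the hard cast shadows this paper targets would additionally require a per-pixel visibility term):

```python
# Minimal Lambertian relighting from an albedo/normal decomposition.
# Cast shadows would need a per-pixel visibility term on top of this.
import numpy as np

def relight(albedo: np.ndarray, normals: np.ndarray, light_dir: np.ndarray):
    """albedo: (H, W, 3), normals: (H, W, 3) unit vectors, light_dir: (3,)."""
    l = light_dir / np.linalg.norm(light_dir)
    shading = np.clip(normals @ l, 0.0, None)  # max(0, n . l)
    return albedo * shading[..., None]         # relit image

H, W = 64, 64
albedo = np.random.rand(H, W, 3)
normals = np.zeros((H, W, 3)); normals[..., 2] = 1.0  # flat, facing camera
relit = relight(albedo, normals, np.array([0.3, 0.5, 0.8]))
```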
- Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
- Neural Radiance Fields for Outdoor Scene Relighting [70.97747511934705]
We present NeRF-OSR, the first approach for outdoor scene relighting based on neural radiance fields.
In contrast to the prior art, our technique allows simultaneous editing of both scene illumination and camera viewpoint.
It also includes a dedicated network for shadow reproduction, which is crucial for high-quality outdoor scene relighting.
arXiv Detail & Related papers (2021-12-09T18:59:56Z)
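NeRF-OSR models outdoor illumination with second-order spherical harmonics, and evaluating SH irradiance at a surface normal follows the standard formulation of Ramamoorthi and Hanrahan (2001). A sketch of that shading step, independent of the paper's code:

```python
# Second-order spherical-harmonics irradiance (Ramamoorthi & Hanrahan 2001),
# the standard shading step for SH lighting models such as the one NeRF-OSR
# uses. `L` holds the 9 SH coefficients of the environment (one channel).
import numpy as np

C1, C2, C3, C4, C5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708

def sh_irradiance(n: np.ndarray, L: np.ndarray) -> float:
    """n: unit normal (3,); L: SH coefficients ordered
    [L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22]."""
    x, y, z = n
    return (C4 * L[0]
            + 2.0 * C2 * (L[3] * x + L[1] * y + L[2] * z)
            + C1 * L[8] * (x * x - y * y)
            + C3 * L[6] * z * z
            - C5 * L[6]
            + 2.0 * C1 * (L[4] * x * y + L[7] * x * z + L[5] * y * z))

n = np.array([0.0, 0.0, 1.0])
Lc = np.zeros(9); Lc[0] = 1.0          # purely ambient sky
E = sh_irradiance(n, Lc)               # equals C4; color = albedo * E
```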
- SIDNet: Learning Shading-aware Illumination Descriptor for Image Harmonization [10.655037947250516]
Image harmonization aims at adjusting the appearance of the foreground to make it more compatible with the background.
We decompose the image harmonization task into two sub-problems: 1) illumination estimation of the background image and 2) re-rendering of foreground objects under background illumination.
arXiv Detail & Related papers (2021-12-02T15:18:29Z)
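The two sub-problems map naturally onto a two-stage pipeline: estimate an illumination descriptor from the background, then re-render the foreground under it before compositing. A hypothetical skeleton (all module designs, dimensions, and the gain-based renderer are assumptions):

```python
# Hypothetical two-stage harmonization skeleton matching the entry's
# decomposition: (1) estimate an illumination descriptor from the
# background, (2) re-render the foreground under it, then composite.
import torch
import torch.nn as nn

class IlluminationEstimator(nn.Module):
    def __init__(self, descriptor_dim: int = 27):  # e.g. 9 SH coeffs x RGB
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, descriptor_dim),
        )
    def forward(self, background):
        return self.net(background)

class ForegroundRenderer(nn.Module):
    def __init__(self, descriptor_dim: int = 27):
        super().__init__()
        self.film = nn.Linear(descriptor_dim, 3)  # per-channel gain
    def forward(self, foreground, descriptor):
        gain = torch.sigmoid(self.film(descriptor))[:, :, None, None]
        return foreground * gain

def harmonize(bg, fg, mask, estimator, renderer):
    desc = estimator(bg)             # 1) background illumination
    relit_fg = renderer(fg, desc)    # 2) re-render foreground
    return bg * (1 - mask) + relit_fg * mask

bg = torch.rand(1, 3, 64, 64); fg = torch.rand(1, 3, 64, 64)
mask = (torch.rand(1, 1, 64, 64) > 0.5).float()
out = harmonize(bg, fg, mask, IlluminationEstimator(), ForegroundRenderer())
```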
- Self-supervised Outdoor Scene Relighting [92.20785788740407]
We propose a self-supervised approach for relighting.
Our approach is trained only on corpora of images collected from the internet, without any user supervision.
Results show the ability of our technique to produce photo-realistic and physically plausible results that generalize to unseen scenes.
arXiv Detail & Related papers (2021-07-07T09:46:19Z)
- Relighting Images in the Wild with a Self-Supervised Siamese Auto-Encoder [62.580345486483886]
We propose a self-supervised method for image relighting of single view images in the wild.
The method is based on an auto-encoder which deconstructs an image into two separate encodings.
We train our model on large-scale datasets such as YouTube-8M and CelebA.
arXiv Detail & Related papers (2020-12-11T16:08:50Z)
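The mechanism described above, an auto-encoder that splits an image into two codes so that relighting becomes a latent swap, can be sketched as follows; the assignment of the two codes to content and lighting, and all dimensions, are assumptions:

```python
# Hypothetical sketch of a two-code auto-encoder: encode each image into a
# content code and a lighting code, then relight by swapping lighting codes.
import torch
import torch.nn as nn

class TwoCodeAutoEncoder(nn.Module):
    def __init__(self, content_dim: int = 128, light_dim: int = 16):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_content = nn.Linear(32, content_dim)
        self.to_light = nn.Linear(32, light_dim)
        self.decoder = nn.Sequential(
            nn.Linear(content_dim + light_dim, 3 * 32 * 32),
            nn.Sigmoid(),
        )

    def encode(self, x):
        h = self.backbone(x)
        return self.to_content(h), self.to_light(h)

    def decode(self, content, light):
        out = self.decoder(torch.cat([content, light], dim=1))
        return out.view(-1, 3, 32, 32)

model = TwoCodeAutoEncoder()
a, b = torch.rand(1, 3, 32, 32), torch.rand(1, 3, 32, 32)
content_a, _ = model.encode(a)
_, light_b = model.encode(b)
relit = model.decode(content_a, light_b)  # image a under b's lighting
```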
This list is automatically generated from the titles and abstracts of the papers on this site.