DreamLight: Towards Harmonious and Consistent Image Relighting
- URL: http://arxiv.org/abs/2506.14549v1
- Date: Tue, 17 Jun 2025 14:05:24 GMT
- Title: DreamLight: Towards Harmonious and Consistent Image Relighting
- Authors: Yong Liu, Wenpeng Xiao, Qianqian Wang, Junlin Chen, Shiyin Wang, Yitong Wang, Xinglong Wu, Yansong Tang
- Abstract summary: We introduce a model named DreamLight for universal image relighting. It can seamlessly composite subjects into a new background while maintaining aesthetic uniformity in terms of lighting and color tone.
- Score: 41.90032795389507
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a model named DreamLight for universal image relighting in this work, which can seamlessly composite subjects into a new background while maintaining aesthetic uniformity in terms of lighting and color tone. The background can be specified by natural images (image-based relighting) or generated from unlimited text prompts (text-based relighting). Existing studies primarily focus on image-based relighting, with scant exploration of text-based scenarios. Some works employ intricate disentanglement pipelines that rely on environment maps to provide the relevant information, which incurs the expensive data cost required for intrinsic decomposition and light-source estimation. Other methods treat this task as an image translation problem and perform pixel-level transformation with an autoencoder architecture. While these methods achieve decent harmonization effects, they struggle to generate realistic and natural light interaction effects between the foreground and background. To alleviate these challenges, we reorganize the input data into a unified format and leverage the semantic prior provided by the pretrained diffusion model to facilitate the generation of natural results. Moreover, we propose a Position-Guided Light Adapter (PGLA) that condenses light information from different directions in the background into designed light query embeddings and modulates the foreground with direction-biased masked attention. In addition, we present a post-processing module named Spectral Foreground Fixer (SFF) that adaptively recombines the different frequency components of the subject and the relighted background, which helps enhance the consistency of the foreground appearance. Extensive comparisons and a user study demonstrate that DreamLight achieves remarkable relighting performance.
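The abstract describes PGLA and SFF only at a high level. The sketch below illustrates one plausible reading of the two ideas: background tokens pooled into per-direction light query embeddings with direction-biased masked attention for PGLA, and a blur-based frequency split and recombination for SFF. All names, shapes, and design choices here are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the PGLA / SFF ideas described in the abstract.
# Everything here (names, shapes, direction buckets, blur-based frequency split)
# is an assumption for illustration, not DreamLight's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PositionGuidedLightAdapter(nn.Module):
    """Condense background light into per-direction query embeddings, then
    modulate foreground tokens with direction-biased masked attention."""

    def __init__(self, dim: int, num_directions: int = 4, bias: float = 4.0):
        super().__init__()
        self.num_directions, self.bias = num_directions, bias
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, fg_tokens, bg_tokens, fg_dir_ids, bg_dir_ids):
        # fg_tokens: (B, Nf, C) foreground features; bg_tokens: (B, Nb, C)
        # *_dir_ids: (B, N) integer direction bucket per token (e.g. 0..3 for
        # left/right/top/bottom, derived from token positions)
        B, Nf, C = fg_tokens.shape
        # 1) condense background tokens into one light embedding per direction
        light = fg_tokens.new_zeros(B, self.num_directions, C)
        for d in range(self.num_directions):
            m = (bg_dir_ids == d).unsqueeze(-1).float()              # (B, Nb, 1)
            light[:, d] = (bg_tokens * m).sum(1) / m.sum(1).clamp(min=1.0)
        # 2) direction-biased masked attention: a foreground token attends mostly
        # to the light embedding of its own direction bucket
        q, k, v = self.to_q(fg_tokens), self.to_k(light), self.to_v(light)
        logits = q @ k.transpose(1, 2) / C ** 0.5                    # (B, Nf, D)
        same_dir = fg_dir_ids.unsqueeze(-1) == torch.arange(
            self.num_directions, device=fg_tokens.device)
        logits = logits - (~same_dir).float() * self.bias            # soft mask
        return fg_tokens + self.out(logits.softmax(-1) @ v)


def spectral_foreground_fix(relit, original, mask, k: int = 15):
    """SFF idea (sketch): keep low-frequency lighting/color from the relit image
    and high-frequency detail from the original subject inside the mask.
    relit/original: (B, 3, H, W); mask: (B, 1, H, W). A box blur stands in for a
    proper frequency decomposition."""
    blur = lambda x: F.avg_pool2d(x, k, stride=1, padding=k // 2)
    fixed = blur(relit) + (original - blur(original))
    return mask * fixed + (1.0 - mask) * relit
```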
Related papers
- Light as Deception: GPT-driven Natural Relighting Against Vision-Language Pre-training Models [56.84206059390887]
We propose LightD, a novel framework that generates natural adversarial samples for vision-and-language pretraining models. LightD expands the optimization space while ensuring perturbations align with scene semantics.
arXiv Detail & Related papers (2025-05-30T05:30:02Z) - LightLab: Controlling Light Sources in Images with Diffusion Models [49.83835236202516]
We present a diffusion-based method for fine-grained, parametric control over light sources in an image. We leverage the linearity of light to synthesize image pairs depicting controlled light changes of either a target light source or ambient illumination. We show that our method achieves compelling light editing results and outperforms existing methods based on user preference.
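As a rough illustration of the light-linearity idea mentioned above, a minimal sketch (variable names and defaults are illustrative, not from the paper): a photo pair with the target light on and off, in linear RGB, isolates that light's contribution, which can then be rescaled or recolored.

```python
import numpy as np

def relight_from_pair(img_on, img_off, intensity=0.5, color=(1.0, 0.9, 0.8)):
    """img_on/img_off: HxWx3 linear-RGB photos with the target light on/off.
    By linearity, their difference is the light's contribution, which can be
    rescaled (intensity) or tinted (color) before recombining."""
    light = np.clip(img_on - img_off, 0.0, None)
    return np.clip(img_off + intensity * light * np.asarray(color), 0.0, 1.0)
```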
arXiv Detail & Related papers (2025-05-14T17:57:27Z) - TSCnet: A Text-driven Semantic-level Controllable Framework for Customized Low-Light Image Enhancement [30.498816319802412]
We propose a new light enhancement task and a new framework that provides customized lighting control through prompt-driven, semantic-level, and quantitative brightness adjustments. Experimental results on benchmark datasets demonstrate our framework's superior performance at increasing visibility, maintaining natural color balance, and amplifying fine details without creating artifacts.
arXiv Detail & Related papers (2025-03-11T08:30:50Z) - Text2Relight: Creative Portrait Relighting with Text Guidance [26.75526739002697]
We present a lighting-aware image editing pipeline that, given a portrait image and a text prompt, performs single image relighting. Our model modifies the lighting and color of both the foreground and background to align with the provided text description.
arXiv Detail & Related papers (2024-12-18T11:12:10Z) - DifFRelight: Diffusion-Based Facial Performance Relighting [12.909429637057343]
We present a novel framework for free-viewpoint facial performance relighting using diffusion-based image-to-image translation.
We train a diffusion model for precise lighting control, enabling high-fidelity relit facial images from flat-lit inputs.
The model accurately reproduces complex lighting effects like eye reflections, subsurface scattering, self-shadowing, and translucency.
arXiv Detail & Related papers (2024-10-10T17:56:44Z) - Relightful Harmonization: Lighting-aware Portrait Background Replacement [23.19641174787912]
We introduce Relightful Harmonization, a lighting-aware diffusion model designed to seamlessly harmonize sophisticated lighting effects for the foreground portrait using any background image.
Our approach unfolds in three stages. First, we introduce a lighting representation module that allows our diffusion model to encode lighting information from the target image background.
Second, we introduce an alignment network that aligns lighting features learned from the image background with lighting features learned from panorama environment maps.
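A minimal sketch of the feature-alignment idea above, assuming 512-dimensional lighting features and an L2 objective (both assumptions; the paper's architecture and loss may differ):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# hypothetical: map background-derived lighting features into the space of
# features learned from panorama environment maps
align_net = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))

def alignment_loss(bg_feat: torch.Tensor, env_feat: torch.Tensor) -> torch.Tensor:
    """bg_feat: (B, 512) lighting features from the image background;
    env_feat: (B, 512) lighting features from the paired environment map."""
    return F.mse_loss(align_net(bg_feat), env_feat)
```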
arXiv Detail & Related papers (2023-12-11T23:20:31Z) - Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z) - Designing An Illumination-Aware Network for Deep Image Relighting [69.750906769976]
We present an Illumination-Aware Network (IAN) which follows the guidance from hierarchical sampling to progressively relight a scene from a single image.
In addition, an Illumination-Aware Residual Block (IARB) is designed to approximate the physical rendering process.
Experimental results show that our proposed method produces better quantitative and qualitative relighting results than previous state-of-the-art methods.
arXiv Detail & Related papers (2022-07-21T16:21:24Z) - SIDNet: Learning Shading-aware Illumination Descriptor for Image Harmonization [10.655037947250516]
Image harmonization aims at adjusting the appearance of the foreground to make it more compatible with the background.
We decompose the image harmonization task into two sub-problems: 1) illumination estimation of the background image and 2) re-rendering of foreground objects under background illumination.
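A schematic sketch of that two-stage decomposition (module names and shapes are assumptions, not SIDNet's actual architecture):

```python
import torch
import torch.nn as nn

class TwoStageHarmonizer(nn.Module):
    """Stage 1: estimate an illumination descriptor from the background.
    Stage 2: re-render the foreground conditioned on that descriptor."""

    def __init__(self, illum_dim: int = 64):
        super().__init__()
        self.illum_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, illum_dim, 3, stride=2, padding=1),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.renderer = nn.Sequential(
            nn.Conv2d(3 + illum_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, fg, bg, mask):
        # fg, bg: (B, 3, H, W); mask: (B, 1, H, W) marking the foreground region
        illum = self.illum_encoder(bg)                           # (B, illum_dim)
        illum_map = illum[:, :, None, None].expand(-1, -1, *fg.shape[-2:])
        fg_relit = self.renderer(torch.cat([fg, illum_map], dim=1))
        return mask * fg_relit + (1.0 - mask) * bg               # composite
```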
arXiv Detail & Related papers (2021-12-02T15:18:29Z) - Light Stage Super-Resolution: Continuous High-Frequency Relighting [58.09243542908402]
We propose a learning-based solution for the "super-resolution" of scans of human faces taken from a light stage.
Our method aggregates the captured images corresponding to neighboring lights in the stage, and uses a neural network to synthesize a rendering of the face.
Our learned model is able to produce renderings for arbitrary light directions that exhibit realistic shadows and specular highlights.
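To make the aggregation step concrete, a naive sketch (illustrative names; the weighted average only stands in for the learned synthesis network):

```python
import numpy as np

def aggregate_neighbor_lights(olat_images, light_dirs, query_dir, k=4):
    """olat_images: (N, H, W, 3) one-light-at-a-time captures;
    light_dirs: (N, 3) unit light directions; query_dir: (3,) desired direction."""
    q = np.asarray(query_dir, dtype=float)
    sims = light_dirs @ (q / np.linalg.norm(q))          # cosine similarity
    idx = np.argsort(-sims)[:k]                          # k nearest stage lights
    w = np.clip(sims[idx], 0.0, None)
    w = w / (w.sum() + 1e-8)
    # the paper feeds the neighboring captures to a neural network; a weighted
    # average is only a crude stand-in for that synthesis step
    return np.tensordot(w, olat_images[idx], axes=1)
```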
arXiv Detail & Related papers (2020-10-17T23:40:43Z)