TransLight: Image-Guided Customized Lighting Control with Generative Decoupling
- URL: http://arxiv.org/abs/2508.14814v1
- Date: Wed, 20 Aug 2025 16:05:12 GMT
- Title: TransLight: Image-Guided Customized Lighting Control with Generative Decoupling
- Authors: Zongming Li, Lianghui Zhu, Haocheng Shen, Longjin Ran, Wenyu Liu, Xinggang Wang
- Abstract summary: We present TransLight, a novel framework that enables high-fidelity and high-freedom transfer of light effects. We first present Generative Decoupling, where two fine-tuned diffusion models are used to accurately separate image content and light effects. We then employ IC-Light as the generative model and train our model with our triplets, injecting the reference lighting image as an additional conditioning signal.
- Score: 31.587782425861363
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing illumination-editing approaches fail to simultaneously provide customized control of light effects and preserve content integrity. This makes them less effective for practical lighting stylization requirements, especially in the challenging task of transferring complex light effects from a reference image to a user-specified target image. To address this problem, we propose TransLight, a novel framework that enables high-fidelity and high-freedom transfer of light effects. Extracting the light effect from the reference image is the most critical and challenging step in our method. The difficulty lies in the complex geometric structure features embedded in light effects that are highly coupled with content in real-world scenarios. To achieve this, we first present Generative Decoupling, where two fine-tuned diffusion models are used to accurately separate image content and light effects, generating a newly curated, million-scale dataset of image-content-light triplets. Then, we employ IC-Light as the generative model and train our model with our triplets, injecting the reference lighting image as an additional conditioning signal. The resulting TransLight model enables customized and natural transfer of diverse light effects. Notably, by thoroughly disentangling light effects from reference images, our generative decoupling strategy endows TransLight with highly flexible illumination control. Experimental results establish TransLight as the first method to successfully transfer light effects across disparate images, delivering more customized illumination control than existing techniques and charting new directions for research in illumination harmonization and editing.
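The abstract's pipeline (decouple an image into content and light, then relight a target with a reference's light map) can be illustrated with a toy stand-in. The paper's Generative Decoupling uses two fine-tuned diffusion models; the sketch below approximates that split with a classic intrinsic-image model, I = content × light, where the light map is a smoothed luminance estimate. All function names here are illustrative, not from the paper.

```python
import numpy as np

# Toy stand-in for TransLight's pipeline. The real method performs
# "Generative Decoupling" with two fine-tuned diffusion models; here we
# approximate the split with an intrinsic-image model I = content * light,
# where "light" is a box-blurred luminance map. Illustrative only.

def decouple(image: np.ndarray, kernel: int = 15) -> tuple[np.ndarray, np.ndarray]:
    """Split an HxWx3 image in [0, 1] into (content, light) maps."""
    lum = image.mean(axis=2, keepdims=True)  # HxWx1 luminance
    pad = kernel // 2
    padded = np.pad(lum, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    light = np.zeros_like(lum)
    h, w = lum.shape[:2]
    for i in range(h):
        for j in range(w):
            light[i, j] = padded[i:i + kernel, j:j + kernel].mean()
    light = np.clip(light, 1e-3, 1.0)               # avoid division by zero
    content = np.clip(image / light, 0.0, 1.0)      # I = content * light
    return content, light

def transfer_light(target: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Relight `target` with the light map decoupled from `reference`."""
    content, _ = decouple(target)
    _, ref_light = decouple(reference)
    return np.clip(content * ref_light, 0.0, 1.0)
```

In TransLight itself the "light map" analogue is a generated reference-lighting image injected as a conditioning signal into IC-Light, rather than a multiplicative map; this sketch only conveys the content/light factorization idea.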
Related papers
- Light-X: Generative 4D Video Rendering with Camera and Illumination Control [52.87059646145144]
Light-X is a video generation framework that enables controllable rendering from monocular videos with both viewpoint and illumination control. To address the lack of paired multi-view and multi-illumination videos, we introduce Light-Syn, a degradation-based pipeline with inverse-mapping.
arXiv Detail & Related papers (2025-12-04T18:59:57Z)
- LightQANet: Quantized and Adaptive Feature Learning for Low-Light Image Enhancement [65.06462316546806]
Low-light image enhancement aims to improve illumination while preserving high-quality color and texture. Existing methods often fail to extract reliable feature representations due to severely degraded pixel-level information under low-light conditions. We propose LightQANet, a novel framework that introduces quantized and adaptive feature learning for low-light enhancement.
arXiv Detail & Related papers (2025-10-16T14:54:42Z)
- PractiLight: Practical Light Control Using Foundational Diffusion Models [78.75949075070595]
PractiLight is a practical approach to light control in generated images. Our key insight is that lighting relationships in an image are similar in nature to token interaction in self-attention layers. We demonstrate state-of-the-art performance in terms of quality and control with proven parameter and data efficiency.
arXiv Detail & Related papers (2025-09-01T23:38:40Z)
- SAIGFormer: A Spatially-Adaptive Illumination-Guided Network for Low-Light Image Enhancement [58.79901582809091]
Recent Transformer-based low-light enhancement methods have made promising progress in recovering global illumination. We present a Spatially-Adaptive Illumination-Guided Transformer framework that enables accurate illumination restoration.
arXiv Detail & Related papers (2025-07-21T11:38:56Z)
- DreamLight: Towards Harmonious and Consistent Image Relighting [41.90032795389507]
We introduce a model named DreamLight for universal image relighting. It can seamlessly composite subjects into a new background while maintaining aesthetic uniformity in terms of lighting and color tone.
arXiv Detail & Related papers (2025-06-17T14:05:24Z)
- LightLab: Controlling Light Sources in Images with Diffusion Models [49.83835236202516]
We present a diffusion-based method for fine-grained, parametric control over light sources in an image. We leverage the linearity of light to synthesize image pairs depicting controlled light changes of either a target light source or ambient illumination. We show how our method can achieve compelling light editing results, and outperforms existing methods based on user preference.
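The "linearity of light" that LightLab leverages can be sketched directly: an image lit by ambient light plus a target source decomposes as I(alpha) = I_ambient + alpha * I_source, so a photo pair with the source on and off isolates the source's contribution. The function names below are illustrative, not LightLab's API.

```python
import numpy as np

# Linearity of light transport (the principle LightLab builds its training
# data on): I(alpha) = I_ambient + alpha * I_source in linear RGB. A pair of
# photos with the target source on/off isolates I_source by subtraction.
# Function names are illustrative.

def isolate_source(img_on: np.ndarray, img_off: np.ndarray) -> np.ndarray:
    """Contribution of the target light source alone (linear RGB in [0, 1])."""
    return np.clip(img_on - img_off, 0.0, None)

def relight(img_off: np.ndarray, source: np.ndarray, alpha: float) -> np.ndarray:
    """Dim or boost the source by `alpha` while keeping ambient light fixed."""
    return np.clip(img_off + alpha * source, 0.0, 1.0)
```

LightLab uses such synthesized pairs as supervision for a diffusion model; the arithmetic itself only holds in linear (not gamma-encoded) color space.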
arXiv Detail & Related papers (2025-05-14T17:57:27Z)
- Light-A-Video: Training-free Video Relighting via Progressive Light Fusion [52.420894727186216]
Light-A-Video is a training-free approach to achieve temporally smooth video relighting. Adapted from image relighting models, Light-A-Video introduces two key techniques to enhance lighting consistency.
arXiv Detail & Related papers (2025-02-12T17:24:19Z)
- DifFRelight: Diffusion-Based Facial Performance Relighting [12.909429637057343]
We present a novel framework for free-viewpoint facial performance relighting using diffusion-based image-to-image translation.
We train a diffusion model for precise lighting control, enabling high-fidelity relit facial images from flat-lit inputs.
The model accurately reproduces complex lighting effects like eye reflections, subsurface scattering, self-shadowing, and translucency.
arXiv Detail & Related papers (2024-10-10T17:56:44Z)
- GI-GS: Global Illumination Decomposition on Gaussian Splatting for Inverse Rendering [6.820642721852439]
We present GI-GS, a novel inverse rendering framework that leverages 3D Gaussian Splatting (3DGS) and deferred shading. In our framework, we first render a G-buffer to capture the detailed geometry and material properties of the scene. With the G-buffer and previous rendering results, the indirect lighting can be calculated through lightweight path tracing.
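The deferred-shading idea GI-GS builds on (geometry and materials written to a G-buffer first, lighting computed from the buffer in a second pass) can be illustrated with a minimal Lambertian shading pass. This is a generic sketch, not the paper's renderer, which rasterizes 3D Gaussians and adds path-traced indirect light; the names here are illustrative.

```python
import numpy as np

# Minimal second pass of a deferred-shading pipeline: given a G-buffer of
# per-pixel normals and albedo (what a first geometry pass would produce),
# compute Lambertian direct lighting from a single directional light.
# Generic illustration of the technique, not GI-GS's actual renderer.

def shade_gbuffer(normals: np.ndarray, albedo: np.ndarray,
                  light_dir: np.ndarray) -> np.ndarray:
    """Direct lighting from HxWx3 normal and albedo buffers."""
    l = light_dir / np.linalg.norm(light_dir)               # unit light vector
    ndotl = np.clip((normals * l).sum(axis=2, keepdims=True), 0.0, 1.0)
    return albedo * ndotl                                    # Lambert's law
```

GI-GS's contribution is what happens after this direct pass: reusing the G-buffer and prior renders to add indirect illumination via lightweight path tracing.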
arXiv Detail & Related papers (2024-10-03T15:58:18Z)
- LightIt: Illumination Modeling and Control for Diffusion Models [61.80461416451116]
We introduce LightIt, a method for explicit illumination control for image generation.
Recent generative methods lack lighting control, which is crucial to numerous artistic aspects of image generation.
Our method is the first that enables the generation of images with controllable, consistent lighting.
arXiv Detail & Related papers (2024-03-15T18:26:33Z)
- Learning to Adapt to Light [14.919947487248653]
We propose a biologically inspired method that handles light-related image-enhancement tasks with a unified network (called LA-Net).
A new module, inspired by biological visual adaptation, achieves unified light adaptation in the low-frequency pathway.
Experiments on three tasks -- low-light enhancement, exposure correction, and tone mapping -- show that the proposed method achieves near state-of-the-art performance.
arXiv Detail & Related papers (2022-02-16T14:36:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.