LumiCtrl : Learning Illuminant Prompts for Lighting Control in Personalized Text-to-Image Models
- URL: http://arxiv.org/abs/2512.17489v1
- Date: Fri, 19 Dec 2025 11:59:47 GMT
- Title: LumiCtrl : Learning Illuminant Prompts for Lighting Control in Personalized Text-to-Image Models
- Authors: Muhammad Atif Butt, Kai Wang, Javier Vazquez-Corral, Joost Van De Weijer
- Abstract summary: We present an illuminant personalization method named LumiCtrl that learns an illuminant prompt given a single image of an object. LumiCtrl consists of three basic components: given an image of the object, our method applies physics-based illuminant augmentation along the Planckian locus to create fine-tuning variants under standard illuminants. The results show that our method achieves significantly better illuminant fidelity, aesthetic quality, and scene coherence compared to existing personalization baselines.
- Score: 29.41857508306698
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Current text-to-image (T2I) models have demonstrated remarkable progress in creative image generation, yet they still lack precise control over scene illuminants, which is a crucial factor for content designers aiming to manipulate the mood, atmosphere, and visual aesthetics of generated images. In this paper, we present an illuminant personalization method named LumiCtrl that learns an illuminant prompt given a single image of an object. LumiCtrl consists of three basic components: given an image of the object, our method applies (a) physics-based illuminant augmentation along the Planckian locus to create fine-tuning variants under standard illuminants; (b) edge-guided prompt disentanglement using a frozen ControlNet to ensure prompts focus on illumination rather than structure; and (c) a masked reconstruction loss that focuses learning on the foreground object while allowing the background to adapt contextually, enabling what we call contextual light adaptation. We qualitatively and quantitatively compare LumiCtrl against other T2I customization methods. The results show that our method achieves significantly better illuminant fidelity, aesthetic quality, and scene coherence compared to existing personalization baselines. A human preference study further confirms strong user preference for LumiCtrl outputs. The code and data will be released upon publication.
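Component (a) of the abstract, physics-based illuminant augmentation along the Planckian locus, can be sketched roughly as follows. This is a minimal illustration and not the paper's code: it assumes Tanner Helland's well-known blackbody CCT-to-RGB approximation and applies the resulting per-channel gains to pixels; the exact augmentation and choice of standard illuminants in LumiCtrl may differ.

```python
import math

def cct_to_rgb(kelvin):
    """Approximate RGB of a blackbody at the given correlated colour
    temperature, via Tanner Helland's fit (roughly 1000 K - 40000 K)."""
    t = kelvin / 100.0
    r = 255.0 if t <= 66 else 329.698727446 * (t - 60) ** -0.1332047592
    if t <= 66:
        g = 99.4708025861 * math.log(t) - 161.1195681661
    else:
        g = 288.1221695283 * (t - 60) ** -0.0755148492
    if t >= 66:
        b = 255.0
    elif t <= 19:
        b = 0.0
    else:
        b = 138.5177312231 * math.log(t - 10) - 305.0447927307
    clamp = lambda x: max(0.0, min(255.0, x))
    return clamp(r), clamp(g), clamp(b)

def illuminant_gains(kelvin):
    """Per-channel gains, normalised so the green channel is unchanged."""
    r, g, b = cct_to_rgb(kelvin)
    return r / g, 1.0, b / g

def augment(pixel, kelvin):
    """Re-render one RGB pixel (values in [0, 1]) under the illuminant."""
    gains = illuminant_gains(kelvin)
    return tuple(min(1.0, c * k) for c, k in zip(pixel, gains))

# Points along the Planckian locus, e.g. CIE A (~2856 K, tungsten) vs.
# D65 (~6504 K, daylight): warm light boosts red relative to blue.
warm = illuminant_gains(2856)
cool = illuminant_gains(6504)
```

Sweeping `kelvin` over a set of standard illuminants yields the fine-tuning variants the abstract describes, each paired with its illuminant prompt.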
Related papers
- LightSwitch: Multi-view Relighting with Material-guided Diffusion [73.5965603000002]
LightSwitch is a novel finetuned material-relighting diffusion framework. We show that our 2D relighting prediction quality exceeds previous state-of-the-art relighting priors that directly relight from images.
arXiv Detail & Related papers (2025-08-08T17:59:52Z) - SAIGFormer: A Spatially-Adaptive Illumination-Guided Network for Low-Light Image Enhancement [58.79901582809091]
Recent Transformer-based low-light enhancement methods have made promising progress in recovering global illumination. We present a Spatially-Adaptive Illumination-Guided Transformer framework that enables accurate illumination restoration.
arXiv Detail & Related papers (2025-07-21T11:38:56Z) - DreamLight: Towards Harmonious and Consistent Image Relighting [41.90032795389507]
We introduce a model named DreamLight for universal image relighting. It can seamlessly composite subjects into a new background while maintaining aesthetic uniformity in terms of lighting and color tone.
arXiv Detail & Related papers (2025-06-17T14:05:24Z) - LightLab: Controlling Light Sources in Images with Diffusion Models [49.83835236202516]
We present a diffusion-based method for fine-grained, parametric control over light sources in an image. We leverage the linearity of light to synthesize image pairs depicting controlled light changes of either a target light source or ambient illumination. We show how our method can achieve compelling light editing results, and outperforms existing methods based on user preference.
arXiv Detail & Related papers (2025-05-14T17:57:27Z) - LumiNet: Latent Intrinsics Meets Diffusion Models for Indoor Scene Relighting [26.40653597095593]
Given a source image and a target lighting image, LumiNet synthesizes a relit version of the source scene that captures the target's lighting. LumiNet processes latent representations from two different images - preserving geometry and albedo from the source while transferring lighting characteristics from the target.
arXiv Detail & Related papers (2024-11-29T18:59:11Z) - LumiSculpt: Enabling Consistent Portrait Lighting in Video Generation [87.95655555555264]
Lighting plays a pivotal role in ensuring the naturalness and aesthetic quality of video generation. LumiSculpt enables precise and consistent lighting control in T2V generation models. LumiHuman is a new dataset for portrait lighting of images and videos.
arXiv Detail & Related papers (2024-10-30T12:44:08Z) - MetaGS: A Meta-Learned Gaussian-Phong Model for Out-of-Distribution 3D Scene Relighting [63.5925701087252]
Out-of-distribution (OOD) 3D relighting requires novel view synthesis under unseen lighting conditions. We introduce MetaGS to tackle this challenge from two perspectives.
arXiv Detail & Related papers (2024-05-31T13:48:54Z) - Relightful Harmonization: Lighting-aware Portrait Background Replacement [23.19641174787912]
We introduce Relightful Harmonization, a lighting-aware diffusion model designed to seamlessly harmonize sophisticated lighting effects for the foreground portrait using any background image.
Our approach unfolds in three stages. First, we introduce a lighting representation module that allows our diffusion model to encode lighting information from the target image's background.
Second, we introduce an alignment network that aligns lighting features learned from the image background with those learned from panorama environment maps.
arXiv Detail & Related papers (2023-12-11T23:20:31Z) - Factored-NeuS: Reconstructing Surfaces, Illumination, and Materials of Possibly Glossy Objects [58.25772313290338]
We develop a method that recovers the surface, materials, and illumination of a scene from its posed multi-view images. It does not require any additional data and can handle glossy objects or bright lighting.
arXiv Detail & Related papers (2023-05-29T07:44:19Z) - Designing An Illumination-Aware Network for Deep Image Relighting [69.750906769976]
We present an Illumination-Aware Network (IAN) which follows the guidance from hierarchical sampling to progressively relight a scene from a single image.
In addition, an Illumination-Aware Residual Block (IARB) is designed to approximate the physical rendering process.
Experimental results show that our proposed method produces better quantitative and qualitative relighting results than previous state-of-the-art methods.
arXiv Detail & Related papers (2022-07-21T16:21:24Z) - Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.