PI-Light: Physics-Inspired Diffusion for Full-Image Relighting
- URL: http://arxiv.org/abs/2601.22135v1
- Date: Thu, 29 Jan 2026 18:55:36 GMT
- Title: PI-Light: Physics-Inspired Diffusion for Full-Image Relighting
- Authors: Zhexin Liang, Zhaoxi Chen, Yongwei Chen, Tianyi Wei, Tengfei Wang, Xingang Pan
- Abstract summary: We introduce Physics-Inspired diffusion for full-image reLight ($π$-Light, or PI-Light), a two-stage framework that leverages physics-inspired diffusion models. Our design incorporates (i) batch-aware attention, (ii) a physics-guided neural rendering module that enforces physically plausible light transport, and (iii) physics-inspired losses that regularize training dynamics toward a physically meaningful landscape. Experiments demonstrate that $π$-Light synthesizes specular highlights and diffuse reflections across a wide variety of materials, achieving superior generalization to real-world scenes compared with prior approaches.
- Score: 26.42056487076843
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Full-image relighting remains a challenging problem due to the difficulty of collecting large-scale structured paired data, the difficulty of maintaining physical plausibility, and the limited generalizability imposed by data-driven priors. Existing attempts to bridge the synthetic-to-real gap for full-scene relighting remain suboptimal. To tackle these challenges, we introduce Physics-Inspired diffusion for full-image reLight ($π$-Light, or PI-Light), a two-stage framework that leverages physics-inspired diffusion models. Our design incorporates (i) batch-aware attention, which improves the consistency of intrinsic predictions across a collection of images, (ii) a physics-guided neural rendering module that enforces physically plausible light transport, (iii) physics-inspired losses that regularize training dynamics toward a physically meaningful landscape, thereby enhancing generalizability to real-world image editing, and (iv) a carefully curated dataset of diverse objects and scenes captured under controlled lighting conditions. Together, these components enable efficient finetuning of pretrained diffusion models while also providing a solid benchmark for downstream evaluation. Experiments demonstrate that $π$-Light synthesizes specular highlights and diffuse reflections across a wide variety of materials, achieving superior generalization to real-world scenes compared with prior approaches.
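The abstract names its components without detailing them; as an illustration only, below is a minimal, hypothetical sketch (in PyTorch) of the kind of physics-inspired loss it describes. It assumes the first stage predicts per-pixel albedo, shading, and specular maps and that a simple Lambertian-plus-specular composition stands in for light transport; all function names, tensor shapes, weights, and the exact formulation are assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a physics-inspired relighting loss; not the authors' code.
import torch
import torch.nn.functional as F

def physics_inspired_loss(albedo, shading, specular, target,
                          w_render=1.0, w_nonneg=0.1):
    """Regularize a relighting model toward a physically meaningful landscape.

    albedo, shading, specular, target: (B, 3, H, W) tensors in [0, 1].
    """
    # Assumed image formation: Lambertian diffuse term plus an additive specular term.
    rendered = albedo * shading + specular

    # Reconstruction term: the composed image should match the relit target.
    render_loss = F.l1_loss(rendered, target)

    # Plausibility term: penalize negative shading or specular energy.
    nonneg_loss = F.relu(-shading).mean() + F.relu(-specular).mean()

    return w_render * render_loss + w_nonneg * nonneg_loss
```

A term of this form would be added to the usual diffusion training objective, so that the denoiser's intrinsic predictions are pushed toward combinations that re-render the observed image under the assumed light-transport model.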
Related papers
- Learning to Remove Lens Flare in Event Camera [56.9171469873838]
We present E-Deflare, the first framework for removing lens flare from event camera data. We first establish the theoretical foundation by deriving a physics-grounded forward model of the non-linear suppression mechanism. Empowered by this benchmark, we design E-DeflareNet, which achieves state-of-the-art restoration performance.
arXiv Detail & Related papers (2025-12-09T18:59:57Z) - Edit2Perceive: Image Editing Diffusion Models Are Strong Dense Perceivers [55.15722080205737]
Edit2Perceive is a unified diffusion framework that adapts editing models for depth, normal, and matting. Our single-step deterministic inference yields faster runtime while training on relatively small datasets.
arXiv Detail & Related papers (2025-11-24T01:13:51Z) - MaterialRefGS: Reflective Gaussian Splatting with Multi-view Consistent Material Inference [83.38607296779423]
We show that multi-view consistent material inference with more physically-based environment modeling is key to learning accurate reflections with Gaussian Splatting. Our method faithfully recovers both illumination and geometry, achieving state-of-the-art rendering quality in novel view synthesis.
arXiv Detail & Related papers (2025-10-13T13:29:20Z) - PractiLight: Practical Light Control Using Foundational Diffusion Models [78.75949075070595]
PractiLight is a practical approach to light control in generated images. Our key insight is that lighting relationships in an image are similar in nature to token interaction in self-attention layers. We demonstrate state-of-the-art performance in terms of quality and control with proven parameter and data efficiency.
arXiv Detail & Related papers (2025-09-01T23:38:40Z) - UniRelight: Learning Joint Decomposition and Synthesis for Video Relighting [85.27994475113056]
We introduce a general-purpose approach that jointly estimates albedo and synthesizes relit outputs in a single pass. Our model demonstrates strong generalization across diverse domains and surpasses previous methods in both visual fidelity and temporal consistency.
arXiv Detail & Related papers (2025-06-18T17:56:45Z) - MV-CoLight: Efficient Object Compositing with Consistent Lighting and Shadow Generation [19.46962637673285]
MV-CoLight is a framework for illumination-consistent object compositing in 2D and 3D scenes. We employ a Hilbert curve-based mapping to align 2D image inputs with 3D Gaussian scene representations seamlessly. Experiments demonstrate state-of-the-art harmonized results across standard benchmarks and our dataset.
arXiv Detail & Related papers (2025-05-27T17:53:02Z) - Neural LightRig: Unlocking Accurate Object Normal and Material Estimation with Multi-Light Diffusion [45.81230812844384]
We present a novel framework that boosts intrinsic estimation by leveraging auxiliary multi-lighting conditions from 2D diffusion priors. We train a large G-buffer model with a U-Net backbone to accurately predict surface normals and materials.
arXiv Detail & Related papers (2024-12-12T18:58:09Z) - DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable rendering.
In this work, we propose DIB-R++, a hybrid differentiable renderer which supports these effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIB-R++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z)