LightSwitch: Multi-view Relighting with Material-guided Diffusion
- URL: http://arxiv.org/abs/2508.06494v1
- Date: Fri, 08 Aug 2025 17:59:52 GMT
- Title: LightSwitch: Multi-view Relighting with Material-guided Diffusion
- Authors: Yehonathan Litman, Fernando De la Torre, Shubham Tulsiani
- Abstract summary: LightSwitch is a novel finetuned material-relighting diffusion framework. We show that our 2D relighting prediction quality exceeds previous state-of-the-art relighting priors that directly relight from images.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent approaches for 3D relighting have shown promise in integrating 2D image relighting generative priors to alter the appearance of a 3D representation while preserving the underlying structure. Nevertheless, generative priors used for 2D relighting that directly relight from an input image either do not take advantage of intrinsic properties of the subject that can be inferred, or cannot consider multi-view data at scale, leading to subpar relighting. In this paper, we propose LightSwitch, a novel finetuned material-relighting diffusion framework that efficiently relights an arbitrary number of input images to a target lighting condition while incorporating cues from inferred intrinsic properties. By using multi-view and material information cues together with a scalable denoising scheme, our method consistently and efficiently relights dense multi-view data of objects with diverse material compositions. We show that our 2D relighting prediction quality exceeds that of previous state-of-the-art relighting priors that directly relight from images. We further demonstrate that LightSwitch matches or outperforms state-of-the-art diffusion inverse rendering methods in relighting synthetic and real objects in as little as 2 minutes.
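The abstract does not include code, but the conditioning it describes (noisy latents plus inferred material buffers plus a target-lighting embedding, denoised jointly across views) can be sketched as below. Everything here, from module names to the toy noise schedule, is an illustrative assumption rather than the paper's implementation.

```python
# Hypothetical sketch of material-guided multi-view relighting diffusion.
import torch
import torch.nn as nn

class RelightDenoiser(nn.Module):
    """Toy stand-in for a finetuned relighting diffusion backbone.
    Conditions each view's noisy latent on inferred material buffers
    (e.g. albedo + normals) and a target-lighting embedding."""
    def __init__(self, latent_ch=4, material_ch=6, light_dim=32):
        super().__init__()
        self.light_proj = nn.Linear(light_dim, 16)
        self.net = nn.Sequential(
            nn.Conv2d(latent_ch + material_ch + 16, 64, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(64, latent_ch, 3, padding=1),
        )

    def forward(self, z, materials, light_emb):
        l = self.light_proj(light_emb)          # (B, 16)
        l = l[:, :, None, None].expand(-1, -1, z.shape[2], z.shape[3])
        return self.net(torch.cat([z, materials, l], dim=1))

@torch.no_grad()
def relight_views(denoiser, materials, light_emb, steps=50):
    """DDIM-like loop over all views in one batch, so cross-view
    consistency comes from shared conditioning (a simplification)."""
    n, _, h, w = materials.shape
    z = torch.randn(n, 4, h, w)                 # start from noise
    for t in reversed(range(steps)):
        eps = denoiser(z, materials, light_emb.expand(n, -1))
        alpha = 1.0 - t / steps                 # toy schedule
        z = z - (1 - alpha) * eps               # crude update rule
    return z

denoiser = RelightDenoiser()
mats = torch.randn(8, 6, 32, 32)                # 8 views of material buffers
light = torch.randn(1, 32)                      # target lighting embedding
relit = relight_views(denoiser, mats, light)
print(relit.shape)                              # torch.Size([8, 4, 32, 32])
```

In a real latent-diffusion pipeline the output latents would then be decoded by the model's VAE and fused back into the 3D representation.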
Related papers
- Training-Free Multi-View Extension of IC-Light for Textual Position-Aware Scene Relighting [12.481640901722786]
We introduce GS-Light, a pipeline for text-guided relighting of 3D scenes represented via Gaussian Splatting (3DGS). GS-Light implements a training-free extension of a single-input diffusion model to handle multi-view inputs. We evaluate GS-Light on both indoor and outdoor scenes, comparing it to state-of-the-art baselines.
arXiv Detail & Related papers (2025-11-17T18:37:41Z)
- RelightMaster: Precise Video Relighting with Multi-plane Light Images [59.56389629981934]
RelightMaster is a novel framework for accurate and controllable video relighting. It generates physically plausible lighting and shadows and preserves original scene content.
arXiv Detail & Related papers (2025-11-09T08:12:09Z)
- SViM3D: Stable Video Material Diffusion for Single Image 3D Generation [48.986972061812004]
Video diffusion models have been successfully used to reconstruct 3D objects from a single image efficiently. We extend a latent video diffusion model to output spatially varying PBR parameters and surface normals jointly with each generated view based on explicit camera control. This unique setup allows for relighting and generating a 3D asset using our model as a neural prior.
arXiv Detail & Related papers (2025-10-09T14:29:47Z)
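Once a model predicts per-pixel PBR buffers such as albedo and normals, relighting reduces to re-shading them under a new light. A minimal diffuse-only example (standard Lambertian shading, not SViM3D's renderer):

```python
# Illustrative relighting from predicted per-pixel PBR buffers.
import torch
import torch.nn.functional as F

def relight_lambertian(albedo, normals, light_dir, light_color):
    """albedo: (H, W, 3) in [0, 1]; normals: (H, W, 3) unit vectors;
    light_dir: (3,) pointing toward the light; light_color: (3,)."""
    n = F.normalize(normals, dim=-1)
    l = F.normalize(light_dir, dim=0)
    ndotl = (n * l).sum(-1, keepdim=True).clamp(min=0.0)  # cosine term
    return albedo * light_color * ndotl                    # diffuse only

h = w = 4
albedo = torch.rand(h, w, 3)
normals = F.normalize(torch.randn(h, w, 3), dim=-1)
img = relight_lambertian(albedo, normals, torch.tensor([0., 0., 1.]),
                         torch.tensor([1.0, 0.9, 0.8]))
print(img.shape)  # torch.Size([4, 4, 3])
```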
- IDArb: Intrinsic Decomposition for Arbitrary Number of Input Views and Illuminations [64.07859467542664]
Capturing geometric and material information from images remains a fundamental challenge in computer vision and graphics. Traditional optimization-based methods often require hours of computational time to reconstruct geometry, material properties, and environmental lighting from dense multi-view inputs. We introduce IDArb, a diffusion-based model designed to perform intrinsic decomposition on an arbitrary number of images under varying illuminations.
arXiv Detail & Related papers (2024-12-16T18:52:56Z)
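Handling an arbitrary number of input views typically means letting features attend across the view axis so the network is agnostic to N. A toy sketch of that idea, with all dimensions assumed for illustration:

```python
# Toy sketch of attention across an arbitrary number of views.
import torch
import torch.nn as nn

class CrossViewBlock(nn.Module):
    """Treats each view's features as tokens and lets every pixel
    attend across views, so the view count can vary at inference."""
    def __init__(self, dim=32, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats):                 # feats: (N, H*W, C)
        # Fold pixels into the batch and attend over the view axis.
        tokens = feats.permute(1, 0, 2)       # (H*W, N, C)
        out, _ = self.attn(tokens, tokens, tokens)
        return self.norm(out + tokens).permute(1, 0, 2)

block = CrossViewBlock()
for n_views in (1, 3, 7):                     # arbitrary view counts
    x = torch.randn(n_views, 16 * 16, 32)
    print(block(x).shape)                     # (n_views, 256, 32)
```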
- Neural LightRig: Unlocking Accurate Object Normal and Material Estimation with Multi-Light Diffusion [45.81230812844384]
We present a novel framework that boosts intrinsic estimation by leveraging auxiliary multi-lighting conditions from 2D diffusion priors. We train a large G-buffer model with a U-Net backbone to accurately predict surface normals and materials.
arXiv Detail & Related papers (2024-12-12T18:58:09Z)
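One way to read "predict surface normals and materials from multi-lighting conditions" is a network that consumes K relit images stacked channel-wise and emits G-buffer maps. A hypothetical miniature version (the real model is a large U-Net):

```python
# Hypothetical sketch: normals/materials from K relit images stacked as channels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GBufferNet(nn.Module):
    """Miniature stand-in for a G-buffer model: consumes K images of the
    object under different lights, emits normals (3 ch) + material (2 ch)."""
    def __init__(self, k_lights=6):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3 * k_lights, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.SiLU(),
        )
        self.head = nn.Conv2d(64, 3 + 2, 3, padding=1)

    def forward(self, multi_light):            # (B, K, 3, H, W)
        b, k, c, h, w = multi_light.shape
        x = multi_light.reshape(b, k * c, h, w)
        out = self.head(self.encoder(x))
        normals = F.normalize(out[:, :3], dim=1)   # unit normals
        material = torch.sigmoid(out[:, 3:])       # e.g. roughness + metallic
        return normals, material

net = GBufferNet(k_lights=6)
imgs = torch.rand(2, 6, 3, 32, 32)              # 6 synthetic lighting conditions
n, m = net(imgs)
print(n.shape, m.shape)   # (2, 3, 32, 32) (2, 2, 32, 32)
```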
- LumiNet: Latent Intrinsics Meets Diffusion Models for Indoor Scene Relighting [13.433775723052753]
Given a source image and a target lighting image, LumiNet synthesizes a relit version of the source scene that captures the target's lighting. LumiNet processes latent representations from two different images, preserving geometry and albedo from the source while transferring lighting characteristics from the target.
arXiv Detail & Related papers (2024-11-29T18:59:11Z)
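The stated recipe, keep the source's geometry/albedo code and swap in the target's lighting code, can be illustrated with a toy encoder whose latent is split into intrinsic and lighting halves. This split is an assumption for illustration, not LumiNet's actual architecture:

```python
# Toy sketch of latent mixing: intrinsics from the source, lighting from the target.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.conv = nn.Conv2d(3, dim, 4, stride=2, padding=1)
        self.split = dim // 2       # first half: intrinsics, second: lighting

    def forward(self, img):
        z = self.conv(img)
        return z[:, :self.split], z[:, self.split:]

class Decoder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.deconv = nn.ConvTranspose2d(dim, 3, 4, stride=2, padding=1)

    def forward(self, intrinsics, lighting):
        return torch.sigmoid(self.deconv(torch.cat([intrinsics, lighting], dim=1)))

enc, dec = Encoder(), Decoder()
src, tgt = torch.rand(1, 3, 32, 32), torch.rand(1, 3, 32, 32)
src_intr, _ = enc(src)            # keep geometry/albedo code from the source
_, tgt_light = enc(tgt)           # borrow lighting code from the target
relit = dec(src_intr, tgt_light)  # source content under target lighting
print(relit.shape)                # torch.Size([1, 3, 32, 32])
```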
- A Diffusion Approach to Radiance Field Relighting using Multi-Illumination Synthesis [6.883971329818549]
We introduce a method to create relightable radiance fields using single-illumination data.
We first fine-tune a 2D diffusion model on a multi-illumination dataset conditioned by light direction.
We show results on synthetic and real multi-view data under single illumination.
arXiv Detail & Related papers (2024-09-13T16:07:25Z)
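Conditioning a diffusion model on light direction usually amounts to embedding the direction vector and fusing it with the timestep embedding. A hedged sketch of one training step under that assumption (a real system would fine-tune a pretrained backbone such as a latent diffusion model):

```python
# Sketch of a denoiser conditioned on light direction (illustrative only).
import torch
import torch.nn as nn

class LightConditionedDenoiser(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.light_mlp = nn.Sequential(nn.Linear(3, ch), nn.SiLU(), nn.Linear(ch, ch))
        self.time_mlp = nn.Sequential(nn.Linear(1, ch), nn.SiLU(), nn.Linear(ch, ch))
        self.body = nn.Sequential(
            nn.Conv2d(3 + ch, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, x, t, light_dir):
        # Fuse timestep and light-direction embeddings, broadcast spatially.
        emb = self.time_mlp(t[:, None].float()) + self.light_mlp(light_dir)
        emb = emb[:, :, None, None].expand(-1, -1, x.shape[2], x.shape[3])
        return self.body(torch.cat([x, emb], dim=1))

model = LightConditionedDenoiser()
x = torch.randn(4, 3, 32, 32)                    # noisy images
t = torch.randint(0, 1000, (4,))                 # diffusion timesteps
light = nn.functional.normalize(torch.randn(4, 3), dim=-1)
eps_pred = model(x, t, light)
loss = nn.functional.mse_loss(eps_pred, torch.randn_like(x))  # toy eps target
loss.backward()
print(loss.item())
```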
- Factored-NeuS: Reconstructing Surfaces, Illumination, and Materials of Possibly Glossy Objects [58.25772313290338]
We develop a method that recovers the surface, materials, and illumination of a scene from its posed multi-view images. It does not require any additional data and can handle glossy objects or bright lighting.
arXiv Detail & Related papers (2023-05-29T07:44:19Z)
- Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
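The two components enumerated above suggest an estimate-then-rerender pipeline. The stubs below only show the data flow; every function body is a placeholder, not the paper's method:

```python
# Structural sketch of a two-stage edit pipeline:
# (1) estimate reflectance + parametric lighting, (2) re-render under edits.
import torch

def estimate_scene(image, depth, light_masks):
    """Stage 1 (stand-in): return per-pixel reflectance and a parametric
    light list; a real system would use learned inverse-rendering networks."""
    reflectance = image.clamp(0, 1)              # placeholder estimate
    lights = [{"position": torch.tensor([0., 1., 0.]),
               "intensity": torch.tensor(1.0)}]
    return reflectance, lights

def rerender(reflectance, depth, lights):
    """Stage 2 (stand-in): trivial shading; a neural renderer would go here."""
    total = sum(l["intensity"] for l in lights)
    return (reflectance * total).clamp(0, 1)

img = torch.rand(3, 32, 32)
depth = torch.rand(1, 32, 32)
masks = torch.zeros(1, 32, 32)
refl, lights = estimate_scene(img, depth, masks)
lights[0]["intensity"] = torch.tensor(0.5)       # edit: dim the light
edited = rerender(refl, depth, lights)
print(edited.shape)                              # torch.Size([3, 32, 32])
```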