Lighting in Motion: Spatiotemporal HDR Lighting Estimation
- URL: http://arxiv.org/abs/2512.13597v1
- Date: Mon, 15 Dec 2025 17:49:22 GMT
- Title: Lighting in Motion: Spatiotemporal HDR Lighting Estimation
- Authors: Christophe Bolduc, Julien Philip, Li Ma, Mingming He, Paul Debevec, Jean-François Lalonde
- Abstract summary: We present Lighting in Motion (LiMo), a diffusion-based approach to lighting estimation. LiMo targets both realistic high-frequency prediction and accurate illuminance estimation.
- Score: 17.395631978283657
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present Lighting in Motion (LiMo), a diffusion-based approach to spatiotemporal lighting estimation. LiMo targets both realistic high-frequency detail prediction and accurate illuminance estimation. To account for both, we propose generating a set of mirrored and diffuse spheres at different exposures, based on their 3D positions in the input. Making use of diffusion priors, we fine-tune powerful existing diffusion models on a large-scale customized dataset of indoor and outdoor scenes, paired with spatiotemporal light probes. For accurate spatial conditioning, we demonstrate that depth alone is insufficient and we introduce a new geometric condition to provide the relative position of the scene to the target 3D position. Finally, we combine diffuse and mirror predictions at different exposures into a single HDRI map leveraging differentiable rendering. We thoroughly evaluate our method and design choices to establish LiMo as state-of-the-art for both spatial control and prediction accuracy.
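The abstract describes combining diffuse and mirror sphere predictions at different exposures into a single HDRI map. As a rough intuition for the multi-exposure part, the sketch below shows a classic Debevec-style weighted merge of differently exposed LDR predictions into one linear HDR map; this is only an illustrative stand-in, since LiMo itself recovers the HDRI via differentiable rendering, and the function name and weighting scheme here are assumptions, not the paper's method.

```python
import numpy as np

def merge_exposures(ldr_probes, exposures, eps=1e-6):
    """Merge multi-exposure LDR light-probe predictions into one HDR map.

    ldr_probes: list of arrays with values in [0, 1], one per exposure.
    exposures:  relative exposure factor used for each prediction.

    Illustrative weighted merge only; LiMo instead combines its
    predictions through a differentiable rendering optimization.
    """
    num = np.zeros_like(ldr_probes[0], dtype=np.float64)
    den = np.zeros_like(ldr_probes[0], dtype=np.float64)
    for ldr, t in zip(ldr_probes, exposures):
        w = 1.0 - np.abs(2.0 * ldr - 1.0)  # hat weight: trust mid-tones,
        num += w * (ldr / t)               # ignore clipped/noisy extremes
        den += w                           # ldr / t undoes the exposure
    return num / np.maximum(den, eps)
```

Pixels that clip to white at a long exposure get weight zero there and are recovered from a shorter exposure, which is why predicting the spheres at several exposures lets an HDR map with correct absolute illuminance be reconstructed.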
Related papers
- Spatiotemporally Consistent Indoor Lighting Estimation with Diffusion Priors [18.794530505630227]
Lighting estimation from a single image or video remains challenging due to its highly ill-posed nature.
We propose a method that estimates, from an input video, a continuous light field describing the lighting of the scene.
We show results on consistent lighting estimation from in-the-wild videos, which is rarely demonstrated in previous works.
arXiv Detail & Related papers (2025-08-11T18:11:42Z) - Neural LightRig: Unlocking Accurate Object Normal and Material Estimation with Multi-Light Diffusion [45.81230812844384]
We present a novel framework that boosts intrinsic estimation by leveraging auxiliary multi-lighting conditions from 2D diffusion priors.
We train a large G-buffer model with a U-Net backbone to accurately predict surface normals and materials.
arXiv Detail & Related papers (2024-12-12T18:58:09Z) - A Diffusion Approach to Radiance Field Relighting using Multi-Illumination Synthesis [6.883971329818549]
We introduce a method to create relightable radiance fields using single-illumination data.
We first fine-tune a 2D diffusion model on a multi-illumination dataset conditioned by light direction.
We show results on synthetic and real multi-view data under single illumination.
arXiv Detail & Related papers (2024-09-13T16:07:25Z) - LightOctree: Lightweight 3D Spatially-Coherent Indoor Lighting Estimation [4.079873017864992]
We present a lightweight solution for estimating spatially-coherent indoor lighting from a single RGB image.
We introduce a unified, voxel octree-based illumination estimation framework to produce 3D spatially-coherent lighting.
arXiv Detail & Related papers (2024-04-05T07:15:06Z) - Spatiotemporally Consistent HDR Indoor Lighting Estimation [66.26786775252592]
We propose a physically-motivated deep learning framework to solve the indoor lighting estimation problem.
Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position.
Our framework achieves photorealistic lighting prediction with higher quality compared to state-of-the-art single-image or video-based methods.
arXiv Detail & Related papers (2023-05-07T20:36:29Z) - Neural Light Field Estimation for Street Scenes with Differentiable Virtual Object Insertion [129.52943959497665]
Existing works on outdoor lighting estimation typically simplify the scene lighting into an environment map.
We propose a neural approach that estimates the 5D HDR light field from a single image.
We show the benefits of our AR object insertion in an autonomous driving application.
arXiv Detail & Related papers (2022-08-19T17:59:16Z) - Sparse Needlets for Lighting Estimation with Spherical Transport Loss [89.52531416604774]
NeedleLight is a new lighting estimation model that represents illumination with needlets and allows lighting estimation in both frequency domain and spatial domain jointly.
Extensive experiments show that NeedleLight achieves superior lighting estimation consistently across multiple evaluation metrics as compared with state-of-the-art methods.
arXiv Detail & Related papers (2021-06-24T15:19:42Z) - EMLight: Lighting Estimation via Spherical Distribution Approximation [33.26530733479459]
We propose an illumination estimation framework that leverages a regression network and a neural projector for accurate illumination estimation.
We decompose the illumination map into spherical light distribution, light intensity and the ambient term.
Under the guidance of the predicted spherical distribution, light intensity and ambient term, the neural projector synthesizes panoramic illumination maps with realistic light frequency.
arXiv Detail & Related papers (2020-12-21T04:54:08Z) - Lighthouse: Predicting Lighting Volumes for Spatially-Coherent Illumination [84.00096195633793]
We present a deep learning solution for estimating the incident illumination at any 3D location within a scene from an input narrow-baseline stereo image pair.
Our model is trained without any ground truth 3D data and only requires a held-out perspective view near the input stereo pair and a spherical panorama taken within each scene as supervision.
We demonstrate that our method can predict consistent spatially-varying lighting that is convincing enough to plausibly relight and insert highly specular virtual objects into real images.
arXiv Detail & Related papers (2020-03-18T17:46:30Z) - Multi-View Photometric Stereo: A Robust Solution and Benchmark Dataset for Spatially Varying Isotropic Materials [65.95928593628128]
We present a method to capture both 3D shape and spatially varying reflectance with a multi-view photometric stereo technique.
Our algorithm is suitable for perspective cameras and nearby point light sources.
arXiv Detail & Related papers (2020-01-18T12:26:22Z)
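The last entry notes that its algorithm handles perspective cameras and nearby point light sources, unlike classic photometric stereo, which assumes distant lights. The sketch below illustrates why a nearby source changes the problem: the light direction and inverse-square falloff vary per surface point under a simple Lambertian model. All names here are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def nearby_point_light_shading(points, normals, albedo, light_pos, light_power):
    """Lambertian shading under a nearby point light (illustrative only).

    With a distant light, the light direction is one global vector; with a
    nearby source it differs per point and intensity falls off with the
    squared distance, which is the complication the paper addresses.
    """
    to_light = light_pos - points                       # per-point light vector
    dist = np.linalg.norm(to_light, axis=-1, keepdims=True)
    l = to_light / dist                                 # unit light direction
    ndotl = np.clip(np.sum(normals * l, axis=-1), 0.0, None)
    return albedo * light_power * ndotl / dist[..., 0] ** 2
```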
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.