Towards Physically-Based Sky-Modeling For Image Based Lighting
- URL: http://arxiv.org/abs/2512.15632v1
- Date: Mon, 15 Dec 2025 16:44:38 GMT
- Title: Towards Physically-Based Sky-Modeling For Image Based Lighting
- Authors: Ian J. Maquignaz
- Abstract summary: Environment maps are a key component for rendering photorealistic outdoor scenes with coherent illumination. Recent works have extended sky-models to be more comprehensive and inclusive of cloud formations but, as we demonstrate, existing methods fall short in faithfully recreating natural skies. We propose AllSky, a flexible all-weather sky-model learned directly from physically captured HDRI.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate environment maps are a key component for rendering photorealistic outdoor scenes with coherent illumination. They enable captivating visual arts, immersive virtual reality, and a wide range of engineering and scientific applications. Recent works have extended sky-models to be more comprehensive and inclusive of cloud formations but, as we demonstrate, existing methods fall short in faithfully recreating natural skies. Though in recent years the visual quality of DNN-generated High Dynamic Range Imagery (HDRI) has greatly improved, the environment maps generated by DNN sky-models do not re-light scenes with the same tones, shadows, and illumination as physically captured HDR imagery. In this work, we demonstrate that progress in the HDR literature is tangential to sky-modelling, as current works cannot support both photorealism and the 22 f-stops required for the Full Dynamic Range (FDR) of outdoor illumination. To this end, we propose AllSky, a flexible all-weather sky-model learned directly from physically captured HDRI, which we leverage to study the input modalities, tonemapping, conditioning, and evaluation of sky-models. Through user-controlled positioning of the sun and cloud formations, AllSky expands on current functionality by allowing intuitive control over environment maps and achieves state-of-the-art sky-model performance. Through our proposed evaluation, we demonstrate that existing DNN sky-models are not interchangeable with physically captured HDRI or parametric sky-models, with current limitations being prohibitive of scalability and accurate illumination in downstream applications.
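The abstract's claim that outdoor illumination spans roughly 22 f-stops can be made concrete with a small sketch: one f-stop is a doubling of light, so the dynamic range of a set of luminance samples is log2(max/min) over the positive values. The function name, the noise floor, and the synthetic sample values below are illustrative assumptions, not part of the paper.

```python
import math

def dynamic_range_fstops(luminances, floor=1e-6):
    """Dynamic range of luminance samples, in f-stops.

    One f-stop is a doubling of light, so the range in stops is
    log2(max / min) over samples above a small noise floor.
    """
    positive = [v for v in luminances if v > floor]
    if not positive:
        return 0.0
    return math.log2(max(positive) / min(positive))

# Synthetic sky samples: a deeply shadowed region (0.01, arbitrary
# linear units) up to a solar disc ~4.2 million times brighter,
# which works out to roughly 22 stops.
samples = [0.01, 1.0, 250.0, 42_000.0]
print(f"{dynamic_range_fstops(samples):.1f} stops")
```

A conventional 8-bit tonemapped image covers far fewer stops, which is why the paper distinguishes visually plausible DNN output from imagery that can actually re-light a scene.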
Related papers
- LuxDiT: Lighting Estimation with Video Diffusion Transformer [66.60450792095901]
Estimating scene lighting from a single image or video remains a longstanding challenge in computer vision and graphics. We propose LuxDiT, a novel data-driven approach that fine-tunes a video diffusion transformer to generate HDR environment maps conditioned on visual input.
arXiv Detail & Related papers (2025-09-03T19:59:20Z) - StreetCrafter: Street View Synthesis with Controllable Video Diffusion Models [76.62929629864034]
We introduce StreetCrafter, a controllable video diffusion model that utilizes LiDAR point cloud renderings as pixel-level conditions. In addition, the utilization of pixel-level LiDAR conditions allows us to make accurate pixel-level edits to target scenes. Our model enables flexible control over viewpoint changes, enlarging the view for satisfying rendering regions.
arXiv Detail & Related papers (2024-12-17T18:58:55Z) - Towards Physically-Based Sky-Modeling [0.0]
We propose an all-weather sky-model, learning weathered skies directly from physically captured HDR imagery. Our model (AllSky) allows for emulation of physically captured environment maps with improved retention of the Extended Dynamic Range (EDR) of the sky.
arXiv Detail & Related papers (2024-12-16T15:32:05Z) - Skyeyes: Ground Roaming using Aerial View Images [9.159470619808127]
We introduce Skyeyes, a novel framework that can generate sequences of ground view images using only aerial view inputs.
More specifically, we combine a 3D representation with a view consistent generation model, which ensures coherence between generated images.
The images maintain improved spatial-temporal coherence and realism, enhancing scene comprehension and visualization from aerial perspectives.
arXiv Detail & Related papers (2024-09-25T07:21:43Z) - NeRF On-the-go: Exploiting Uncertainty for Distractor-free NeRFs in the Wild [55.154625718222995]
We introduce NeRF On-the-go, a simple yet effective approach that enables the robust synthesis of novel views in complex, in-the-wild scenes.
Our method demonstrates a significant improvement over state-of-the-art techniques.
arXiv Detail & Related papers (2024-05-29T02:53:40Z) - Relightable Neural Actor with Intrinsic Decomposition and Pose Control [80.06094206522668]
We propose Relightable Neural Actor, a new video-based method for learning a pose-driven neural human model that can be relighted.
For training, our method solely requires a multi-view recording of the human under a known, but static lighting condition.
To evaluate our approach in real-world scenarios, we collect a new dataset with four identities recorded under different light conditions, indoors and outdoors.
arXiv Detail & Related papers (2023-12-18T14:30:13Z) - Deep Dynamic Cloud Lighting [3.4442294678697385]
We propose a solution which enables whole-sky dynamic cloud lighting for the first time.
We synthesise a multi-timescale sky appearance model which learns to predict the sky illumination over various timescales.
arXiv Detail & Related papers (2023-04-18T22:02:54Z) - ClimateNeRF: Extreme Weather Synthesis in Neural Radiance Field [57.859851662796316]
We describe a novel NeRF-editing procedure that can fuse physical simulations with NeRF models of scenes.
Results are significantly more realistic than those from SOTA 2D image editing and SOTA 3D NeRF stylization.
arXiv Detail & Related papers (2022-11-23T18:59:13Z) - Neural Light Field Estimation for Street Scenes with Differentiable Virtual Object Insertion [129.52943959497665]
Existing works on outdoor lighting estimation typically simplify the scene lighting into an environment map.
We propose a neural approach that estimates the 5D HDR light field from a single image.
We show the benefits of our AR object insertion in an autonomous driving application.
arXiv Detail & Related papers (2022-08-19T17:59:16Z) - Sat-NeRF: Learning Multi-View Satellite Photogrammetry With Transient Objects and Shadow Modeling Using RPC Cameras [10.269997499911668]
We introduce the Satellite Neural Radiance Field (Sat-NeRF), a new end-to-end model for learning multi-view satellite photogrammetry in the wild.
Sat-NeRF combines some of the latest trends in neural rendering with native satellite camera models.
We evaluate Sat-NeRF using WorldView-3 images from different locations and stress the advantages of applying a bundle adjustment to the satellite camera models prior to training.
arXiv Detail & Related papers (2022-03-16T19:18:46Z) - Castle in the Sky: Dynamic Sky Replacement and Harmonization in Videos [14.6001438297068]
This paper proposes a vision-based method for video sky replacement and harmonization.
We decompose this artistic creation process into a couple of proxy tasks including sky matting, motion estimation, and image blending.
Experiments are conducted on videos diversely captured in the wild by handheld smartphones and dash cameras.
arXiv Detail & Related papers (2020-10-22T15:27:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.