Deep Dynamic Cloud Lighting
- URL: http://arxiv.org/abs/2304.09317v1
- Date: Tue, 18 Apr 2023 22:02:54 GMT
- Title: Deep Dynamic Cloud Lighting
- Authors: Pinar Satilmis, Thomas Bashford-Rogers
- Abstract summary: We propose a solution which enables whole-sky dynamic cloud synthesis for the first time.
We synthesise a multi-timescale sky appearance model which learns to predict the sky illumination over various timescales.
- Score: 3.4442294678697385
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sky illumination is a core source of lighting in rendering, and a substantial
amount of work has been developed to simulate lighting from clear skies.
However, in reality, clouds substantially alter the appearance of the sky and
subsequently change the scene's illumination. While there have been recent
advances in developing sky models which include clouds, these all neglect cloud
movement which is a crucial component of cloudy sky appearance. In any sort of
video or interactive environment, it can be expected that clouds will move,
sometimes quite substantially in a short period of time. Our work proposes a
solution to this which enables whole-sky dynamic cloud synthesis for the first
time. We achieve this by proposing a multi-timescale sky appearance model which
learns to predict the sky illumination over various timescales, and can be used
to add dynamism to previous static, cloudy sky lighting approaches.
Related papers
- Towards Physically-Based Sky-Modeling For Image Based Lighting [0.0]
Environment maps are a key component for rendering photorealistic outdoor scenes with coherent illumination.
Recent works have extended sky-models to be more comprehensive and inclusive of cloud formations but, as we demonstrate, existing methods fall short in faithfully recreating natural skies.
We propose AllSky, a flexible all-weather sky-model learned directly from physically captured HDRI.
arXiv Detail & Related papers (2025-12-15T16:44:38Z) - Light-X: Generative 4D Video Rendering with Camera and Illumination Control [52.87059646145144]
Light-X is a video generation framework that enables controllable rendering from monocular videos with both viewpoint and illumination control.
To address the lack of paired multi-view and multi-illumination videos, we introduce Light-Syn, a degradation-based pipeline with inverse-mapping.
arXiv Detail & Related papers (2025-12-04T18:59:57Z) - Controllable Weather Synthesis and Removal with Video Diffusion Models [61.56193902622901]
WeatherWeaver is a video diffusion model that synthesizes diverse weather effects directly into any input video.
Our model provides precise control over weather effect intensity and supports blending various weather types, ensuring both realism and adaptability.
arXiv Detail & Related papers (2025-05-01T17:59:57Z) - Towards Physically-Based Sky-Modeling [0.0]
We propose an all-weather sky-model, learning all-weather skies directly from physically captured HDR imagery.
Our model (AllSky) allows for emulation of physically captured environment maps with improved retention of the Extended Dynamic Range (EDR) of the sky.
arXiv Detail & Related papers (2024-12-16T15:32:05Z) - Precise Forecasting of Sky Images Using Spatial Warping [12.042758147684822]
We introduce a deep learning method to predict a future sky image frame with higher resolution than previous methods.
Our main contribution is to derive an optimal warping method to counter the adverse effects of clouds at the horizon.
arXiv Detail & Related papers (2024-09-18T17:25:42Z) - IDF-CR: Iterative Diffusion Process for Divide-and-Conquer Cloud Removal in Remote-sensing Images [55.40601468843028]
We present an iterative diffusion process for cloud removal (IDF-CR).
IDF-CR is divided into two-stage models that address pixel space and latent space.
In the latent space stage, the diffusion model transforms low-quality cloud removal into high-quality clean output.
arXiv Detail & Related papers (2024-03-18T15:23:48Z) - Relightable Neural Actor with Intrinsic Decomposition and Pose Control [80.06094206522668]
We propose Relightable Neural Actor, a new video-based method for learning a pose-driven neural human model that can be relighted.
For training, our method solely requires a multi-view recording of the human under a known, but static lighting condition.
To evaluate our approach in real-world scenarios, we collect a new dataset with four identities recorded under different light conditions, indoors and outdoors.
arXiv Detail & Related papers (2023-12-18T14:30:13Z) - The Sky's the Limit: Re-lightable Outdoor Scenes via a Sky-pixel Constrained Illumination Prior and Outside-In Visibility [18.46907109338604]
Inverse rendering of outdoor scenes from unconstrained image collections is a challenging task.
We exploit the fact that any sky pixel provides a direct observation of distant lighting.
Our method estimates high-quality albedo, geometry, illumination and sky visibility.
arXiv Detail & Related papers (2023-11-28T16:39:49Z) - Masked Spatio-Temporal Structure Prediction for Self-supervised Learning on Point Cloud Videos [75.9251839023226]
We propose a Masked Spatio-Temporal Structure Prediction (MaST-Pre) method to capture the structure of point cloud videos without human annotations.
MaST-Pre consists of two self-supervised learning tasks. First, by reconstructing masked point tubes, our method is able to capture appearance information of point cloud videos.
Second, to learn motion, we propose a temporal cardinality difference prediction task that estimates the change in the number of points within a point tube.
arXiv Detail & Related papers (2023-08-18T02:12:54Z) - ScatterNeRF: Seeing Through Fog with Physically-Based Inverse Neural Rendering [83.75284107397003]
We introduce ScatterNeRF, a neural rendering method which renders scenes and decomposes the fog-free background.
We propose a disentangled representation for the scattering volume and the scene objects, and learn the scene reconstruction with physics-inspired losses.
We validate our method by capturing multi-view In-the-Wild data and controlled captures in a large-scale fog chamber.
arXiv Detail & Related papers (2023-05-03T13:24:06Z) - UnCRtainTS: Uncertainty Quantification for Cloud Removal in Optical Satellite Time Series [19.32220113046804]
We introduce UnCRtainTS, a method for multi-temporal cloud removal built on a novel attention-based architecture.
We show how the well-calibrated predicted uncertainties enable a precise control of the reconstruction quality.
arXiv Detail & Related papers (2023-04-11T19:27:18Z) - Boosting Point Clouds Rendering via Radiance Mapping [49.24193509772339]
We focus on boosting the image quality of point clouds rendering with a compact model design.
We simplify the NeRF representation to a spatial mapping function which only requires single evaluation per pixel.
Our method achieves the state-of-the-art rendering on point clouds, outperforming prior works by notable margins.
arXiv Detail & Related papers (2022-10-27T01:25:57Z) - Generating the Cloud Motion Winds Field from Satellite Cloud Imagery Using Deep Learning Approach [1.8655840060559172]
We explore the cloud motion winds algorithm based on data-driven deep learning approach.
We use deep learning model to automatically learn the motion feature representations and directly output the field of cloud motion winds.
We also try to use a single cloud image to predict the cloud motion winds field in a fixed region, which is impossible to achieve with traditional algorithms.
arXiv Detail & Related papers (2020-10-03T05:40:36Z) - Thick Cloud Removal of Remote Sensing Images Using Temporal Smoothness and Sparsity-Regularized Tensor Optimization [3.65794756599491]
In remote sensing images, the presence of thick cloud accompanied by cloud shadow is a high-probability event.
A novel thick cloud removal method for remote sensing images based on temporal smoothness and sparsity-regularized tensor optimization is proposed.
arXiv Detail & Related papers (2020-08-11T05:59:20Z) - Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation and Spatial Supervision [68.35777836993212]
We propose a Pseudo-LiDAR point cloud network to generate temporally and spatially high-quality point cloud sequences.
By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship.
arXiv Detail & Related papers (2020-06-20T03:11:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.