Controllable Weather Synthesis and Removal with Video Diffusion Models
- URL: http://arxiv.org/abs/2505.00704v1
- Date: Thu, 01 May 2025 17:59:57 GMT
- Title: Controllable Weather Synthesis and Removal with Video Diffusion Models
- Authors: Chih-Hao Lin, Zian Wang, Ruofan Liang, Yuxuan Zhang, Sanja Fidler, Shenlong Wang, Zan Gojcic
- Abstract summary: WeatherWeaver is a video diffusion model that synthesizes diverse weather effects directly into any input video. Our model provides precise control over weather effect intensity and supports blending various weather types, ensuring both realism and adaptability.
- Score: 61.56193902622901
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generating realistic and controllable weather effects in videos is valuable for many applications. Physics-based weather simulation requires precise reconstructions that are hard to scale to in-the-wild videos, while current video editing often lacks realism and control. In this work, we introduce WeatherWeaver, a video diffusion model that synthesizes diverse weather effects -- including rain, snow, fog, and clouds -- directly into any input video without the need for 3D modeling. Our model provides precise control over weather effect intensity and supports blending various weather types, ensuring both realism and adaptability. To overcome the scarcity of paired training data, we propose a novel data strategy combining synthetic videos, generative image editing, and auto-labeled real-world videos. Extensive evaluations show that our method outperforms state-of-the-art methods in weather simulation and removal, providing high-quality, physically plausible, and scene-identity-preserving results over various real-world videos.
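The abstract describes controlling weather intensity and blending multiple weather types, but no interface is published. As a purely hypothetical sketch (the weather types come from the abstract; the function name, vector layout, and clamping behavior are assumptions, not WeatherWeaver's actual API), such control is often exposed to a conditional diffusion model as a per-type intensity vector:

```python
import numpy as np

WEATHER_TYPES = ["rain", "snow", "fog", "clouds"]  # types named in the abstract


def blend_weather_condition(intensities: dict) -> np.ndarray:
    """Build a conditioning vector from per-weather intensities.

    Hypothetical illustration only: maps each weather type to a value
    clamped to [0, 1], with 0 meaning the effect is absent. A diffusion
    model could consume such a vector as conditioning input.
    """
    vec = np.zeros(len(WEATHER_TYPES), dtype=np.float32)
    for i, weather in enumerate(WEATHER_TYPES):
        vec[i] = np.clip(intensities.get(weather, 0.0), 0.0, 1.0)
    return vec


# e.g. light rain blended with heavy fog
cond = blend_weather_condition({"rain": 0.3, "fog": 0.9})
```

Setting an intensity to zero would correspond to removing that effect, which is consistent with the paper framing synthesis and removal as two ends of the same control axis.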
Related papers
- Removing Multiple Hybrid Adverse Weather in Video via a Unified Model [6.868658821057831]
We propose a novel unified model, dubbed UniWRV, to remove multiple heterogeneous video weather degradations in an all-in-one fashion. Our UniWRV exhibits robust and superior adaptation capability across multiple heterogeneous degradation learning scenarios.
arXiv Detail & Related papers (2025-03-08T13:01:22Z)
- SimVS: Simulating World Inconsistencies for Robust View Synthesis [102.83898965828621]
We present an approach for leveraging generative video models to simulate the inconsistencies in the world that can occur during capture. We demonstrate that our world-simulation strategy significantly outperforms traditional augmentation methods in handling real-world scene variations.
arXiv Detail & Related papers (2024-12-10T17:35:12Z)
- Multiple weather images restoration using the task transformer and adaptive mixup strategy [14.986500375481546]
We introduce a novel multi-task severe weather removal model that can effectively handle complex weather conditions in an adaptive manner.
Our model incorporates a weather task sequence generator, enabling the self-attention mechanism to selectively focus on features specific to different weather types.
Our proposed model has achieved state-of-the-art performance on the publicly available dataset.
arXiv Detail & Related papers (2024-09-05T04:55:40Z)
- Genuine Knowledge from Practice: Diffusion Test-Time Adaptation for Video Adverse Weather Removal [53.15046196592023]
We introduce test-time adaptation into adverse weather removal in videos.
We propose the first framework that integrates test-time adaptation into the iterative diffusion reverse process.
arXiv Detail & Related papers (2024-03-12T14:21:30Z)
- Video Adverse-Weather-Component Suppression Network via Weather Messenger and Adversarial Backpropagation [45.184188689391775]
We propose the first framework for restoring videos from all adverse weather conditions by developing a video adverse-weather-component suppression network (ViWS-Net).
Our ViWS-Net outperforms current state-of-the-art methods in terms of restoring videos degraded by any weather condition.
arXiv Detail & Related papers (2023-09-24T17:13:55Z)
- ScatterNeRF: Seeing Through Fog with Physically-Based Inverse Neural Rendering [83.75284107397003]
We introduce ScatterNeRF, a neural rendering method that renders foggy scenes while decomposing the fog-free background.
We propose a disentangled representation for the scattering volume and the scene objects, and learn the scene reconstruction with physics-inspired losses.
We validate our method by capturing multi-view In-the-Wild data and controlled captures in a large-scale fog chamber.
arXiv Detail & Related papers (2023-05-03T13:24:06Z)
- ClimateNeRF: Extreme Weather Synthesis in Neural Radiance Field [57.859851662796316]
We describe a novel NeRF-editing procedure that can fuse physical simulations with NeRF models of scenes.
Results are significantly more realistic than those from SOTA 2D image editing and SOTA 3D NeRF stylization.
arXiv Detail & Related papers (2022-11-23T18:59:13Z)
- Semi-Supervised Video Deraining with Dynamic Rain Generator [59.71640025072209]
This paper proposes a new semi-supervised video deraining method, in which a dynamic rain generator is employed to fit the rain layer.
Specifically, this dynamic generator consists of an emission model and a transition model that jointly encode the spatial physical structure and temporally continuous changes of rain streaks.
Various priors are designed for the labeled synthetic and unlabeled real data, so as to fully exploit the common knowledge underlying them.
arXiv Detail & Related papers (2021-03-14T14:28:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.