WeatherDiffusion: Weather-Guided Diffusion Model for Forward and Inverse Rendering
- URL: http://arxiv.org/abs/2508.06982v1
- Date: Sat, 09 Aug 2025 13:29:39 GMT
- Title: WeatherDiffusion: Weather-Guided Diffusion Model for Forward and Inverse Rendering
- Authors: Yixin Zhu, Zuoliang Zhu, Miloš Hašan, Jian Yang, Jin Xie, Beibei Wang
- Abstract summary: WeatherDiffusion is a diffusion-based framework for forward and inverse rendering on autonomous driving scenes. Our method enables authentic estimation of material properties, scene geometry, and lighting, and further supports controllable weather and illumination editing.
- Score: 40.94600501568197
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Forward and inverse rendering have emerged as key techniques for enabling understanding and reconstruction in the context of autonomous driving (AD). However, complex weather and illumination pose great challenges to this task. The emergence of large diffusion models has shown promise in achieving reasonable results through learning from 2D priors, but these models are difficult to control and lack robustness. In this paper, we introduce WeatherDiffusion, a diffusion-based framework for forward and inverse rendering on AD scenes with various weather and lighting conditions. Our method enables authentic estimation of material properties, scene geometry, and lighting, and further supports controllable weather and illumination editing through the use of predicted intrinsic maps guided by text descriptions. We observe that different intrinsic maps should correspond to different regions of the original image. Based on this observation, we propose Intrinsic map-aware attention (MAA) to enable high-quality inverse rendering. Additionally, we introduce a synthetic dataset (i.e., WeatherSynthetic) and a real-world dataset (i.e., WeatherReal) for forward and inverse rendering on AD scenes with diverse weather and lighting. Extensive experiments show that our WeatherDiffusion outperforms state-of-the-art methods on several benchmarks. Moreover, our method demonstrates significant value in downstream tasks for AD, enhancing the robustness of object detection and image segmentation in challenging weather scenarios.
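The abstract's core idea for intrinsic map-aware attention is that each intrinsic map (albedo, normals, lighting, etc.) should attend only to the image regions it corresponds to. The paper's actual architecture is not given here, so the following is only a minimal single-head sketch of region-masked attention in numpy; the function name, the boolean `region_mask` convention (True = query may attend to that key), and all shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def map_aware_attention(q, k, v, region_mask):
    """Single-head attention where each query token attends only to
    key tokens flagged as belonging to its intrinsic map's region.

    q: (num_q, d) queries, k: (num_k, d) keys, v: (num_k, d) values.
    region_mask: (num_q, num_k) boolean; True = attention allowed.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                 # (num_q, num_k)
    scores = np.where(region_mask, scores, -1e9)  # suppress other regions
    return softmax(scores, axis=-1) @ v           # (num_q, d)
```

With a mask that admits a single key per query, the output collapses to that key's value row, which is the intended effect: tokens for one intrinsic map cannot pull information from unrelated image regions.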
Related papers
- SemOD: Semantic Enabled Object Detection Network under Various Weather Conditions [1.5278471408515728]
We introduce a semantic-enabled network for object detection in diverse weather conditions. Our analysis shows that semantic information enables the model to generate plausible content for missing areas. Our method pioneers the use of semantic data for all-weather transformations, yielding an mAP increase of between 1.47% and 8.80%.
arXiv Detail & Related papers (2025-11-27T06:19:30Z) - RoSe: Robust Self-supervised Stereo Matching under Adverse Weather Conditions [58.37558408672509]
We propose a robust self-supervised training paradigm consisting of two key steps: robust self-supervised scene correspondence learning and adverse weather distillation. Experiments demonstrate the effectiveness and versatility of our proposed solution, which outperforms existing state-of-the-art self-supervised methods.
arXiv Detail & Related papers (2025-09-23T15:41:40Z) - DA2Diff: Exploring Degradation-aware Adaptive Diffusion Priors for All-in-One Weather Restoration [32.16602874389847]
We propose an innovative diffusion paradigm with degradation-aware adaptive priors for all-in-one weather restoration, termed DA2Diff. We deploy a set of learnable prompts to capture degradation-aware representations via prompt-image similarity constraints in the CLIP space. We propose a dynamic expert selection modulator that employs a dynamic weather-aware router to flexibly dispatch varying numbers of restoration experts for each weather-distorted image.
arXiv Detail & Related papers (2025-04-07T14:38:57Z) - ContextualFusion: Context-Based Multi-Sensor Fusion for 3D Object Detection in Adverse Operating Conditions [1.7537812081430004]
We propose a technique called ContextualFusion to incorporate the domain knowledge about cameras and lidars behaving differently across lighting and weather variations into 3D object detection models.
Our approach yields an mAP improvement of 6.2% over state-of-the-art methods on our context-balanced synthetic dataset.
Our method enhances state-of-the-art 3D object detection performance at night on the real-world NuScenes dataset, with a significant mAP improvement of 11.7%.
arXiv Detail & Related papers (2024-04-23T06:37:54Z) - WeatherProof: Leveraging Language Guidance for Semantic Segmentation in Adverse Weather [8.902960772665482]
We propose a method to infer semantic segmentation maps from images captured under adverse weather conditions.
We begin by examining existing models on images degraded by weather conditions such as rain, fog, or snow.
We propose WeatherProof, the first semantic segmentation dataset with accurate clear and adverse weather image pairs.
arXiv Detail & Related papers (2024-03-21T22:46:27Z) - All-weather Multi-Modality Image Fusion: Unified Framework and 100k Benchmark [42.49073228252726]
Multi-modality image fusion (MMIF) combines complementary information from different image modalities to provide a more comprehensive and objective interpretation of scenes.
Existing MMIF methods lack the ability to resist different weather interferences in real-world scenes, preventing them from being useful in practical applications such as autonomous driving.
We propose an all-weather MMIF model to achieve effective multi-tasking in this context.
Experimental results in both real-world and synthetic scenes show that the proposed algorithm excels in detail recovery and multi-modality feature extraction.
arXiv Detail & Related papers (2024-02-03T09:02:46Z) - Learning Real-World Image De-Weathering with Imperfect Supervision [57.748585821252824]
Existing real-world de-weathering datasets often exhibit inconsistent illumination, position, and textures between the ground-truth images and the input degraded images.
We develop a Consistent Label Constructor (CLC) to generate a pseudo-label as consistent as possible with the input degraded image.
We combine the original imperfect labels and pseudo-labels to jointly supervise the de-weathering model by the proposed Information Allocation Strategy.
arXiv Detail & Related papers (2023-10-23T14:02:57Z) - Exploring the Application of Large-scale Pre-trained Models on Adverse Weather Removal [97.53040662243768]
We propose a CLIP embedding module to make the network handle different weather conditions adaptively.
This module integrates the sample specific weather prior extracted by CLIP image encoder together with the distribution specific information learned by a set of parameters.
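The blurb above describes fusing a sample-specific weather prior (from a CLIP image encoder) with distribution-specific learned parameters. The paper's module design is not detailed here, so the following is only a hypothetical numpy sketch of one plausible fusion scheme: a softmax-weighted mix over a learned prototype bank, added residually to the per-sample embedding. The function name, the `param_bank` prototype interpretation, and the residual combination are all assumptions for illustration.

```python
import numpy as np

def fuse_weather_prior(clip_embed, param_bank):
    """Hypothetical fusion of a per-sample CLIP image embedding with
    a bank of learned, distribution-level weather prototypes.

    clip_embed: (d,) sample-specific embedding (from a CLIP encoder).
    param_bank: (k, d) learned prototypes, one per weather mode.
    Returns a (d,) conditioning vector.
    """
    sims = param_bank @ clip_embed        # (k,) similarity to each prototype
    w = np.exp(sims - sims.max())
    w /= w.sum()                          # softmax weights over prototypes
    return clip_embed + w @ param_bank    # residual combination
```

The softmax routing lets the sample embedding select which learned weather modes to blend in, which is one simple way to realize "sample-specific prior + distribution-specific parameters".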
arXiv Detail & Related papers (2023-06-15T10:06:13Z) - Relightify: Relightable 3D Faces from a Single Image via Diffusion Models [86.3927548091627]
We present the first approach to use diffusion models as a prior for highly accurate 3D facial BRDF reconstruction from a single image.
In contrast to existing methods, we directly acquire the observed texture from the input image, resulting in more faithful and consistent estimation.
arXiv Detail & Related papers (2023-05-10T11:57:49Z) - Aerial Diffusion: Text Guided Ground-to-Aerial View Translation from a Single Image using Diffusion Models [72.76182801289497]
We present a novel method, Aerial Diffusion, for generating aerial views from a single ground-view image using text guidance.
We address two main challenges arising from the domain gap between the ground view and the aerial view.
Aerial Diffusion is the first approach that performs ground-to-aerial translation in an unsupervised manner.
arXiv Detail & Related papers (2023-03-15T22:26:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.