When Color-Space Decoupling Meets Diffusion for Adverse-Weather Image Restoration
- URL: http://arxiv.org/abs/2509.17024v1
- Date: Sun, 21 Sep 2025 10:39:06 GMT
- Title: When Color-Space Decoupling Meets Diffusion for Adverse-Weather Image Restoration
- Authors: Wenxuan Fang, Jili Fan, Chao Wang, Xiantao Hu, Jiangwei Weng, Ying Tai, Jian Yang, Jun Li
- Abstract summary: We present the Lumina-Chroma Decomposition Network (LCDN) and the Lumina-Guided Diffusion Model (LGDM). LCDN processes degraded images in the YCbCr color space, separately handling degradation-related luminance and degradation-invariant chrominance components. LGDM incorporates a Dynamic Time Step Loss to optimize the denoising network, ensuring a balanced recovery of both low- and high-frequency features in the image.
- Score: 31.345996524182127
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adverse Weather Image Restoration (AWIR) is a highly challenging task due to the unpredictable and dynamic nature of weather-related degradations. Traditional task-specific methods often fail to generalize to unseen or complex degradation types, while recent prompt-learning approaches depend heavily on the degradation-estimation capabilities of vision-language models, resulting in inconsistent restorations. In this paper, we propose LCDiff, a novel framework comprising two key components: the Lumina-Chroma Decomposition Network (LCDN) and the Lumina-Guided Diffusion Model (LGDM). LCDN processes degraded images in the YCbCr color space, separately handling degradation-related luminance and degradation-invariant chrominance components. This decomposition effectively mitigates weather-induced degradation while preserving color fidelity. To further enhance restoration quality, LGDM leverages degradation-related luminance information as a guiding condition, eliminating the need for explicit degradation prompts. Additionally, LGDM incorporates a Dynamic Time Step Loss to optimize the denoising network, ensuring a balanced recovery of both low- and high-frequency features in the image. Finally, we present DriveWeather, a comprehensive all-weather driving dataset designed to enable robust evaluation. Extensive experiments demonstrate that our approach surpasses state-of-the-art methods, setting a new benchmark in AWIR. The dataset and code are available at: https://github.com/fiwy0527/LCDiff.
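The color-space decoupling at the core of LCDN is concrete enough to sketch. Below is a minimal illustration of the idea, splitting an image into a degradation-related luminance channel and degradation-invariant chrominance channels and recombining them after restoration; it uses the standard ITU-R BT.601 full-range conversion, and the function names are ours, not from the authors' repository.

```python
# Minimal sketch of luminance/chrominance decoupling (NumPy, BT.601 full range).
# Illustrative only; not the LCDiff implementation.
import numpy as np

def split_luma_chroma(rgb: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """rgb: (H, W, 3) in [0, 1]. Returns luminance Y and chrominance (Cb, Cr)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b            # degradation-related
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 0.5  # degradation-invariant
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 0.5  # degradation-invariant
    return y, np.stack([cb, cr], axis=-1)

def merge_luma_chroma(y: np.ndarray, cbcr: np.ndarray) -> np.ndarray:
    """Recombine a restored luminance map with the untouched chrominance."""
    cb, cr = cbcr[..., 0] - 0.5, cbcr[..., 1] - 0.5
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

# Restoration then only has to operate on y; carrying cbcr through unchanged
# is what preserves color fidelity under weather-induced degradation.
```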
Related papers
- All-in-One Video Restoration under Smoothly Evolving Unknown Weather Degradations [102.94052335735326]
All-in-one image restoration aims to recover clean images from diverse unknown degradations using a single model. Existing approaches primarily focus on frame-wise degradation variation, overlooking the temporal continuity that naturally exists in real-world degradation processes. We introduce the Smoothly Evolving Unknown Degradations (SEUD) scenario, where both the active degradation set and the degradation intensity change continuously over time.
arXiv Detail & Related papers (2026-01-02T02:20:57Z) - Enhancing Infrared Vision: Progressive Prompt Fusion Network and Benchmark [58.61079960074608]
Existing infrared image enhancement methods focus on tackling individual degradations. All-in-one enhancement methods, commonly applied to RGB sensors, often demonstrate limited effectiveness.
arXiv Detail & Related papers (2025-10-10T12:55:54Z) - WeatherCycle: Unpaired Multi-Weather Restoration via Color Space Decoupled Cycle Learning [30.62082910458533]
Unsupervised image restoration under multi-weather conditions remains a fundamental yet underexplored challenge. We propose WeatherCycle, a unified framework that reformulates weather restoration as a bidirectional degradation-content translation cycle. Our method achieves state-of-the-art performance among unsupervised approaches, with strong generalization to complex weather degradations.
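The bidirectional degradation-content translation cycle above builds on the generic unpaired cycle-consistency objective. The sketch below shows only that generic loss; it is our simplification, assuming a clean-to-degraded generator G and a degraded-to-clean restorer F, not WeatherCycle's exact objective.

```python
# Generic cycle-consistency loss for unpaired restoration (PyTorch).
# F: degraded -> clean, G: clean -> degraded; both are user-supplied modules.
import torch
import torch.nn.functional as Fn

def cycle_consistency_loss(F, G, degraded: torch.Tensor, clean: torch.Tensor):
    loss_deg = Fn.l1_loss(G(F(degraded)), degraded)  # degraded -> clean -> degraded
    loss_cln = Fn.l1_loss(F(G(clean)), clean)        # clean -> degraded -> clean
    return loss_deg + loss_cln
```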
arXiv Detail & Related papers (2025-09-27T06:44:27Z) - CIVQLLIE: Causal Intervention with Vector Quantization for Low-Light Image Enhancement [5.948286668586509]
Current low-light image enhancement methods face significant challenges. We propose CIVQLLIE, a novel framework that leverages the power of discrete representation learning through causal reasoning.
arXiv Detail & Related papers (2025-08-05T11:36:39Z) - EvRWKV: A Continuous Interactive RWKV Framework for Effective Event-Guided Low-Light Image Enhancement [10.556338127441167]
Event cameras offer high dynamic range and microsecond temporal resolution by asynchronously capturing brightness changes. We propose EvRWKV, a novel framework that enables continuous cross-modal interaction through dual-domain processing. We show that EvRWKV achieves state-of-the-art performance, effectively enhancing image quality by suppressing noise, restoring structural details, and improving visual clarity in challenging low-light conditions.
arXiv Detail & Related papers (2025-07-01T19:05:04Z) - DEAL: Data-Efficient Adversarial Learning for High-Quality Infrared Imaging [47.22313650077835]
We introduce a thermal degradation simulation integrated into the training process via mini-max optimization. The simulation is dynamic, maximizing the objective function so as to capture a broad spectrum of degraded data distributions. This approach enables training with limited data, thereby improving model performance.
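A hedged sketch of that mini-max loop follows, with a toy differentiable degradation standing in for the paper's thermal simulator; all names and the inner/outer step counts are illustrative assumptions, not DEAL's code.

```python
# Mini-max degradation learning: theta is updated to MAXIMIZE the restoration
# loss (hardest simulation), the restorer to MINIMIZE it. Toy sketch only.
import torch
import torch.nn.functional as F

def degrade(clean: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
    """Toy differentiable degradation: contrast dimming plus additive noise,
    both controlled by the learnable parameters theta = (theta_c, theta_n)."""
    contrast = torch.sigmoid(theta[0])           # in (0, 1): dims the image
    noise_level = 0.1 * torch.sigmoid(theta[1])  # bounded noise magnitude
    return clean * contrast + noise_level * torch.randn_like(clean)

def minimax_step(restorer, clean, theta, opt_model, opt_theta, k_inner=3):
    # Inner maximization: push theta toward the hardest degradation.
    for _ in range(k_inner):
        loss = F.l1_loss(restorer(degrade(clean, theta)), clean)
        opt_theta.zero_grad()
        (-loss).backward()                       # gradient ascent on the loss
        opt_theta.step()
    # Outer minimization: train the restorer on the hardened samples.
    loss = F.l1_loss(restorer(degrade(clean, theta).detach()), clean)
    opt_model.zero_grad()
    loss.backward()
    opt_model.step()
    return loss.item()
```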
arXiv Detail & Related papers (2025-03-02T14:15:44Z) - Rethinking High-speed Image Reconstruction Framework with Spike Camera [48.627095354244204]
Spike cameras generate continuous spike streams to capture high-speed scenes with lower bandwidth and higher dynamic range than traditional RGB cameras. We introduce SpikeCLIP, a novel spike-to-image reconstruction framework that goes beyond traditional training paradigms. Our experiments on real-world low-light datasets demonstrate that SpikeCLIP significantly enhances texture details and the luminance balance of recovered images.
arXiv Detail & Related papers (2025-01-08T13:00:17Z) - Low-Light Video Enhancement via Spatial-Temporal Consistent Decomposition [52.89441679581216]
Low-Light Video Enhancement (LLVE) seeks to restore dynamic or static scenes plagued by severe invisibility and noise. We present an innovative video decomposition strategy that incorporates view-independent and view-dependent components. Our framework consistently outperforms existing methods, establishing new state-of-the-art performance.
arXiv Detail & Related papers (2024-05-24T15:56:40Z) - Joint Conditional Diffusion Model for Image Restoration with Mixed Degradations [29.14467633167042]
We propose a new method for image restoration in adverse weather conditions.
We use a mixed degradation model based on the atmospheric scattering model (sketched below) to guide the whole restoration process.
Experiments on both multi-weather and weather-specific datasets demonstrate the superiority of our method over state-of-the-art competing methods.
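The atmospheric scattering model referenced above is the standard haze-formation equation; a common mixed-weather extension (the paper's exact variant may differ) adds a rain/snow residual term:

```latex
% I = observed image, J = clean scene radiance, A = global atmospheric light,
% t(x) = e^{-\beta d(x)} = transmission at scene depth d(x).
I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr)
% Mixed-weather extension with a rain/snow residual R(x):
I(x) = \bigl(J(x) + R(x)\bigr)\,t(x) + A\bigl(1 - t(x)\bigr)
```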
arXiv Detail & Related papers (2024-04-11T14:07:16Z) - CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images. Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations, and loss of texture and color information. We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
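The quantized priors above refer to a learned codebook lookup; the sketch below shows the generic vector-quantization step under that reading, not CodeEnhance's implementation.

```python
# Generic codebook lookup: snap each latent vector to its nearest entry.
import torch

def quantize(z: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    """z: (N, D) latents; codebook: (K, D) learned entries. Returns (N, D)."""
    dist = torch.cdist(z, codebook)   # (N, K) pairwise Euclidean distances
    idx = dist.argmin(dim=1)          # nearest codebook index per latent
    return codebook[idx]              # quantized priors for refinement
```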
arXiv Detail & Related papers (2024-04-08T07:34:39Z) - Reti-Diff: Illumination Degradation Image Restoration with Retinex-based Latent Diffusion Model [59.08821399652483]
Illumination degradation image restoration (IDIR) techniques aim to improve the visibility of degraded images and mitigate the adverse effects of deteriorated illumination.
Among these algorithms, diffusion model (DM)-based methods have shown promising performance but are often burdened by heavy computational demands and pixel misalignment issues when predicting the image-level distribution.
We propose to leverage DM within a compact latent space to generate concise guidance priors and introduce a novel solution called Reti-Diff for the IDIR task.
Reti-Diff comprises two key components: the Retinex-based latent DM (RLDM) and the Retinex-guided transformer (RGformer).
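The Retinex theory behind RLDM factorizes an image into reflectance and illumination, I = R * L. The sketch below uses the classic max-RGB heuristic as a stand-in; it is our simplification, not the paper's learned decomposition.

```python
# Minimal Retinex factorization I = R * L with a max-RGB illumination prior.
import numpy as np

def retinex_decompose(img: np.ndarray, eps: float = 1e-4):
    """img: (H, W, 3) in [0, 1]. Returns (reflectance R, illumination L)."""
    illumination = img.max(axis=-1, keepdims=True)  # max-RGB prior, (H, W, 1)
    reflectance = img / (illumination + eps)        # R = I / L
    return reflectance, illumination
```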
arXiv Detail & Related papers (2023-11-20T09:55:06Z) - DR2: Diffusion-based Robust Degradation Remover for Blind Face Restoration [66.01846902242355]
Blind face restoration usually synthesizes degraded low-quality data with a pre-defined degradation model for training.
It is expensive and often infeasible to cover every real-world degradation type in the training data.
We propose the Robust Degradation Remover (DR2), which first transforms the degraded image into a coarse but degradation-invariant prediction and then employs an enhancement module to restore it to a high-quality image (see the sketch below).
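The coarse, degradation-invariant state above can be read through the standard DDPM forward process: diffusing the degraded input to a sufficiently large timestep t buries fine degradation patterns in Gaussian noise before the enhancement stage. In the usual notation (the specific timestep schedule is the paper's choice, not shown here):

```latex
% DDPM forward process used to reach a degradation-invariant state:
q(x_t \mid x_0) = \mathcal{N}\!\left(x_t;\ \sqrt{\bar{\alpha}_t}\,x_0,\ (1 - \bar{\alpha}_t)\,\mathbf{I}\right)
```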
arXiv Detail & Related papers (2023-03-13T06:05:18Z) - Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation, relight the low-light image, and refine the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method surpasses the SOTA by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.