Degradation-Aware All-in-One Image Restoration via Latent Prior Encoding
- URL: http://arxiv.org/abs/2509.17792v2
- Date: Fri, 26 Sep 2025 13:01:23 GMT
- Title: Degradation-Aware All-in-One Image Restoration via Latent Prior Encoding
- Authors: S M A Sharif, Abdur Rehman, Fayaz Ali Dharejo, Radu Timofte, Rizwan Ali Naqvi
- Abstract summary: Real-world images often suffer from spatially diverse degradations such as haze, rain, snow, and low light. Existing all-in-one restoration approaches either depend on external text prompts or embed hand-crafted architectural priors. We propose to reframe AIR as learned latent prior inference, where degradation-aware representations are automatically inferred from the input.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Real-world images often suffer from spatially diverse degradations such as haze, rain, snow, and low-light, significantly impacting visual quality and downstream vision tasks. Existing all-in-one restoration (AIR) approaches either depend on external text prompts or embed hand-crafted architectural priors (e.g., frequency heuristics); both impose discrete, brittle assumptions that weaken generalization to unseen or mixed degradations. To address this limitation, we propose to reframe AIR as learned latent prior inference, where degradation-aware representations are automatically inferred from the input without explicit task cues. Based on latent priors, we formulate AIR as a structured reasoning paradigm: (1) which features to route (adaptive feature selection), (2) where to restore (spatial localization), and (3) what to restore (degradation semantics). We design a lightweight decoding module that efficiently leverages these latent encoded cues for spatially-adaptive restoration. Extensive experiments across six common degradation tasks, five compound settings, and previously unseen degradations demonstrate that our method outperforms state-of-the-art (SOTA) approaches, achieving an average PSNR improvement of 1.68 dB while being three times more efficient.
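The abstract's three routing questions (which features to route, where to restore, what to restore) can be illustrated with a toy soft-gating scheme: a latent degradation code yields non-negative gates that mix candidate restoration branches without any explicit task label. The sketch below is a minimal NumPy illustration under assumed names (`latent_prior_routing` and `route_weights` are hypothetical), not the authors' implementation, which uses learned encoders and a lightweight decoding module.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def latent_prior_routing(branch_features, latent_code, route_weights):
    """Mix per-degradation branch outputs with gates inferred from a latent code.

    branch_features: list of arrays, one candidate restoration per branch
    latent_code:     1-D degradation-aware code inferred from the input
    route_weights:   (num_branches, code_dim) projection producing gate logits
    """
    gates = softmax(route_weights @ latent_code)  # shape: (num_branches,)
    # Spatially the gates are global here; the paper's decoder is
    # spatially adaptive, which a per-pixel gate map would emulate.
    restored = sum(g * f for g, f in zip(gates, branch_features))
    return restored, gates

# Toy usage: three branches, an 8-dim latent code, a 4x4 "feature map".
rng = np.random.default_rng(0)
feats = [rng.standard_normal((4, 4)) for _ in range(3)]
z = rng.standard_normal(8)
W = rng.standard_normal((3, 8))
out, gates = latent_prior_routing(feats, z, W)
```

Because the gates form a convex combination, the restored output stays in the span of the branch outputs, and no discrete task cue (text prompt or task ID) is ever consulted.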
Related papers
- ClearAIR: A Human-Visual-Perception-Inspired All-in-One Image Restoration
All-in-One Image Restoration (AiOIR) has advanced significantly, offering promising solutions for complex real-world degradations. We propose ClearAIR, a novel AiOIR framework inspired by Human Visual Perception (HVP) and designed with a hierarchical, coarse-to-fine restoration strategy. Experimental results show that ClearAIR achieves superior performance across diverse synthetic and real-world datasets.
arXiv Detail & Related papers (2026-01-06T06:55:08Z) - Manifold-aware Representation Learning for Degradation-agnostic Image Restoration
Image Restoration (IR) aims to recover high-quality images from degraded inputs affected by various corruptions such as noise, blur, haze, rain, and low-light conditions. We present MIRAGE, a unified framework for all-in-one IR that explicitly decomposes the input feature space into three semantically aligned parallel branches. This modular decomposition significantly improves generalization and efficiency across diverse degradations.
arXiv Detail & Related papers (2025-05-24T12:52:10Z) - RestoreVAR: Visual Autoregressive Generation for All-in-One Image Restoration
Latent diffusion models (LDMs) have improved the perceptual quality of All-in-One image Restoration (AiOR) methods. However, LDMs suffer from slow inference due to their iterative denoising process, rendering them impractical for time-sensitive applications. Visual autoregressive modeling (VAR) performs scale-space autoregression and achieves performance comparable to that of state-of-the-art diffusion transformers.
arXiv Detail & Related papers (2025-05-23T15:52:26Z) - PromptHSI: Universal Hyperspectral Image Restoration with Vision-Language Modulated Frequency Adaptation
We propose PromptHSI, the first universal AiO HSI restoration framework. Our approach decomposes text prompts into intensity and bias controllers that effectively guide the restoration process. Our architecture excels at both fine-grained recovery and global information restoration across diverse degradation scenarios.
arXiv Detail & Related papers (2024-11-24T17:08:58Z) - All-in-one Weather-degraded Image Restoration via Adaptive Degradation-aware Self-prompting Model
Existing approaches for all-in-one weather-degraded image restoration suffer from inefficiencies in leveraging degradation-aware priors.
We develop an adaptive degradation-aware self-prompting model (ADSM) for all-in-one weather-degraded image restoration.
arXiv Detail & Related papers (2024-11-12T00:07:16Z) - Diff-Restorer: Unleashing Visual Prompts for Diffusion-based Universal Image Restoration
We propose Diff-Restorer, a universal image restoration method based on the diffusion model.
We utilize the pre-trained visual language model to extract visual prompts from degraded images.
We also design a Degradation-aware Decoder to perform structural correction and convert the latent code to the pixel domain.
arXiv Detail & Related papers (2024-07-04T05:01:10Z) - Cross-Consistent Deep Unfolding Network for Adaptive All-In-One Video Restoration
We propose a Cross-consistent Deep Unfolding Network (CDUN) for All-In-One VR.
By orchestrating two cascading procedures, CDUN achieves adaptive processing for diverse degradations.
In addition, we introduce a window-based inter-frame fusion strategy to utilize information from more adjacent frames.
arXiv Detail & Related papers (2023-09-04T14:18:00Z) - Spatially-Adaptive Image Restoration using Distortion-Guided Networks
We present a learning-based solution for restoring images suffering from spatially-varying degradations.
We propose SPAIR, a network design that harnesses distortion-localization information and dynamically adjusts to difficult regions in the image.
arXiv Detail & Related papers (2021-08-19T11:02:25Z) - Implicit Subspace Prior Learning for Dual-Blind Face Restoration
A novel implicit subspace prior learning (ISPL) framework is proposed as a generic solution to dual-blind face restoration.
Experimental results demonstrate significant perception-distortion improvement of ISPL against existing state-of-the-art methods.
arXiv Detail & Related papers (2020-10-12T08:04:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.