Perceive-IR: Learning to Perceive Degradation Better for All-in-One Image Restoration
- URL: http://arxiv.org/abs/2408.15994v2
- Date: Wed, 07 May 2025 19:54:10 GMT
- Title: Perceive-IR: Learning to Perceive Degradation Better for All-in-One Image Restoration
- Authors: Xu Zhang, Jiaqi Ma, Guoli Wang, Qian Zhang, Huan Zhang, Lefei Zhang
- Abstract summary: Perceive-IR is a novel backbone-agnostic All-in-One image restoration framework for fine-grained quality control. Its modular structure allows core components to function independently of specific backbones, enabling seamless integration into advanced restoration models.
- Score: 33.163161549726446
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing All-in-One image restoration methods often fail to perceive degradation types and severity levels simultaneously, overlooking the importance of fine-grained quality perception. Moreover, these methods often rely on highly customized backbones, which hinders their adaptability and integration into more advanced restoration networks. To address these limitations, we propose Perceive-IR, a novel backbone-agnostic All-in-One image restoration framework designed for fine-grained quality control across various degradation types and severity levels. Its modular structure allows core components to function independently of specific backbones, enabling seamless integration into advanced restoration models without significant modifications. Specifically, Perceive-IR operates in two key stages: 1) a multi-level quality-driven prompt learning stage, where a fine-grained quality perceiver is trained to discern three-tier quality levels by optimizing the alignment between prompts and images within the CLIP perception space. This stage ensures a nuanced understanding of image quality, laying the groundwork for subsequent restoration; 2) a restoration stage, where the quality perceiver is integrated with a difficulty-adaptive perceptual loss, forming a quality-aware learning strategy. This strategy not only dynamically differentiates sample learning difficulty but also achieves fine-grained quality control by driving the restored image toward the ground truth while pulling it away from both low- and medium-quality samples.
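The quality-aware strategy in the second stage, pulling the restored image toward the ground truth while pushing it away from low- and medium-quality samples, resembles a triplet-style objective in a perceptual feature space. The following is a minimal illustrative sketch, not the paper's actual loss: the function name, cosine-distance choice, and margin value are all assumptions.

```python
import numpy as np

def cosine_dist(a, b):
    # 1 - cosine similarity between two feature vectors
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def quality_aware_loss(restored, gt, low_q, med_q, margin=0.2):
    """Triplet-style objective in a perceptual embedding space:
    pull the restored image's features toward the ground truth (positive)
    while pushing them away from low- and medium-quality negatives."""
    pos = cosine_dist(restored, gt)
    neg = min(cosine_dist(restored, low_q), cosine_dist(restored, med_q))
    return max(pos - neg + margin, 0.0)
```

Here the inputs stand in for embeddings (e.g. from a CLIP-like perceiver); the loss is zero once the restored features are closer to the ground truth than to any negative by at least the margin.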
Related papers
- Bridging Degradation Discrimination and Generation for Universal Image Restoration [18.085443590549087]
This paper presents a novel approach, Bridging Degradation Discrimination and Generation (BDG). We propose the Multi-Angle and multi-Scale Gray Level Co-occurrence Matrix (MAS-GLCM) and demonstrate its effectiveness in performing fine-grained discrimination of degradation types and levels. The objective is to preserve the diffusion model's capability of restoring rich textures while simultaneously integrating the discriminative information from the MAS-GLCM into the restoration process.
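The abstract does not specify how MAS-GLCM is constructed, but a standard gray-level co-occurrence matrix computed over several displacement angles gives the flavor. The sketch below is a hypothetical single-scale illustration; the function names, offsets, and quantization level count are assumptions.

```python
import numpy as np

def glcm(img, dx, dy, levels=8):
    """Gray-level co-occurrence matrix for one displacement (dx, dy):
    counts how often gray level i co-occurs with level j at that offset."""
    h, w = img.shape
    m = np.zeros((levels, levels), dtype=np.int64)
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m

def multi_angle_glcm(img, levels=8):
    # Aggregate co-occurrence statistics over 0°, 90°, 45°, and 135° offsets.
    offsets = [(1, 0), (0, 1), (1, 1), (1, -1)]
    return [glcm(img, dx, dy, levels) for dx, dy in offsets]
```

Texture statistics derived from such matrices (contrast, homogeneity, etc.) are classical degradation-sensitive features, which is presumably why they support fine-grained discrimination here.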
arXiv Detail & Related papers (2026-01-31T07:46:28Z) - Learning to Restore Multi-Degraded Images via Ingredient Decoupling and Task-Aware Path Adaptation [51.10017611491389]
Real-world images often suffer from multiple coexisting degradations, such as rain, noise, and haze, in a single image. We propose an adaptive multi-degradation image restoration network that reconstructs images by leveraging decoupled representations of degradation ingredients. The resulting tightly integrated architecture, termed IMDNet, is extensively validated through experiments.
arXiv Detail & Related papers (2025-11-07T01:50:36Z) - Beyond Degradation Redundancy: Contrastive Prompt Learning for All-in-One Image Restoration [109.38288333994407]
Contrastive Prompt Learning (CPL) is a novel framework that fundamentally enhances prompt-task alignment.
Our framework establishes new state-of-the-art performance while maintaining parameter efficiency, offering a principled solution for unified image restoration.
arXiv Detail & Related papers (2025-04-14T08:24:57Z) - IQPFR: An Image Quality Prior for Blind Face Restoration and Beyond [56.99331967165238]
Blind Face Restoration (BFR) addresses the challenge of reconstructing degraded low-quality (LQ) facial images into high-quality (HQ) outputs. We propose a novel framework that incorporates an Image Quality Prior (IQP) derived from No-Reference Image Quality Assessment (NR-IQA) models. Our method outperforms state-of-the-art techniques across multiple benchmarks.
arXiv Detail & Related papers (2025-03-12T11:39:51Z) - UniRestorer: Universal Image Restoration via Adaptively Estimating Image Degradation at Proper Granularity [79.90839080916913]
We present our UniRestorer with improved restoration performance.
Specifically, we perform hierarchical clustering on degradation space, and train a multi-granularity mixture-of-experts (MoE) restoration model.
In contrast to existing degradation-agnostic and -aware methods, UniRestorer can leverage degradation estimation to benefit degradation specific restoration.
arXiv Detail & Related papers (2024-12-28T14:09:08Z) - Adaptive Blind All-in-One Image Restoration [15.726917603679716]
Blind all-in-one image restoration models aim to recover a high-quality image from an input degraded with unknown distortions. We introduce ABAIR, a simple yet effective adaptive blind all-in-one restoration model that handles multiple degradations and generalizes well to unseen distortions. Our model not only surpasses state-of-the-art performance on five- and three-task IR setups but also demonstrates superior generalization to unseen degradations and composite distortions.
arXiv Detail & Related papers (2024-11-27T14:58:08Z) - PromptHSI: Universal Hyperspectral Image Restoration Framework for Composite Degradation [28.10512516485237]
We propose PromptHSI, the first universal AiO HSI restoration framework.
We decompose text prompts into intensity and bias controllers to guide the restoration process.
Our unified architecture excels at both fine-grained recovery and global information restoration tasks.
arXiv Detail & Related papers (2024-11-24T17:08:58Z) - Overcoming False Illusions in Real-World Face Restoration with Multi-Modal Guided Diffusion Model [55.46927355649013]
We introduce a novel Multi-modal Guided Real-World Face Restoration technique.
MGFR can mitigate the generation of false facial attributes and identities.
We present the Reface-HQ dataset, comprising over 23,000 high-resolution facial images across 5,000 identities.
arXiv Detail & Related papers (2024-10-05T13:46:56Z) - Diff-Restorer: Unleashing Visual Prompts for Diffusion-based Universal Image Restoration [19.87693298262894]
We propose Diff-Restorer, a universal image restoration method based on the diffusion model.
We utilize the pre-trained visual language model to extract visual prompts from degraded images.
We also design a Degradation-aware Decoder to perform structural correction and convert the latent code to the pixel domain.
arXiv Detail & Related papers (2024-07-04T05:01:10Z) - SPIRE: Semantic Prompt-Driven Image Restoration [66.26165625929747]
We develop SPIRE, a Semantic and restoration Prompt-driven Image Restoration framework.
Our approach is the first framework that supports fine-level instruction through language-based quantitative specification of the restoration strength.
Our experiments demonstrate the superior restoration performance of SPIRE compared to the state of the arts.
arXiv Detail & Related papers (2023-12-18T17:02:30Z) - Multimodal Prompt Perceiver: Empower Adaptiveness, Generalizability and Fidelity for All-in-One Image Restoration [58.11518043688793]
MPerceiver is a novel approach to enhance adaptiveness, generalizability and fidelity for all-in-one image restoration.
MPerceiver is trained on 9 tasks for all-in-one IR and outperforms state-of-the-art task-specific methods across most tasks.
arXiv Detail & Related papers (2023-12-05T17:47:11Z) - All-in-one Multi-degradation Image Restoration Network via Hierarchical Degradation Representation [47.00239809958627]
We propose a novel All-in-one Multi-degradation Image Restoration Network (AMIRNet)
AMIRNet learns a degradation representation for unknown degraded images by progressively constructing a tree structure through clustering.
This tree-structured representation explicitly reflects the consistency and discrepancy of various distortions, providing a specific clue for image restoration.
arXiv Detail & Related papers (2023-08-06T04:51:41Z) - PromptIR: Prompting for All-in-One Blind Image Restoration [64.02374293256001]
We present a prompt-based learning approach, PromptIR, for All-In-One image restoration.
Our method uses prompts to encode degradation-specific information, which is then used to dynamically guide the restoration network.
PromptIR offers a generic and efficient plugin module with few lightweight prompts.
arXiv Detail & Related papers (2023-06-22T17:59:52Z) - Re-IQA: Unsupervised Learning for Image Quality Assessment in the Wild [38.197794061203055]
We propose a Mixture of Experts approach to train two separate encoders to learn high-level content and low-level image quality features in an unsupervised setting.
We deploy the complementary low and high-level image representations obtained from the Re-IQA framework to train a linear regression model.
Our method achieves state-of-the-art performance on multiple large-scale image quality assessment databases.
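The Re-IQA pipeline described above, concatenating complementary content and quality representations and fitting a linear regression head, can be sketched as follows. This is an illustrative assumption, not the authors' code: the random features stand in for the two encoders' outputs, and the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
content_feats = rng.normal(size=(n, 16))  # stand-in for high-level content encoder output
quality_feats = rng.normal(size=(n, 16))  # stand-in for low-level quality encoder output
mos = rng.normal(size=n)                  # mean opinion scores (quality labels)

# Concatenate the complementary representations, then fit a linear head
# by least squares to predict perceptual quality.
X = np.hstack([content_feats, quality_feats])
w, *_ = np.linalg.lstsq(X, mos, rcond=None)
pred = X @ w
```

In practice the encoders are trained contrastively in an unsupervised setting first; only the final linear regressor sees quality labels.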
arXiv Detail & Related papers (2023-04-02T05:06:51Z) - Robust Single Image Dehazing Based on Consistent and Contrast-Assisted Reconstruction [95.5735805072852]
We propose a novel density-variational learning framework to improve the robustness of the image dehazing model.
Specifically, the dehazing network is optimized under the consistency-regularized framework.
Our method significantly surpasses the state-of-the-art approaches.
arXiv Detail & Related papers (2022-03-29T08:11:04Z) - Spatially-Adaptive Image Restoration using Distortion-Guided Networks [51.89245800461537]
We present a learning-based solution for restoring images suffering from spatially-varying degradations.
We propose SPAIR, a network design that harnesses distortion-localization information and dynamically adjusts to difficult regions in the image.
arXiv Detail & Related papers (2021-08-19T11:02:25Z) - Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative network (UEGAN)
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z) - Implicit Subspace Prior Learning for Dual-Blind Face Restoration [66.67059961379923]
A novel implicit subspace prior learning (ISPL) framework is proposed as a generic solution to dual-blind face restoration.
Experimental results demonstrate significant perception-distortion improvement of ISPL against existing state-of-the-art methods.
arXiv Detail & Related papers (2020-10-12T08:04:24Z) - Blind Image Restoration without Prior Knowledge [0.22940141855172028]
We present the Self-Normalization Side-Chain (SCNC), a novel approach to blind universal restoration in which no prior knowledge of the degradation is needed.
The SCNC can be added to any existing CNN topology, and is trained along with the rest of the network in an end-to-end manner.
arXiv Detail & Related papers (2020-03-03T19:57:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.