ProRes: Exploring Degradation-aware Visual Prompt for Universal Image
Restoration
- URL: http://arxiv.org/abs/2306.13653v1
- Date: Fri, 23 Jun 2023 17:59:47 GMT
- Title: ProRes: Exploring Degradation-aware Visual Prompt for Universal Image
Restoration
- Authors: Jiaqi Ma, Tianheng Cheng, Guoli Wang, Qian Zhang, Xinggang Wang, Lefei
Zhang
- Abstract summary: We present Degradation-aware Visual Prompts, which encode various types of image degradation into unified visual prompts.
These degradation-aware prompts provide control over image processing and allow weighted combinations for customized image restoration.
We then leverage degradation-aware visual prompts to establish a controllable universal model for image restoration.
- Score: 46.87227160492818
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image restoration aims to reconstruct degraded images, e.g., denoising or
deblurring. Existing works focus on designing task-specific methods, while
attempts at universal methods remain inadequate. However, simply unifying multiple
tasks into one universal architecture suffers from uncontrollable and undesired
predictions. To address those issues, we explore prompt learning in universal
architectures for image restoration tasks. In this paper, we present
Degradation-aware Visual Prompts, which encode various types of image
degradation, e.g., noise and blur, into unified visual prompts. These
degradation-aware prompts provide control over image processing and allow
weighted combinations for customized image restoration. We then leverage
degradation-aware visual prompts to establish a controllable and universal
model for image restoration, called ProRes, which is applicable to an extensive
range of image restoration tasks. ProRes leverages the vanilla Vision
Transformer (ViT) without any task-specific designs. Furthermore, the
pre-trained ProRes can easily adapt to new tasks through efficient prompt
tuning with only a few images. Without bells and whistles, ProRes achieves
competitive performance compared to task-specific methods, and experiments
demonstrate its ability for controllable restoration and adaptation to new
tasks. The code and models will be released at
https://github.com/leonmakise/ProRes.
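The abstract describes the mechanism only at a high level, so the following is a minimal PyTorch sketch of one plausible reading: a learnable prompt per degradation type, added to the ViT patch tokens, with soft weights enabling the customized combinations the abstract mentions. The class names (PromptBank, ProResSketch), all sizes, and the additive injection point are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class PromptBank(nn.Module):
    """One learnable visual prompt per degradation type (noise, blur, rain, ...)."""
    def __init__(self, num_tasks: int, num_patches: int, dim: int):
        super().__init__()
        self.prompts = nn.Parameter(torch.zeros(num_tasks, num_patches, dim))

    def forward(self, weights: torch.Tensor) -> torch.Tensor:
        # weights: (num_tasks,). A one-hot vector selects one degradation prompt;
        # soft weights blend prompts for customized, controllable restoration.
        return torch.einsum("t,tnd->nd", weights, self.prompts)

class ProResSketch(nn.Module):
    def __init__(self, img_size=224, patch=16, dim=256, depth=4, num_tasks=4):
        super().__init__()
        self.hw = img_size // patch
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)  # vanilla ViT-style body
        self.head = nn.ConvTranspose2d(dim, 3, kernel_size=patch, stride=patch)
        self.prompt_bank = PromptBank(num_tasks, self.hw ** 2, dim)

    def forward(self, x: torch.Tensor, task_weights: torch.Tensor) -> torch.Tensor:
        tokens = self.embed(x).flatten(2).transpose(1, 2)   # (B, N, D) patch tokens
        tokens = tokens + self.prompt_bank(task_weights)    # inject degradation cue
        tokens = self.encoder(tokens)
        grid = tokens.transpose(1, 2).reshape(x.size(0), -1, self.hw, self.hw)
        return self.head(grid)                              # restored image

model = ProResSketch()
noisy = torch.randn(1, 3, 224, 224)
out = model(noisy, torch.tensor([1.0, 0.0, 0.0, 0.0]))   # pure denoising prompt
out = model(noisy, torch.tensor([0.7, 0.3, 0.0, 0.0]))   # weighted noise+blur mix
```

Under this reading, the abstract's prompt tuning for a new task would amount to freezing every parameter above and optimizing a single extra (num_patches, dim) prompt on a few paired images.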
Related papers
- UIR-LoRA: Achieving Universal Image Restoration through Multiple Low-Rank Adaptation [50.27688690379488]
Existing unified methods treat multi-degradation image restoration as a multi-task learning problem.
We propose a universal image restoration framework based on multiple low-rank adapters (LoRA) from multi-domain transfer learning.
Our framework leverages the pre-trained generative model as the shared component for multi-degradation restoration and transfers it to specific degradation image restoration tasks.
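The summary names the mechanism but not the details; below is a minimal sketch of one plausible reading, assuming one low-rank adapter per degradation domain on top of a frozen shared linear layer. The class name MultiLoRALinear, the integer domain routing, and the hyper-parameters are illustrative assumptions, not the UIR-LoRA code.

```python
import torch
import torch.nn as nn

class MultiLoRALinear(nn.Module):
    def __init__(self, dim: int, rank: int = 4, num_domains: int = 3, alpha: float = 8.0):
        super().__init__()
        self.base = nn.Linear(dim, dim)      # stands in for a pre-trained weight
        for p in self.base.parameters():
            p.requires_grad_(False)          # the shared component stays frozen
        # Per-domain low-rank factors: delta_W_d = up_d @ down_d, with rank << dim.
        self.down = nn.Parameter(torch.randn(num_domains, rank, dim) * 0.01)
        self.up = nn.Parameter(torch.zeros(num_domains, dim, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor, domain: int) -> torch.Tensor:
        # y = W0 x + scale * B_d (A_d x); only the small adapters are trained.
        delta = (x @ self.down[domain].t()) @ self.up[domain].t()
        return self.base(x) + self.scale * delta
```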
arXiv Detail & Related papers (2024-09-30T11:16:56Z)
- Restorer: Removing Multi-Degradation with All-Axis Attention and Prompt Guidance [12.066756224383827]
Restorer is a novel Transformer-based all-in-one image restoration model.
It can handle composite degradation in real-world scenarios without requiring additional training.
It is efficient during inference, suggesting its potential for real-world applications.
arXiv Detail & Related papers (2024-06-18T13:18:32Z)
- InstructIR: High-Quality Image Restoration Following Human Instructions [61.1546287323136]
We present the first approach that uses human-written instructions to guide the image restoration model.
Our method, InstructIR, achieves state-of-the-art results on several restoration tasks.
arXiv Detail & Related papers (2024-01-29T18:53:33Z)
- Improving Image Restoration through Removing Degradations in Textual Representations [60.79045963573341]
We introduce a new perspective for improving image restoration by removing degradation in the textual representations of a degraded image.
To enable this cross-modal assistance, we propose to map degraded images into textual representations for removing the degradations.
In particular, we embed an image-to-text mapper and a text restoration module into CLIP-equipped text-to-image models to generate the guidance.
arXiv Detail & Related papers (2023-12-28T19:18:17Z)
- SPIRE: Semantic Prompt-Driven Image Restoration [66.26165625929747]
We develop SPIRE, a Semantic and restoration Prompt-driven Image Restoration framework.
Our approach is the first framework that supports fine-level instruction through language-based quantitative specification of the restoration strength.
Our experiments demonstrate the superior restoration performance of SPIRE compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-12-18T17:02:30Z)
- Prompt-In-Prompt Learning for Universal Image Restoration [38.81186629753392]
We propose a novel Prompt-In-Prompt learning scheme for universal image restoration, named PIP.
We present two novel prompts, a degradation-aware prompt to encode high-level degradation knowledge and a basic restoration prompt to provide essential low-level information.
By doing so, the resultant PIP works as a plug-and-play module to enhance existing restoration models for universal image restoration.
arXiv Detail & Related papers (2023-12-08T13:36:01Z)
- PromptIR: Prompting for All-in-One Blind Image Restoration [64.02374293256001]
We present a prompt-based learning approach, PromptIR, for All-In-One image restoration.
Our method uses prompts to encode degradation-specific information, which is then used to dynamically guide the restoration network.
PromptIR offers a generic and efficient plugin module built on a few lightweight prompts.
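A hedged sketch of the prompt-block idea summarized above: learnable prompt components are blended by weights predicted from the input features, and the blended prompt conditions the restoration features. The module name, fusion by addition, and all sizes are illustrative assumptions, not the released PromptIR code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptBlock(nn.Module):
    def __init__(self, channels: int, num_components: int = 5):
        super().__init__()
        # A few lightweight prompt components shared across degradation types.
        self.components = nn.Parameter(
            torch.randn(num_components, channels, 16, 16) * 0.02)
        self.to_weights = nn.Linear(channels, num_components)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W). Predict per-image mixing weights from a global
        # descriptor, blend the components, resize, and fuse with the features.
        b, c, h, w = feat.shape
        desc = feat.mean(dim=(2, 3))                      # (B, C) global descriptor
        weights = self.to_weights(desc).softmax(dim=-1)   # (B, K) mixing weights
        prompt = torch.einsum("bk,kchw->bchw", weights, self.components)
        prompt = F.interpolate(prompt, size=(h, w), mode="bilinear")
        return feat + prompt   # simplest fusion; the paper's interaction is richer
```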
arXiv Detail & Related papers (2023-06-22T17:59:52Z)
- Restore Anything Pipeline: Segment Anything Meets Image Restoration [27.93942383342829]
We introduce the Restore Anything Pipeline (RAP), a novel interactive and per-object level image restoration approach.
RAP incorporates image segmentation through the recent Segment Anything Model (SAM) into a controllable image restoration model.
RAP produces superior visual results compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T14:59:03Z)