TAPE: Task-Agnostic Prior Embedding for Image Restoration
- URL: http://arxiv.org/abs/2203.06074v1
- Date: Fri, 11 Mar 2022 16:52:47 GMT
- Title: TAPE: Task-Agnostic Prior Embedding for Image Restoration
- Authors: Lin Liu, Lingxi Xie, Xiaopeng Zhang, Shanxin Yuan, Xiangyu Chen,
Wengang Zhou, Houqiang Li, Qi Tian
- Abstract summary: We propose a novel approach that embeds a task-agnostic prior into a transformer.
Our approach, named Task-Agnostic Prior Embedding (TAPE), consists of three stages, namely, task-agnostic pre-training, task-agnostic fine-tuning, and task-specific fine-tuning.
Experiments on various types of degradation validate the effectiveness of TAPE.
- Score: 194.61997784161218
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Learning a generalized prior for natural image restoration is an important
yet challenging task. Early methods mostly involved handcrafted priors
including normalized sparsity, L0 gradients, dark channel priors, etc.
Recently, deep neural networks have been used to learn various image priors but
do not guarantee to generalize. In this paper, we propose a novel approach that
embeds a task-agnostic prior into a transformer. Our approach, named
Task-Agnostic Prior Embedding (TAPE), consists of three stages, namely,
task-agnostic pre-training, task-agnostic fine-tuning, and task-specific
fine-tuning, where the first one embeds prior knowledge about natural images
into the transformer and the latter two extract the knowledge to assist
downstream image restoration. Experiments on various types of degradation
validate the effectiveness of TAPE. The image restoration performance in terms
of PSNR is improved by as much as 1.45 dB and even outperforms task-specific
algorithms. More importantly, TAPE shows the ability of disentangling
generalized image priors from degraded images, which enjoys favorable transfer
ability to unknown downstream tasks.
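The abstract quantifies restoration quality in PSNR (gains of up to 1.45 dB). For reference, PSNR for images with values in [0, peak] can be computed as below; this is a minimal sketch, and the test image here is illustrative rather than from the paper:

```python
import numpy as np

def psnr(clean, restored, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images in [0, peak]."""
    mse = np.mean((clean - restored) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Demo on a synthetic image with additive Gaussian noise.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
noisy = np.clip(img + 0.1 * rng.standard_normal(img.shape), 0.0, 1.0)
print(psnr(img, noisy))
```

Since PSNR is logarithmic in the MSE, a 1.45 dB gain corresponds to reducing the MSE by a factor of about 10^0.145 ≈ 1.4.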
Related papers
- Exploiting Diffusion Prior for Task-driven Image Restoration [47.36902705025445]
Task-driven image restoration (TDIR) has recently emerged to address performance drops in high-level vision tasks caused by low-quality (LQ) inputs. Previous TDIR methods struggle to handle practical scenarios in which images are degraded by multiple complex factors. We propose EDTR, which effectively harnesses the power of a diffusion prior to restore task-relevant details.
arXiv Detail & Related papers (2025-07-30T08:05:49Z) - Perceive-IR: Learning to Perceive Degradation Better for All-in-One Image Restoration [33.163161549726446]
Perceive-IR is an all-in-one image restorer designed to achieve fine-grained quality control.
In the prompt learning stage, we leverage prompt learning to acquire a fine-grained quality perceiver capable of distinguishing three-tier quality levels.
For the restoration stage, a semantic guidance module and compact feature extraction are proposed to further promote the restoration process.
arXiv Detail & Related papers (2024-08-28T17:58:54Z) - Unified-Width Adaptive Dynamic Network for All-In-One Image Restoration [50.81374327480445]
We introduce a novel concept positing that intricate image degradations can be represented as combinations of elementary degradations.
We propose the Unified-Width Adaptive Dynamic Network (U-WADN), consisting of two pivotal components: a Width Adaptive Backbone (WAB) and a Width Selector (WS).
The proposed U-WADN achieves better performance while simultaneously reducing up to 32.3% of FLOPs and providing approximately 15.7% real-time acceleration.
arXiv Detail & Related papers (2024-01-24T04:25:12Z) - DIFFNAT: Improving Diffusion Image Quality Using Natural Image Statistics [39.457325373431836]
We propose a generic "naturalness" preserving loss function, viz., kurtosis concentration (KC) loss.
Our motivation stems from the projected kurtosis concentration property of natural images.
To retain the "naturalness" of the generated images, we enforce reducing the gap between the highest and lowest kurtosis values.
arXiv Detail & Related papers (2023-11-16T10:28:59Z) - LTT-GAN: Looking Through Turbulence by Inverting GANs [86.25869403782957]
We propose the first turbulence mitigation method that makes use of visual priors encapsulated by a well-trained GAN.
Based on these visual priors, we propose learning to preserve the identity of restored images via a periodic contextual distance.
Our method significantly outperforms prior art in both the visual quality and face verification accuracy of restored results.
arXiv Detail & Related papers (2021-12-04T16:42:13Z) - Implicit Subspace Prior Learning for Dual-Blind Face Restoration [66.67059961379923]
A novel implicit subspace prior learning (ISPL) framework is proposed as a generic solution to dual-blind face restoration.
Experimental results demonstrate significant perception-distortion improvement of ISPL against existing state-of-the-art methods.
arXiv Detail & Related papers (2020-10-12T08:04:24Z) - Blind Image Restoration with Flow Based Priors [19.190289348734215]
In a blind setting with unknown degradations, a good prior remains crucial.
We propose using normalizing flows to model the distribution of the target content and to use this as a prior in a maximum a posteriori (MAP) formulation.
To the best of our knowledge, this is the first work to explore normalizing flows as a prior in image enhancement problems.
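The MAP formulation described above can be sketched with a trivial one-layer affine flow standing in for a trained normalizing flow. Everything below, including the Gaussian prior parameters, the degradation model (additive Gaussian noise), and the function names, is an illustrative assumption rather than the paper's model:

```python
import numpy as np

# A single affine flow layer z = (x - mu) / sigma has the exact log-density
# of a Gaussian; a trained multi-layer flow would replace log_prior below.
mu, sigma = 0.5, 0.2  # assumed, stands in for learned flow parameters

def log_prior(x):
    """Exact log p(x) under the one-layer affine flow."""
    z = (x - mu) / sigma
    return np.sum(-0.5 * z ** 2 - np.log(sigma) - 0.5 * np.log(2 * np.pi))

def grad_log_prior(x):
    return -(x - mu) / sigma ** 2

def map_denoise(y, noise_var=0.01, steps=200, lr=0.001):
    """Gradient ascent on log p(y|x) + log p(x) for additive Gaussian noise."""
    x = y.copy()
    for _ in range(steps):
        grad = -(x - y) / noise_var + grad_log_prior(x)  # likelihood + prior
        x += lr * grad
    return x
```

With this Gaussian stand-in the MAP solution is available in closed form (a precision-weighted average of the observation and the prior mean), which makes the gradient-ascent loop easy to sanity-check; a real flow prior would make the objective non-convex and require the same iterative optimization.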
arXiv Detail & Related papers (2020-09-09T21:40:11Z) - The Power of Triply Complementary Priors for Image Compressive Sensing [89.14144796591685]
We propose a joint low-rank deep (LRD) image model, which contains a pair of triply complementary priors.
We then propose a novel hybrid plug-and-play framework based on the LRD model for image CS.
To make the optimization tractable, a simple yet effective algorithm is proposed to solve the proposed hybrid plug-and-play image CS problem.
arXiv Detail & Related papers (2020-05-16T08:17:44Z) - Exploiting Deep Generative Prior for Versatile Image Restoration and Manipulation [181.08127307338654]
This work presents an effective way to exploit the image prior captured by a generative adversarial network (GAN) trained on large-scale natural images.
The deep generative prior (DGP) provides compelling results to restore missing semantics, e.g., color, patch, resolution, of various degraded images.
arXiv Detail & Related papers (2020-03-30T17:45:07Z) - Blind Image Restoration without Prior Knowledge [0.22940141855172028]
We present the Self-Normalization Side-Chain (SCNC), a novel approach to blind universal restoration in which no prior knowledge of the degradation is needed.
The SCNC can be added to any existing CNN topology, and is trained along with the rest of the network in an end-to-end manner.
arXiv Detail & Related papers (2020-03-03T19:57:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.