Generalizing to Out-of-Sample Degradations via Model Reprogramming
- URL: http://arxiv.org/abs/2403.05886v1
- Date: Sat, 9 Mar 2024 11:56:26 GMT
- Title: Generalizing to Out-of-Sample Degradations via Model Reprogramming
- Authors: Runhua Jiang, Yahong Han
- Abstract summary: The Out-of-Sample Restoration (OSR) task aims to develop restoration models capable of handling out-of-sample degradations.
We propose a model reprogramming framework that translates out-of-sample degradations via quantum mechanics and wave functions.
- Score: 29.56470202794348
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing image restoration models are typically designed for specific tasks
and struggle to generalize to out-of-sample degradations not encountered during
training. While zero-shot methods can address this limitation by fine-tuning
model parameters on testing samples, their effectiveness relies on predefined
natural priors and physical models of specific degradations. Nevertheless,
determining the out-of-sample degradations encountered in real-world scenarios
is generally impractical. As a result, it is more desirable to train restoration models with
inherent generalization ability. To this end, this work introduces the
Out-of-Sample Restoration (OSR) task, which aims to develop restoration models
capable of handling out-of-sample degradations. An intuitive solution involves
pre-translating out-of-sample degradations to known degradations of restoration
models. However, directly translating them in the image space could lead to
complex image translation problems. To address this, we propose a model
reprogramming framework that translates out-of-sample degradations via quantum
mechanics and wave functions. Specifically, input images are decoupled into wave
functions of amplitude and phase terms. The translation of out-of-sample
degradation is performed by adapting the phase term. Meanwhile, the image
content is maintained and enhanced in the amplitude term. By taking these two
terms as inputs, restoration models are able to handle out-of-sample
degradations without fine-tuning. Through extensive experiments across multiple
evaluation cases, we demonstrate the effectiveness and flexibility of our
proposed framework. Our code is available at
https://github.com/ddghjikle/Out-of-sample-restoration.
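The abstract describes decoupling input images into amplitude and phase terms, adapting the phase to translate out-of-sample degradations while the amplitude preserves image content. A minimal sketch of that decoupling, assuming the "wave function" decomposition is realized with a 2D Fourier transform (the paper's exact formulation may differ; the function names here are illustrative, not from the released code):

```python
import numpy as np

def decouple(image: np.ndarray):
    """Split an image into amplitude and phase terms via the 2D FFT."""
    spectrum = np.fft.fft2(image)
    amplitude = np.abs(spectrum)   # would carry image content in the framework
    phase = np.angle(spectrum)     # would carry degradation cues to be adapted
    return amplitude, phase

def recombine(amplitude: np.ndarray, phase: np.ndarray) -> np.ndarray:
    """Reconstruct an image from (possibly adapted) amplitude and phase."""
    spectrum = amplitude * np.exp(1j * phase)
    return np.real(np.fft.ifft2(spectrum))

# Round trip: recombining the unmodified terms recovers the input image,
# so any change the restoration model sees comes only from adapting the terms.
img = np.random.rand(32, 32)
amp, pha = decouple(img)
assert np.allclose(recombine(amp, pha), img)
```

In this reading, a restoration model would take the two terms (rather than the raw degraded image) as inputs, so out-of-sample degradations can be handled by adapting the phase term without fine-tuning the model itself.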
Related papers
- Diff-Restorer: Unleashing Visual Prompts for Diffusion-based Universal Image Restoration [19.87693298262894]
We propose Diff-Restorer, a universal image restoration method based on the diffusion model.
We utilize the pre-trained visual language model to extract visual prompts from degraded images.
We also design a Degradation-aware Decoder to perform structural correction and convert the latent code to the pixel domain.
arXiv Detail & Related papers (2024-07-04T05:01:10Z)
- Photo-Realistic Image Restoration in the Wild with Controlled Vision-Language Models
This work leverages a capable vision-language model and a synthetic degradation pipeline to learn image restoration in the wild (wild IR).
Our base diffusion model is the image restoration SDE (IR-SDE).
arXiv Detail & Related papers (2024-04-15T12:34:21Z)
- Deep Equilibrium Diffusion Restoration with Parallel Sampling
Diffusion model-based image restoration (IR) aims to use diffusion models to recover high-quality (HQ) images from degraded images, achieving promising performance.
Most existing methods need long serial sampling chains to restore HQ images step-by-step, resulting in expensive sampling time and high computation costs.
In this work, we aim to rethink the diffusion model-based IR models through a different perspective, i.e., a deep equilibrium (DEQ) fixed point system, called DeqIR.
arXiv Detail & Related papers (2023-11-20T08:27:56Z)
- Steered Diffusion: A Generalized Framework for Plug-and-Play Conditional Image Synthesis [62.07413805483241]
Steered Diffusion is a framework for zero-shot conditional image generation using a diffusion model trained for unconditional generation.
We present experiments using steered diffusion on several tasks including inpainting, colorization, text-guided semantic editing, and image super-resolution.
arXiv Detail & Related papers (2023-09-30T02:03:22Z)
- DR2: Diffusion-based Robust Degradation Remover for Blind Face Restoration [66.01846902242355]
Blind face restoration usually synthesizes degraded low-quality data with a pre-defined degradation model for training.
It is expensive and infeasible to include every type of degradation to cover real-world cases in the training data.
We propose Robust Degradation Remover (DR2) to first transform the degraded image to a coarse but degradation-invariant prediction, then employ an enhancement module to restore the coarse prediction to a high-quality image.
arXiv Detail & Related papers (2023-03-13T06:05:18Z)
- Invertible Rescaling Network and Its Extensions [118.72015270085535]
In this work, we propose a novel invertible framework to model the bidirectional degradation and restoration from a new perspective.
We develop invertible models to generate valid degraded images and transform the distribution of lost contents.
Then restoration is made tractable by applying the inverse transformation on the generated degraded image together with a randomly-drawn latent variable.
arXiv Detail & Related papers (2022-10-09T06:58:58Z)
- Perceptual Image Restoration with High-Quality Priori and Degradation Learning [28.93489249639681]
We show that our model performs well in measuring the similarity between restored and degraded images.
Our simultaneous restoration and enhancement framework generalizes well to real-world complicated degradation types.
arXiv Detail & Related papers (2021-03-04T13:19:50Z)
- Anytime Sampling for Autoregressive Models via Ordered Autoencoding [88.01906682843618]
Autoregressive models are widely used for tasks such as image and audio generation.
The sampling process of these models does not allow interruptions and cannot adapt to real-time computational resources.
We propose a new family of autoregressive models that enables anytime sampling.
arXiv Detail & Related papers (2021-02-23T05:13:16Z)
- Invertible Image Rescaling [118.2653765756915]
We develop an Invertible Rescaling Net (IRN) to produce visually-pleasing low-resolution images.
We capture the distribution of the lost information using a latent variable following a specified distribution in the downscaling process.
arXiv Detail & Related papers (2020-05-12T09:55:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.