BIGPrior: Towards Decoupling Learned Prior Hallucination and Data
Fidelity in Image Restoration
- URL: http://arxiv.org/abs/2011.01406v3
- Date: Sat, 8 Jan 2022 11:47:43 GMT
- Title: BIGPrior: Towards Decoupling Learned Prior Hallucination and Data
Fidelity in Image Restoration
- Authors: Majed El Helou and Sabine Süsstrunk
- Abstract summary: We present an approach with decoupled network-prior based hallucination and data fidelity terms.
We use network inversion to extract image prior information from a generative network.
Our method, though partly reliant on the quality of the generative network inversion, is competitive with state-of-the-art supervised and task-specific restoration methods.
- Score: 14.34815548338413
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Classic image-restoration algorithms use a variety of priors, either
implicitly or explicitly. Their priors are hand-designed and their
corresponding weights are heuristically assigned. Hence, deep learning methods
often produce superior image restoration quality. Deep networks are, however,
capable of inducing strong and hardly predictable hallucinations. Networks
implicitly learn to be jointly faithful to the observed data while learning an
image prior, so the original data and the hallucinated content can no longer be
separated downstream. This limits their widespread adoption in image
restoration. Furthermore, it is often the hallucinated part that falls victim
to degradation-model overfitting.
We present an approach with decoupled network-prior based hallucination and
data fidelity terms. We refer to our framework as the Bayesian Integration of a
Generative Prior (BIGPrior). Our method is rooted in a Bayesian framework and
tightly connected to classic restoration methods. In fact, it can be viewed as
a generalization of a large family of classic restoration algorithms. We use
network inversion to extract image prior information from a generative network.
We show that, on image colorization, inpainting and denoising, our framework
consistently improves the inversion results. Our method, though partly reliant
on the quality of the generative network inversion, is competitive with
state-of-the-art supervised and task-specific restoration methods. It also
provides an additional metric that quantifies, per pixel, the degree of prior
reliance relative to data fidelity.
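To make the decoupling concrete, below is a minimal sketch of the two ingredients the abstract names: a generative-network inversion that produces a prior-driven estimate, and a per-pixel fusion of that estimate with the observed data. The names (generator, degrade, phi) and the exact fusion form are illustrative assumptions made for this summary, not the authors' released code; phi simply plays the role of the per-pixel prior-reliance map mentioned above.

# Hedged sketch: network inversion followed by per-pixel prior/data fusion.
# `generator`, `degrade`, and `phi` are hypothetical placeholders.
import torch

def invert_generator(generator, degrade, y, z_dim=128, steps=300, lr=0.05):
    # Optimize a latent code so the degraded generator output matches y.
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((degrade(generator(z)) - y) ** 2)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return generator(z)  # prior-driven hallucination x_prior

def fuse(x_prior, y, phi):
    # Convex per-pixel combination: phi -> 1 trusts the generative prior,
    # phi -> 0 trusts the observed data; phi itself can be read out as the
    # per-pixel prior-reliance map described in the abstract.
    return phi * x_prior + (1.0 - phi) * y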
Related papers
- Chasing Better Deep Image Priors between Over- and Under-parameterization [63.8954152220162]
We study a novel "lottery image prior" (LIP) by exploiting the inherent sparsity of DNNs.
LIP works significantly outperform deep decoders under comparably compact model sizes.
We also extend LIP to compressive sensing image reconstruction, where a pre-trained GAN generator is used as the prior.
arXiv Detail & Related papers (2024-10-31T17:49:44Z)
- ROMNet: Renovate the Old Memories [25.41639794384076]
We present a novel reference-based end-to-end learning framework that can jointly repair and colorize degraded legacy pictures.
We also create, to our knowledge, the first public and real-world old photo dataset with paired ground truth for evaluating old photo restoration models.
arXiv Detail & Related papers (2022-02-05T17:48:15Z)
- Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach by learning discriminative shrinkage functions to implicitly model the data and regularization terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
arXiv Detail & Related papers (2021-11-27T12:12:57Z)
- Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
The deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z)
- Blind Image Restoration with Flow Based Priors [19.190289348734215]
In a blind setting with unknown degradations, a good prior remains crucial.
We propose using normalizing flows to model the distribution of the target content and to use this as a prior in a maximum a posteriori (MAP) formulation (a sketch of this objective appears after this list).
To the best of our knowledge, this is the first work that explores normalizing flows as a prior in image enhancement problems.
arXiv Detail & Related papers (2020-09-09T21:40:11Z)
- Plug-and-Play Image Restoration with Deep Denoiser Prior [186.84724418955054]
We show that a denoiser can implicitly serve as the image prior for model-based methods to solve many inverse problems.
We set up a benchmark deep denoiser prior by training a highly flexible and effective CNN denoiser.
We then plug the deep denoiser prior as a modular part into a half-quadratic-splitting-based iterative algorithm to solve various image restoration problems (a sketch of this loop appears after this list).
arXiv Detail & Related papers (2020-08-31T17:18:58Z)
- Pyramid Attention Networks for Image Restoration [124.34970277136061]
Self-similarity is an image prior widely used in image restoration algorithms.
Recent advanced deep convolutional neural network based methods for image restoration do not take full advantage of self-similarities.
We present a novel Pyramid Attention module for image restoration, which captures long-range feature correspondences from a multi-scale feature pyramid.
arXiv Detail & Related papers (2020-04-28T21:12:36Z)
- Exploiting Deep Generative Prior for Versatile Image Restoration and Manipulation [181.08127307338654]
This work presents an effective way to exploit the image prior captured by a generative adversarial network (GAN) trained on large-scale natural images.
The deep generative prior (DGP) provides compelling results in restoring missing semantics (e.g., color, patches, resolution) of various degraded images.
arXiv Detail & Related papers (2020-03-30T17:45:07Z)
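The flow-based prior entry above casts restoration as maximum a posteriori estimation. As a point of reference (a standard formulation, not quoted from that paper), with a degradation model y = A x + n and a learned density p_theta(x) given by the normalizing flow, the MAP estimate reads:

\hat{x} = \arg\max_{x}\; \log p(y \mid x) + \log p_\theta(x)
        = \arg\min_{x}\; \tfrac{1}{2\sigma^2}\,\lVert y - A x \rVert_2^2 \;-\; \log p_\theta(x)

where the second form assumes Gaussian noise of variance \sigma^2; the flow makes \log p_\theta(x) exactly computable, which is what lets it serve as the prior in this objective.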
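The plug-and-play entry above alternates a data-fidelity update with a call to a learned denoiser inside a half quadratic splitting (HQS) loop. The sketch below shows that alternation under simplifying assumptions: `denoiser` is a hypothetical placeholder for any Gaussian denoiser, and the closed-form data step assumes the identity degradation (plain denoising). It illustrates the splitting pattern, not the paper's released algorithm.

# Hedged sketch of plug-and-play HQS with a denoiser acting as the prior.
import numpy as np

def plug_and_play_hqs(y, denoiser, sigma=0.1, lam=0.23, iters=8):
    # y: noisy observation; denoiser(x, noise_level) -> denoised image.
    z = y.copy()
    for mu in np.logspace(0, 2, iters):  # increasing penalty schedule
        # Data-fidelity step (closed form because A is the identity here).
        x = (y / sigma**2 + mu * z) / (1.0 / sigma**2 + mu)
        # Prior step: the denoiser acts as an implicit image prior.
        z = denoiser(x, noise_level=float(np.sqrt(lam / mu)))
    return z

For more general degradations, the data step becomes a least-squares solve or a few gradient steps rather than the closed form shown here.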