Reducing the Representation Error of GAN Image Priors Using the Deep Decoder
- URL: http://arxiv.org/abs/2001.08747v2
- Date: Sat, 25 Oct 2025 23:22:53 GMT
- Title: Reducing the Representation Error of GAN Image Priors Using the Deep Decoder
- Authors: Mara Daniels, Paul Hand, Reinhard Heckel
- Abstract summary: We show a method for reducing the representation error of GAN priors by modeling images as the linear combination of a GAN prior and a Deep Decoder. For compressive sensing and image superresolution, our hybrid model exhibits consistently higher PSNRs than both the GAN priors and Deep Decoder separately.
- Score: 16.580772758959245
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative models, such as GANs, learn an explicit low-dimensional representation of a particular class of images, and so they may be used as natural image priors for solving inverse problems such as image restoration and compressive sensing. GAN priors have demonstrated impressive performance on these tasks, but they can exhibit substantial representation error for both in-distribution and out-of-distribution images, because of the mismatch between the learned, approximate image distribution and the data generating distribution. In this paper, we demonstrate a method for reducing the representation error of GAN priors by modeling images as the linear combination of a GAN prior with a Deep Decoder. The deep decoder is an underparameterized and most importantly unlearned natural signal model similar to the Deep Image Prior. No knowledge of the specific inverse problem is needed in the training of the GAN underlying our method. For compressive sensing and image superresolution, our hybrid model exhibits consistently higher PSNRs than both the GAN priors and Deep Decoder separately, both on in-distribution and out-of-distribution images. This model provides a method for extensibly and cheaply leveraging both the benefits of learned and unlearned image recovery priors in inverse problems.
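The abstract's hybrid model has a simple optimization structure: model the image as x = G(z) + D(θ), the sum of a pretrained GAN generator output and a Deep Decoder output, and fit the latent z and decoder parameters θ jointly by gradient descent on the measurement loss. The toy sketch below illustrates only that structure under strong simplifying assumptions: both G and D are replaced by fixed random linear maps (in the paper they are deep networks), and the inverse problem is compressive sensing with a random Gaussian measurement matrix A.

```python
import numpy as np

# Toy sketch of the hybrid GAN-prior + Deep-Decoder inversion.
# Assumptions (not from the paper): G and D are fixed random LINEAR maps
# standing in for the pretrained GAN generator and the Deep Decoder.

rng = np.random.default_rng(0)
n, m, k_gan, k_dd = 64, 32, 8, 12        # signal dim, measurements, latent dims

G = rng.standard_normal((n, k_gan)) / np.sqrt(k_gan)   # stand-in "GAN prior"
D = rng.standard_normal((n, k_dd)) / np.sqrt(k_dd)     # stand-in "Deep Decoder"
A = rng.standard_normal((m, n)) / np.sqrt(m)           # compressive measurements

# Ground truth: mostly in the GAN range, plus a component the GAN cannot
# represent -- the representation error the Deep Decoder is meant to absorb.
x_true = G @ rng.standard_normal(k_gan) + 0.3 * rng.standard_normal(n)
y = A @ x_true

z = np.zeros(k_gan)
theta = np.zeros(k_dd)
# Safe step size from the spectral norm of the combined forward operator.
lr = 1.0 / np.linalg.norm(A @ np.hstack([G, D]), 2) ** 2
for _ in range(1000):
    x_hat = G @ z + D @ theta
    r = A @ x_hat - y                    # measurement residual
    z -= lr * G.T @ (A.T @ r)            # gradient step on the GAN latent
    theta -= lr * D.T @ (A.T @ r)        # gradient step on decoder parameters

final_loss = np.linalg.norm(A @ (G @ z + D @ theta) - y)
```

Because both components are optimized against the same residual, the decoder term only picks up what the GAN latent cannot explain, which is the "linear combination" idea in the abstract.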
Related papers
- Chasing Better Deep Image Priors between Over- and Under-parameterization [63.8954152220162]
We study a novel "lottery image prior" (LIP) by exploiting the inherent sparsity of DNNs.
LIP works significantly outperform deep decoders under comparably compact model sizes.
We also extend LIP to compressive sensing image reconstruction, where a pre-trained GAN generator is used as the prior.
arXiv Detail & Related papers (2024-10-31T17:49:44Z) - Fast constrained sampling in pre-trained diffusion models [77.21486516041391]
Diffusion models have dominated the field of large, generative image models.
We propose an algorithm for fast-constrained sampling in large pre-trained diffusion models.
arXiv Detail & Related papers (2024-10-24T14:52:38Z) - JoIN: Joint GANs Inversion for Intrinsic Image Decomposition [16.02463667910604]
We propose to solve ill-posed inverse imaging problems using a bank of Generative Adversarial Networks (GANs).
Our method builds on the demonstrated success of GANs to capture complex image distributions.
arXiv Detail & Related papers (2023-05-18T22:09:32Z) - Latent Multi-Relation Reasoning for GAN-Prior based Image Super-Resolution [61.65012981435095]
LAREN is a graph-based disentanglement that constructs a superior disentangled latent space via hierarchical multi-relation reasoning.
We show that LAREN achieves superior large-factor image SR and outperforms the state-of-the-art consistently across multiple benchmarks.
arXiv Detail & Related papers (2022-08-04T19:45:21Z) - Diverse super-resolution with pretrained deep hierarchical VAEs [6.257821009472099]
We investigate the problem of producing diverse solutions to an image super-resolution problem.
We train a lightweight encoder to encode low-resolution images in the latent space of a pretrained HVAE.
At inference, we combine the low-resolution encoder and the pretrained generative model to super-resolve an image.
arXiv Detail & Related papers (2022-05-20T17:57:41Z) - Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach by learning discriminative shrinkage functions to implicitly model these terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
arXiv Detail & Related papers (2021-11-27T12:12:57Z) - StyleGAN-induced data-driven regularization for inverse problems [2.5138572116292686]
Recent advances in generative adversarial networks (GANs) have opened up the possibility of generating high-resolution images that were impossible to produce previously.
We develop a framework that utilizes the full potential of a pre-trained StyleGAN2 generator for constructing the prior distribution on the underlying image.
Considering the inverse problems of image inpainting and super-resolution, we demonstrate that the proposed approach is competitive with, and sometimes superior to, state-of-the-art GAN-based image reconstruction methods.
arXiv Detail & Related papers (2021-10-07T22:25:30Z) - Low-Light Image Enhancement with Normalizing Flow [92.52290821418778]
In this paper, we investigate to model this one-to-many relationship via a proposed normalizing flow model.
An invertible network takes the low-light images/features as the condition and learns to map the distribution of normally exposed images into a Gaussian distribution.
The experimental results on the existing benchmark datasets show our method achieves better quantitative and qualitative results, obtaining better-exposed illumination, less noise and artifact, and richer colors.
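The "map to a Gaussian" idea above can be made concrete with a single conditional affine-coupling layer, sketched below. This is a hypothetical toy, not the paper's architecture: the condition c stands in for the low-light feature, and the scale/shift subnetworks are tiny fixed linear maps rather than learned networks. Because the scale and shift applied to one half of the variables depend only on the other half and on c, the transform is exactly invertible.

```python
import numpy as np

# Toy conditional affine-coupling layer (one step of a normalizing flow).
# Hypothetical stand-in for the paper's invertible network.

rng = np.random.default_rng(1)
d = 8                                               # even dimension, split in halves
Ws = rng.standard_normal((d // 2, d // 2 + 2)) * 0.1   # "scale net" weights
Wt = rng.standard_normal((d // 2, d // 2 + 2)) * 0.1   # "shift net" weights

def forward(x, c):
    """Map x -> z given condition c; returns z and the log-determinant."""
    x1, x2 = x[: d // 2], x[d // 2 :]
    h = np.concatenate([x1, c])                     # first half + condition
    log_s, t = Ws @ h, Wt @ h
    z2 = x2 * np.exp(log_s) + t                     # affine map of second half
    return np.concatenate([x1, z2]), log_s.sum()

def inverse(z, c):
    """Exact inverse of forward, given the same condition c."""
    z1, z2 = z[: d // 2], z[d // 2 :]
    h = np.concatenate([z1, c])
    log_s, t = Ws @ h, Wt @ h
    x2 = (z2 - t) * np.exp(-log_s)
    return np.concatenate([z1, x2])

x = rng.standard_normal(d)
c = rng.standard_normal(2)        # conditioning feature (e.g. a low-light code)
z, logdet = forward(x, c)
x_rec = inverse(z, c)
```

Stacking such layers (with the halves permuted between layers) yields an invertible network whose exact likelihood can be evaluated from the accumulated log-determinants.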
arXiv Detail & Related papers (2021-09-13T12:45:08Z) - Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method, aiming to integrate the advantages of both.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method achieves superior performance over current state-of-the-art methods.
arXiv Detail & Related papers (2020-08-25T03:30:53Z) - The Power of Triply Complementary Priors for Image Compressive Sensing [89.14144796591685]
We propose a joint low-rank and deep (LRD) image model, which contains a pair of triply complementary priors.
We then propose a novel hybrid plug-and-play framework based on the LRD model for image CS.
To make the optimization tractable, a simple yet effective algorithm is proposed to solve the proposed H-based image CS problem.
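The plug-and-play pattern this paper builds on can be illustrated generically. The sketch below is not the paper's algorithm or its LRD priors; it shows only the standard PnP iteration shape, alternating a gradient step on the data-fidelity term ||Ax - y||² with a denoising step, where any off-the-shelf denoiser serves as the implicit prior. Here a simple moving-average smoother stands in for the denoiser.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 100, 60
A = rng.standard_normal((m, n)) / np.sqrt(m)     # compressive measurements

# Piecewise-constant ground truth, so a smoothing "denoiser" is a sensible prior.
x_true = np.concatenate([np.ones(50), -np.ones(50)])
y = A @ x_true

def denoise(x, width=5):
    """Stand-in prior: moving-average smoothing (replaceable by any denoiser)."""
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="same")

x = np.zeros(n)
eta = 1.0 / np.linalg.norm(A, 2) ** 2            # step size from the spectral norm
for _ in range(300):
    x = x - eta * A.T @ (A @ x - y)              # gradient step on ||Ax - y||^2
    x = denoise(x)                               # plug-and-play prior step

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Swapping in a stronger denoiser changes only the `denoise` call, which is the extensibility that plug-and-play frameworks trade on.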
arXiv Detail & Related papers (2020-05-16T08:17:44Z) - Invertible generative models for inverse problems: mitigating representation error and dataset bias [6.07645721775351]
Trained generative models have shown remarkable performance as priors for inverse problems in imaging. We demonstrate that invertible neural networks, which have zero representation error by design, can be effective natural signal priors for inverse problems.
arXiv Detail & Related papers (2019-05-28T08:27:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.