UGPNet: Universal Generative Prior for Image Restoration
- URL: http://arxiv.org/abs/2401.00370v1
- Date: Sun, 31 Dec 2023 02:16:29 GMT
- Title: UGPNet: Universal Generative Prior for Image Restoration
- Authors: Hwayoon Lee, Kyoungkook Kang, Hyeongmin Lee, Seung-Hwan Baek, Sunghyun Cho
- Abstract summary: We propose UGPNet, a universal image restoration framework.
Our experiments on deblurring, denoising, and super-resolution demonstrate that UGPNet can successfully exploit both regression and generative methods for high-fidelity image restoration.
- Score: 26.872219158636604
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent image restoration methods can be broadly categorized into two classes:
(1) regression methods that recover the rough structure of the original image
without synthesizing high-frequency details and (2) generative methods that
synthesize perceptually-realistic high-frequency details even though the
resulting image deviates from the original structure of the input. While both
directions have been extensively studied in isolation, merging their benefits
with a single framework has been rarely studied. In this paper, we propose
UGPNet, a universal image restoration framework that can effectively achieve
the benefits of both approaches by simply adopting a pair of an existing
regression model and a generative model. UGPNet first restores the image
structure of a degraded input using a regression model and synthesizes a
perceptually-realistic image with a generative model on top of the regressed
output. UGPNet then combines the regressed output and the synthesized output,
resulting in a final result that faithfully reconstructs the structure of the
original image in addition to perceptually-realistic textures. Our extensive
experiments on deblurring, denoising, and super-resolution demonstrate that
UGPNet can successfully exploit both regression and generative methods for
high-fidelity image restoration.
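The abstract describes a three-stage pipeline: restore structure with a regression model, synthesize perceptually-realistic detail with a generative model on top of the regressed output, then combine the two. A minimal 1-D sketch of that flow, using trivial NumPy stand-ins for the two models (all function names and the fusion weight are illustrative, not from the paper):

```python
import numpy as np

def regression_restore(degraded):
    # Stand-in for an existing regression model: a box blur that
    # recovers rough structure without high-frequency detail.
    kernel = np.ones(3) / 3.0
    return np.convolve(degraded, kernel, mode="same")

def generative_synthesize(regressed, rng):
    # Stand-in for a generative model that synthesizes plausible
    # high-frequency detail on top of the regressed output.
    detail = rng.normal(scale=0.05, size=regressed.shape)
    return regressed + detail

def ugpnet_like_restore(degraded, alpha=0.7, seed=0):
    """Restore structure, synthesize detail, then fuse the outputs."""
    rng = np.random.default_rng(seed)
    regressed = regression_restore(degraded)
    synthesized = generative_synthesize(regressed, rng)
    # Fuse: keep the faithful structure of the regressed output
    # while borrowing texture from the synthesized one.
    return alpha * regressed + (1.0 - alpha) * synthesized

degraded = np.linspace(0.0, 1.0, 16) + 0.1
restored = ugpnet_like_restore(degraded)
```

The fusion here is a simple convex combination; the paper's actual combination module is learned.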
Related papers
- Can We Generate Images with CoT? Let's Verify and Reinforce Image Generation Step by Step [77.86514804787622]
Chain-of-Thought (CoT) reasoning has been extensively explored in large models to tackle complex understanding tasks.
We provide the first comprehensive investigation of the potential of CoT reasoning to enhance autoregressive image generation.
We propose the Potential Assessment Reward Model (PARM) and PARM++, specialized for autoregressive image generation.
arXiv Detail & Related papers (2025-01-23T18:59:43Z)
- Latent Diffusion Prior Enhanced Deep Unfolding for Snapshot Spectral Compressive Imaging [17.511583657111792]
Snapshot spectral imaging reconstruction aims to reconstruct three-dimensional spatial-spectral images from a single-shot two-dimensional compressed measurement.
We introduce a generative model, namely the latent diffusion model (LDM), to generate a degradation-free prior for the deep unfolding method.
arXiv Detail & Related papers (2023-11-24T04:55:20Z)
- Nest-DGIL: Nesterov-optimized Deep Geometric Incremental Learning for CS Image Reconstruction [9.54126979075279]
We propose a deep geometric incremental learning framework based on the second Nesterov proximal gradient optimization.
Our reconstruction framework is decomposed into four modules including general linear reconstruction, cascade geometric incremental restoration, Nesterov acceleration, and post-processing.
arXiv Detail & Related papers (2023-08-06T15:47:03Z)
- DR2: Diffusion-based Robust Degradation Remover for Blind Face Restoration [66.01846902242355]
Blind face restoration usually synthesizes degraded low-quality data with a pre-defined degradation model for training.
It is expensive and infeasible to include every type of degradation to cover real-world cases in the training data.
We propose Robust Degradation Remover (DR2) to first transform the degraded image to a coarse but degradation-invariant prediction, then employ an enhancement module to restore the coarse prediction to a high-quality image.
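The DR2 summary describes two stages: map the degraded input to a coarse, degradation-invariant prediction, then enhance it into a high-quality image. A toy 1-D sketch, using perturb-and-smooth and unsharp masking as stand-ins for the diffusion and enhancement networks (all names here are hypothetical):

```python
import numpy as np

def degradation_remover(degraded, noise_level=0.3, seed=0):
    # Stand-in for the DR2 stage: perturb then smooth, so that
    # different degradations collapse to a similar coarse image.
    rng = np.random.default_rng(seed)
    noisy = degraded + rng.normal(scale=noise_level, size=degraded.shape)
    kernel = np.ones(5) / 5.0
    return np.convolve(noisy, kernel, mode="same")  # coarse prediction

def enhancement_module(coarse):
    # Stand-in for the enhancement network: unsharp masking that
    # restores high-frequency detail to the coarse prediction.
    kernel = np.ones(3) / 3.0
    blurred = np.convolve(coarse, kernel, mode="same")
    return coarse + 1.5 * (coarse - blurred)

signal = np.sin(np.linspace(0.0, 2.0 * np.pi, 32))
noisy_a = signal + 0.2 * np.random.default_rng(1).normal(size=32)
noisy_b = signal + 0.5 * np.random.default_rng(2).normal(size=32)
coarse_a = degradation_remover(noisy_a)  # coarse predictions for two
coarse_b = degradation_remover(noisy_b)  # differently degraded inputs
restored = enhancement_module(coarse_a)
```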
arXiv Detail & Related papers (2023-03-13T06:05:18Z)
- Invertible Rescaling Network and Its Extensions [118.72015270085535]
In this work, we propose a novel invertible framework to model the bidirectional degradation and restoration from a new perspective.
We develop invertible models to generate valid degraded images and transform the distribution of lost contents.
Then restoration is made tractable by applying the inverse transformation on the generated degraded image together with a randomly-drawn latent variable.
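The bidirectional idea above can be illustrated with a Haar-like invertible transform as a toy stand-in for the learned invertible network: the forward pass yields a valid downscaled signal plus the lost contents, and restoration applies the exact inverse with a randomly drawn latent in place of the true lost contents:

```python
import numpy as np

def forward_rescale(x):
    # Haar-like invertible split: a coarse "degraded" half plus the
    # lost high-frequency contents (modeled as a latent variable).
    a, b = x[0::2], x[1::2]
    low = (a + b) / 2.0   # valid downscaled signal
    high = (a - b) / 2.0  # lost contents
    return low, high

def inverse_rescale(low, z):
    # Exact inverse of the transform above; at restoration time the
    # true `high` is unavailable, so a latent z is drawn instead.
    x = np.empty(low.size * 2)
    x[0::2] = low + z
    x[1::2] = low - z
    return x

x = np.arange(8, dtype=float)
low, high = forward_rescale(x)
exact = inverse_rescale(low, high)  # bijective: recovers x exactly
z = np.random.default_rng(0).normal(scale=high.std() + 1e-8, size=high.shape)
restored = inverse_rescale(low, z)  # restoration with a drawn latent
```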
arXiv Detail & Related papers (2022-10-09T06:58:58Z)
- Deep Amended Gradient Descent for Efficient Spectral Reconstruction from Single RGB Images [42.26124628784883]
We propose a compact, efficient, and end-to-end learning-based framework, namely AGD-Net.
We first formulate the problem explicitly based on the classic gradient descent algorithm.
AGD-Net can improve the reconstruction quality by more than 1.0 dB on average.
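The classic gradient descent formulation that AGD-Net is said to build on can be sketched for a least-squares reconstruction; the learned "amendment" itself is omitted, so this is only an illustrative baseline, not the paper's network:

```python
import numpy as np

def gradient_descent_reconstruct(A, y, steps=500):
    # Iteratively minimize ||A x - y||^2 with plain gradient descent,
    # the classic scheme an unrolled network like AGD-Net starts from.
    lr = 1.0 / np.linalg.norm(A, 2) ** 2  # step size for convergence
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - y)  # gradient of the data term
        x = x - lr * grad         # descent step
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))   # toy measurement operator
x_true = rng.normal(size=5)
y = A @ x_true                 # noiseless measurements
x_hat = gradient_descent_reconstruct(A, y)
```

An unrolled network replaces the fixed step and gradient with learned modules at each iteration.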
arXiv Detail & Related papers (2021-08-12T05:54:09Z)
- Deep Neural Networks are Surprisingly Reversible: A Baseline for Zero-Shot Inversion [90.65667807498086]
This paper presents a zero-shot direct model inversion framework that recovers the input to the trained model given only the internal representation.
We empirically show that modern classification models on ImageNet can, surprisingly, be inverted, allowing an approximate recovery of the original 224x224px images from a representation after more than 20 layers.
arXiv Detail & Related papers (2021-07-13T18:01:43Z)
- Inverting Adversarially Robust Networks for Image Synthesis [37.927552662984034]
We propose the use of robust representations as a perceptual primitive for feature inversion models.
We empirically show that adopting robust representations as an image prior significantly improves the reconstruction accuracy of CNN-based feature inversion models.
Following these findings, we propose an encoding-decoding network based on robust representations and show its advantages for applications such as anomaly detection, style transfer and image denoising.
arXiv Detail & Related papers (2021-06-13T05:51:00Z)
- Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method that aims to integrate the advantages of both approaches.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method achieves superior performance over current state-of-the-arts.
arXiv Detail & Related papers (2020-08-25T03:30:53Z)
- The Power of Triply Complementary Priors for Image Compressive Sensing [89.14144796591685]
We propose a joint low-rank deep (LRD) image model, which contains a pair of triply complementary priors.
We then propose a novel hybrid plug-and-play framework based on the LRD model for image CS.
To make the optimization tractable, a simple yet effective algorithm is proposed to solve the image CS problem under this hybrid plug-and-play framework.
arXiv Detail & Related papers (2020-05-16T08:17:44Z)
- High-Fidelity Synthesis with Disentangled Representation [60.19657080953252]
We propose an Information-Distillation Generative Adversarial Network (ID-GAN) for disentanglement learning and high-fidelity synthesis.
Our method learns disentangled representation using VAE-based models, and distills the learned representation with an additional nuisance variable to the separate GAN-based generator for high-fidelity synthesis.
Despite the simplicity, we show that the proposed method is highly effective, achieving comparable image generation quality to the state-of-the-art methods using the disentangled representation.
arXiv Detail & Related papers (2020-01-13T14:39:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.