Blind Image Restoration with Flow Based Priors
- URL: http://arxiv.org/abs/2009.04583v1
- Date: Wed, 9 Sep 2020 21:40:11 GMT
- Title: Blind Image Restoration with Flow Based Priors
- Authors: Leonhard Helminger, Michael Bernasconi, Abdelaziz Djelouah, Markus
Gross, Christopher Schroers
- Abstract summary: In a blind setting with unknown degradations, a good prior remains crucial.
We propose using normalizing flows to model the distribution of the target content and to use this as a prior in a maximum a posteriori (MAP) formulation.
To the best of our knowledge, this is the first work that explores normalizing flows as a prior in image enhancement problems.
- Score: 19.190289348734215
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image restoration has seen great progress in recent years thanks to the
advances in deep neural networks. Most of these existing techniques are trained
using full supervision with suitable image pairs to tackle a specific
degradation. However, in a blind setting with unknown degradations, this is not
possible and a good prior remains crucial. Recently, neural network based
approaches have been proposed to model such priors by leveraging either
denoising autoencoders or the implicit regularization captured by the neural
network structure itself. In contrast to this, we propose using normalizing
flows to model the distribution of the target content and to use this as a
prior in a maximum a posteriori (MAP) formulation. By expressing the MAP
optimization process in the latent space through the learned bijective mapping,
we are able to obtain solutions through gradient descent. To the best of our
knowledge, this is the first work that explores normalizing flows as a prior in
image enhancement problems. Furthermore, we present experimental results for a
number of different degradations on datasets of varying complexity and show
competitive results when compared with the deep image prior approach.
Related papers
- Chasing Better Deep Image Priors between Over- and Under-parameterization [63.8954152220162]
We study a novel "lottery image prior" (LIP) by exploiting the inherent sparsity of DNNs.
LIP works significantly outperform deep decoders under comparably compact model sizes.
We also extend LIP to compressive sensing image reconstruction, where a pre-trained GAN generator is used as the prior.
arXiv Detail & Related papers (2024-10-31T17:49:44Z) - Blind Image Deconvolution Using Variational Deep Image Prior [4.92175281564179]
This paper proposes a new variational deep image prior (VDIP) for blind image deconvolution.
VDIP exploits additive hand-crafted image priors on latent sharp images and approximates a distribution for each pixel to avoid suboptimal solutions.
Experiments show that the generated images have better quality than those of the original DIP on benchmark datasets.
arXiv Detail & Related papers (2022-02-01T01:33:58Z) - Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach by learning discriminative shrinkage functions to implicitly model these terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
arXiv Detail & Related papers (2021-11-27T12:12:57Z) - NerfingMVS: Guided Optimization of Neural Radiance Fields for Indoor
Multi-view Stereo [97.07453889070574]
We present a new multi-view depth estimation method that utilizes both conventional SfM reconstruction and learning-based priors.
We show that our proposed framework significantly outperforms state-of-the-art methods on indoor scenes.
arXiv Detail & Related papers (2021-09-02T17:54:31Z) - Image Restoration by Deep Projected GSURE [115.57142046076164]
Ill-posed inverse problems appear in many image processing applications, such as deblurring and super-resolution.
We propose a new image restoration framework based on minimizing a loss function that includes a "projected version" of the Generalized Stein Unbiased Risk Estimator (GSURE) and parameterization of the latent image by a CNN.
arXiv Detail & Related papers (2021-02-04T08:52:46Z) - Regularization via deep generative models: an analysis point of view [8.818465117061205]
This paper proposes a new way of regularizing an inverse problem in imaging (e.g., deblurring or inpainting) by means of a deep generative neural network.
In many cases, our technique achieves a clear improvement in performance and appears to be more robust.
arXiv Detail & Related papers (2021-01-21T15:04:57Z) - Using Deep Image Priors to Generate Counterfactual Explanations [38.62513524757573]
A deep image prior (DIP) can be used to obtain pre-images from latent representation encodings.
We propose a novel regularization strategy based on an auxiliary loss estimator jointly trained with the predictor.
arXiv Detail & Related papers (2020-10-22T20:40:44Z) - Quantifying Model Uncertainty in Inverse Problems via Bayesian Deep
Gradient Descent [4.029853654012035]
Recent advances in inverse problems leverage powerful data-driven models, e.g., deep neural networks.
We develop a scalable, data-driven, knowledge-aided computational framework to quantify the model uncertainty via Bayesian neural networks.
arXiv Detail & Related papers (2020-07-20T09:43:31Z) - A Flexible Framework for Designing Trainable Priors with Adaptive
Smoothing and Game Encoding [57.1077544780653]
We introduce a general framework for designing and training neural network layers whose forward passes can be interpreted as solving non-smooth convex optimization problems.
We focus on convex games, solved by local agents represented by the nodes of a graph and interacting through regularization functions.
This approach is appealing for solving imaging problems, as it allows the use of classical image priors within deep models that are trainable end to end.
arXiv Detail & Related papers (2020-06-26T08:34:54Z) - MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of stochastic gradient descent combined with the nonconvexity of the underlying optimization problem renders parameter learning susceptible to initialization.
We propose fusing neighboring layers of deeper networks that are initialized with random variables.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)