Regularization via deep generative models: an analysis point of view
- URL: http://arxiv.org/abs/2101.08661v1
- Date: Thu, 21 Jan 2021 15:04:57 GMT
- Title: Regularization via deep generative models: an analysis point of view
- Authors: Thomas Oberlin and Mathieu Verm
- Abstract summary: This paper proposes a new way of regularizing an inverse problem in imaging (e.g., deblurring or inpainting) by means of a deep generative neural network.
In many cases, our technique achieves a clear performance improvement and appears more robust.
- Score: 8.818465117061205
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper proposes a new way of regularizing an inverse problem in imaging
(e.g., deblurring or inpainting) by means of a deep generative neural network.
Compared to end-to-end models, such approaches seem particularly interesting
since the same network can be used for many different problems and experimental
conditions, as long as the generative model is suited to the data. Previous
works proposed to use a synthesis framework, where the estimation is performed
on the latent vector, the solution being obtained afterwards via the decoder.
Instead, we propose an analysis formulation where we directly optimize the
image itself and penalize the latent vector. We illustrate the benefits of such
a formulation with experiments on inpainting, deblurring and super-resolution.
In many cases our technique achieves a clear performance improvement and
appears more robust, in particular with respect to initialization.
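To make the two formulations concrete, here is a hedged sketch in which A denotes the forward operator (blurring, masking or downsampling), y the observation, G the pre-trained generator, and lambda, mu regularization weights; the exact penalties used in the paper may differ.

```latex
% Hedged sketch: notation and penalties are illustrative, not the paper's exact objective.
% Synthesis formulation: estimate the latent code, obtain the image through the decoder.
\hat{z} = \arg\min_{z} \; \|A\,G(z) - y\|_2^2 + \lambda \|z\|_2^2,
\qquad \hat{x} = G(\hat{z})

% Analysis formulation: optimize the image itself, penalize its latent representation.
(\hat{x}, \hat{z}) = \arg\min_{x,\,z} \; \|A\,x - y\|_2^2
  + \lambda \|x - G(z)\|_2^2 + \mu \|z\|_2^2
```

In the synthesis route the image only exists through the decoder, whereas the analysis route keeps the image as the optimization variable, which is consistent with the robustness to initialization reported in the abstract.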
Related papers
- Fast constrained sampling in pre-trained diffusion models [77.21486516041391]
Diffusion models have dominated the field of large generative image models.
We propose an algorithm for fast constrained sampling in large pre-trained diffusion models.
arXiv Detail & Related papers (2024-10-24T14:52:38Z)
- Convergence Properties of Score-Based Models for Linear Inverse Problems Using Graduated Optimisation [44.99833362998488]
We show that score-based generative models (SGMs) can be used to solve inverse problems.
We show that we are able to recover high-quality images, independent of the initial value.
The source is publicly available on GitHub.
arXiv Detail & Related papers (2024-04-29T13:47:59Z)
- Solving Linear Inverse Problems Provably via Posterior Sampling with Latent Diffusion Models [98.95988351420334]
We present the first framework to solve linear inverse problems leveraging pre-trained latent diffusion models.
We theoretically analyze our algorithm showing provable sample recovery in a linear model setting.
arXiv Detail & Related papers (2023-07-02T17:21:30Z)
- Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
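As a rough illustration of the expressiveness gap being addressed (the precise VLAE construction may differ), the amortized VAE posterior is a diagonal Gaussian, whereas a Laplace-style approximation yields a full-covariance Gaussian around a posterior mode:

```latex
% Illustrative sketch; the paper's exact posterior construction may differ.
% Standard amortized VAE posterior: fully-factorized (diagonal) Gaussian.
q_\phi(z \mid x) = \mathcal{N}\big(\mu_\phi(x),\ \mathrm{diag}(\sigma_\phi^2(x))\big)

% Laplace-style approximation: full-covariance Gaussian centered at a mode \hat{z}
% of \log p_\theta(x, z), with curvature H = -\nabla_z^2 \log p_\theta(x, z) |_{z = \hat{z}}.
q(z \mid x) = \mathcal{N}\big(\hat{z},\ H^{-1}\big)
```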
arXiv Detail & Related papers (2022-11-30T18:59:27Z)
- Generative Adversarial Network (GAN) based Image-Deblurring [0.0]
We show the effectiveness of spectral regularization methods and point out the link between the spectral filtering result and the solution of the regularized optimization objective.
For ill-posed problems like image deblurring, the optimization objective contains a regularization term that encodes our prior knowledge into the solution.
Based on the idea of Wasserstein generative adversarial models, we can train a CNN to learn the regularization functional.
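As a hedged sketch of this idea (the critic architecture and exact losses in the cited paper may differ), a CNN R_theta plays the role of the regularization functional inside a classical variational problem and is trained with a Wasserstein-style critic objective:

```latex
% Illustrative sketch; the cited paper's exact formulation may differ.
% Reconstruction: variational problem with a learned regularization functional R_theta.
\hat{x} = \arg\min_{x} \; \|A\,x - y\|_2^2 + \lambda\, R_\theta(x)

% Training: Wasserstein-style critic loss, rewarding small values on clean images
% and large values on degraded ones, with a Lipschitz constraint on R_theta.
\min_{\theta} \;
  \mathbb{E}_{x \sim p_{\text{clean}}}\big[R_\theta(x)\big]
  - \mathbb{E}_{x \sim p_{\text{degraded}}}\big[R_\theta(x)\big],
\quad \text{s.t. } R_\theta \text{ is 1-Lipschitz}
```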
arXiv Detail & Related papers (2022-08-24T15:46:09Z)
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
- Conditional Variational Autoencoder for Learned Image Reconstruction [5.487951901731039]
We develop a novel framework that approximates the posterior distribution of the unknown image at each query observation.
It handles implicit noise models and priors, incorporates the data formation process (i.e., the forward operator), and learns reconstructive properties that are transferable between different datasets.
arXiv Detail & Related papers (2021-10-22T10:02:48Z)
- Deep Equilibrium Architectures for Inverse Problems in Imaging [14.945209750917483]
Recent efforts on solving inverse problems in imaging via deep neural networks use architectures inspired by a fixed number of iterations of an optimization method.
This paper describes an alternative approach corresponding to an infinite number of iterations, yielding up to a 4 dB PSNR improvement in reconstruction accuracy.
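Under the usual reading of deep equilibrium models (the iteration map and solver in the paper may differ), the contrast can be sketched as follows, where f_theta is a learned update and y the measurements:

```latex
% Illustrative sketch; the paper's iteration map and solver may differ.
% Unrolled architecture: K explicit iterations of a learned update f_theta.
x_{k+1} = f_\theta(x_k, y), \qquad k = 0, \dots, K-1

% Deep equilibrium architecture: the output is the fixed point of the same map,
% i.e. the limit of infinitely many iterations.
x^{\star} = f_\theta(x^{\star}, y)
```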
arXiv Detail & Related papers (2021-02-16T03:49:58Z)
- Blind Image Restoration with Flow Based Priors [19.190289348734215]
In a blind setting with unknown degradations, a good prior remains crucial.
We propose using normalizing flows to model the distribution of the target content and to use this as a prior in a maximum a posteriori (MAP) formulation.
To the best of our knowledge, this is the first work that explores normalizing flows as a prior in image enhancement problems.
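In such a MAP formulation, a hedged sketch (notation is illustrative; the degradation model and optimizer in the paper may differ) is that the flow f supplies an exact log-density through the change-of-variables formula, which then acts as the prior term:

```latex
% Illustrative sketch; the exact degradation model and solver may differ.
% The normalizing flow f maps an image x to a latent u with simple base density p_U.
\log p_X(x) = \log p_U\big(f(x)\big) + \log \left| \det \frac{\partial f}{\partial x}(x) \right|

% MAP restoration: data fidelity plus the flow prior.
\hat{x} = \arg\max_{x} \; \log p(y \mid x) + \log p_X(x)
```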
arXiv Detail & Related papers (2020-09-09T21:40:11Z)
- Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method, aiming to combine the advantages of both model-based and learning-based approaches.
Experiments on two typical blind image restoration tasks, namely image denoising and super-resolution, demonstrate that the proposed method achieves superior performance over current state-of-the-art methods.
arXiv Detail & Related papers (2020-08-25T03:30:53Z)
- A Flexible Framework for Designing Trainable Priors with Adaptive Smoothing and Game Encoding [57.1077544780653]
We introduce a general framework for designing and training neural network layers whose forward passes can be interpreted as solving non-smooth convex optimization problems.
We focus on convex games, solved by local agents represented by the nodes of a graph and interacting through regularization functions.
This approach is appealing for solving imaging problems, as it allows the use of classical image priors within deep models that are trainable end to end.
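As a minimal, generic example of a layer whose forward pass solves a convex problem (not the paper's specific game-theoretic construction), the output can be defined as the solution of a proximal problem built from the layer input:

```latex
% Generic illustration only; the paper's convex-game formulation is richer.
% The layer maps an input x to the minimizer of a possibly non-smooth convex objective:
% a quadratic coupling to a learned linear transform W x plus a penalty psi.
z(x) = \arg\min_{u} \; \tfrac{1}{2}\,\|u - W x\|_2^2 + \psi(u) = \mathrm{prox}_{\psi}(W x)
```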
arXiv Detail & Related papers (2020-06-26T08:34:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.