Provably Convergent Algorithms for Solving Inverse Problems Using
Generative Models
- URL: http://arxiv.org/abs/2105.06371v1
- Date: Thu, 13 May 2021 15:58:27 GMT
- Title: Provably Convergent Algorithms for Solving Inverse Problems Using
Generative Models
- Authors: Viraj Shah, Rakib Hyder, M. Salman Asif, Chinmay Hegde
- Abstract summary: We study the algorithmic aspects of using generative models as priors in inverse problems, from a theoretical perspective.
We support our claims with experimental results for solving various inverse problems.
We also propose an extension of our approach that can handle model mismatch (i.e., situations where the generative prior is not exactly applicable).
- Score: 47.208080968675574
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The traditional approach of hand-crafting priors (such as sparsity) for
solving inverse problems is slowly being replaced by the use of richer learned
priors (such as those modeled by deep generative networks). In this work, we
study the algorithmic aspects of such a learning-based approach from a
theoretical perspective. For certain generative network architectures, we
establish a simple non-convex algorithmic approach that (a) theoretically
enjoys linear convergence guarantees for certain linear and nonlinear inverse
problems, and (b) empirically improves upon conventional techniques such as
back-propagation. We support our claims with experimental results for
solving various inverse problems. We also propose an extension of our approach
that can handle model mismatch (i.e., situations where the generative network
prior is not exactly applicable). Together, our contributions serve as building
blocks towards a principled use of generative models in inverse problems with
more complete algorithmic understanding.
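The non-convex approach the abstract alludes to can be read as a projected gradient method: a gradient step on the measurement loss, followed by a projection onto the range of the generative model. The following is a minimal sketch under simplifying assumptions — a toy *linear* "generator" (a matrix) stands in for a deep network, and all dimensions and step sizes are illustrative rather than the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 50, 30, 5                            # signal, measurement, latent dims
G = rng.standard_normal((n, k))                # toy linear "generator": x = G @ z
A = rng.standard_normal((m, n)) / np.sqrt(m)   # linear measurement operator

x_true = G @ rng.standard_normal(k)            # true signal in the generator's range
y = A @ x_true                                 # noiseless measurements y = A x*

x = np.zeros(n)
eta = 0.3
for _ in range(1000):
    x = x - eta * A.T @ (A @ x - y)            # gradient step on ||A x - y||^2
    z, *_ = np.linalg.lstsq(G, x, rcond=None)  # projection onto range(G)
    x = G @ z

print(np.linalg.norm(x - x_true))              # reconstruction error
```

Although m < n (the problem is underdetermined), restricting iterates to the low-dimensional range of the generator makes the iteration contract, which is the intuition behind the linear convergence guarantees claimed above.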
Related papers
- Sparse Bayesian Generative Modeling for Compressive Sensing [8.666730973498625]
This work addresses the fundamental linear inverse problem in compressive sensing (CS) by introducing a new type of regularizing generative prior.
We support our approach theoretically through the concept of variational inference and validate it empirically using different types of compressible signals.
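For context, the linear inverse problem underlying compressive sensing can be illustrated with the classical hand-crafted sparsity prior via ISTA (iterative soft-thresholding); this is only the baseline these generative-prior papers build on, not the Bayesian method of the entry above, and all dimensions and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, s = 100, 40, 4                           # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true                                 # compressive measurements

lam, eta = 0.01, 0.1
x = np.zeros(n)
for _ in range(2000):
    x = x - eta * A.T @ (A @ x - y)                          # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - eta * lam, 0.0)  # soft-threshold

print(np.linalg.norm(x - x_true))              # near-exact recovery from m < n samples
```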
arXiv Detail & Related papers (2024-11-14T14:37:47Z)
- Improving Diffusion Models for Inverse Problems Using Optimal Posterior Covariance [52.093434664236014]
Recent diffusion models provide a promising zero-shot solution to noisy linear inverse problems without retraining for specific inverse problems.
We propose to improve recent methods by using a more principled posterior covariance determined by maximum likelihood estimation.
arXiv Detail & Related papers (2024-02-03T13:35:39Z)
- Solving Linear Inverse Problems Provably via Posterior Sampling with Latent Diffusion Models [98.95988351420334]
We present the first framework to solve linear inverse problems leveraging pre-trained latent diffusion models.
We theoretically analyze our algorithm showing provable sample recovery in a linear model setting.
arXiv Detail & Related papers (2023-07-02T17:21:30Z)
- Transformer Meets Boundary Value Inverse Problems [4.165221477234755]
A Transformer-based deep direct sampling method is proposed for solving a class of boundary value inverse problems.
A real-time reconstruction is achieved by evaluating the learned inverse operator between carefully designed data and reconstructed images.
arXiv Detail & Related papers (2022-09-29T17:45:25Z)
- JPEG Artifact Correction using Denoising Diffusion Restoration Models [110.1244240726802]
We build upon Denoising Diffusion Restoration Models (DDRM) and propose a method for solving some non-linear inverse problems.
We leverage the pseudo-inverse operator used in DDRM and generalize this concept for other measurement operators.
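The pseudo-inverse concept mentioned above can be illustrated in isolation: for a linear measurement operator A, the Moore-Penrose pseudo-inverse maps measurements back to the minimum-norm consistent signal. This is only a minimal sketch of that building block (DDRM itself operates in the SVD basis of A together with a diffusion prior, which is not reproduced here); dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 20, 50
A = rng.standard_normal((m, n))        # underdetermined measurement operator
y = A @ rng.standard_normal(n)         # observed measurements

x_dag = np.linalg.pinv(A) @ y          # pseudo-inverse (computed via the SVD of A)

# x_dag reproduces the measurements exactly: it is the minimum-norm
# signal consistent with y, even though A has a nontrivial null space.
print(np.linalg.norm(A @ x_dag - y))
```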
arXiv Detail & Related papers (2022-09-23T23:47:00Z)
- Regularized Training of Intermediate Layers for Generative Models for Inverse Problems [9.577509224534323]
We introduce a principle that if a generative model is intended for inversion using an algorithm based on optimization of intermediate layers, it should be trained in a way that regularizes those intermediate layers.
We instantiate this principle for two notable recent inversion algorithms: Intermediate Layer Optimization and the Multi-Code GAN prior.
For both of these inversion algorithms, we introduce a new regularized GAN training algorithm and demonstrate that the learned generative model results in lower reconstruction errors across a wide range of undersampling ratios.
arXiv Detail & Related papers (2022-03-08T20:30:49Z)
- Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded based on the 'complexity' of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z)
- Joint Network Topology Inference via Structured Fusion Regularization [70.30364652829164]
Joint network topology inference represents a canonical problem of learning multiple graph Laplacian matrices from heterogeneous graph signals.
We propose a general graph estimator based on a novel structured fusion regularization.
We show that the proposed graph estimator enjoys both high computational efficiency and rigorous theoretical guarantees.
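The object being estimated here is a graph Laplacian. As a small self-contained reminder of what that is (a hand-made example, not the paper's estimator), L = D - W for a weighted adjacency matrix W, and it is symmetric, has zero row sums, and is positive semidefinite:

```python
import numpy as np

# Weighted adjacency matrix of a 3-node path graph (illustrative values).
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 2.0],
              [0.0, 2.0, 0.0]])
L = np.diag(W.sum(axis=1)) - W        # graph Laplacian L = D - W

# Defining properties: symmetric, rows sum to zero, eigenvalues >= 0.
print(np.allclose(L, L.T))
print(np.allclose(L.sum(axis=1), 0.0))
print(np.linalg.eigvalsh(L).min() >= -1e-12)
```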
arXiv Detail & Related papers (2021-03-05T04:42:32Z)
- Model-Aware Regularization For Learning Approaches To Inverse Problems [11.314492463814817]
We provide an analysis of the generalisation error of deep learning methods applicable to inverse problems.
We propose a 'plug-and-play' regulariser that leverages the knowledge of the forward map to improve the generalization of the network.
We demonstrate the efficacy of our model-aware regularised deep learning algorithms against other state-of-the-art approaches.
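One way to read "leveraging the knowledge of the forward map" is as a consistency penalty added to the training loss. The following is a hypothetical sketch of such a penalty, not the paper's actual regulariser; the variable names, weights, and dimensions are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 15, 30
A = rng.standard_normal((m, n)) / np.sqrt(m)   # known forward map
x = rng.standard_normal(n)                     # ground-truth signal
y = A @ x                                      # its measurements
x_hat = x + 0.1 * rng.standard_normal(n)       # a network's (imperfect) output
mu = 0.5                                       # penalty weight (illustrative)

data_fit = np.sum((x_hat - x) ** 2)            # supervised reconstruction term
consistency = np.sum((A @ x_hat - y) ** 2)     # forward-map consistency term
loss = data_fit + mu * consistency
print(loss)
```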
arXiv Detail & Related papers (2020-06-18T21:59:03Z)
- Regularization of Inverse Problems by Neural Networks [0.0]
Inverse problems arise in a variety of imaging applications including computed tomography, non-destructive testing, and remote sensing.
The characteristic features of inverse problems are the non-uniqueness and instability of their solutions.
Deep learning techniques and neural networks have been demonstrated to significantly outperform classical solution methods for inverse problems.
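The classical baseline against which these learned methods are measured is Tikhonov regularization: adding a quadratic penalty makes an unstable, underdetermined problem well-posed. A minimal sketch, with illustrative dimensions and regularization weight:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 40, 60                                          # underdetermined: m < n
A = rng.standard_normal((m, n)) / np.sqrt(m)           # forward operator
y = A @ rng.standard_normal(n) + 0.01 * rng.standard_normal(m)  # noisy data

lam = 0.1
# Tikhonov solution: argmin ||A x - y||^2 + lam ||x||^2.  The regularized
# normal equations (A^T A + lam I) x = A^T y are always well-posed even
# though A itself has a nontrivial null space.
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

print(x_hat.shape, np.isfinite(x_hat).all())
```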
arXiv Detail & Related papers (2020-06-06T20:49:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.