NF-ULA: Langevin Monte Carlo with Normalizing Flow Prior for Imaging
Inverse Problems
- URL: http://arxiv.org/abs/2304.08342v2
- Date: Sat, 14 Oct 2023 16:46:08 GMT
- Title: NF-ULA: Langevin Monte Carlo with Normalizing Flow Prior for Imaging
Inverse Problems
- Authors: Ziruo Cai, Junqi Tang, Subhadip Mukherjee, Jinglai Li, Carola-Bibiane
Schönlieb, Xiaoqun Zhang
- Abstract summary: We introduce NF-ULA (Normalizing Flow-based Unadjusted Langevin algorithm), which involves learning a normalizing flow (NF) as the image prior.
NF-ULA is found to perform better than competing methods for severely ill-posed inverse problems.
- Score: 7.38079566297881
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian methods for solving inverse problems are a powerful alternative to
classical methods since the Bayesian approach offers the ability to quantify
the uncertainty in the solution. In recent years, data-driven techniques for
solving inverse problems have also been remarkably successful, due to their
superior representation ability. In this work, we incorporate data-based models
into a class of Langevin-based sampling algorithms for Bayesian inference in
imaging inverse problems. In particular, we introduce NF-ULA (Normalizing
Flow-based Unadjusted Langevin algorithm), which involves learning a
normalizing flow (NF) as the image prior. We use an NF to learn the prior
because its tractable closed-form log-density can be differentiated using
autograd libraries. Our algorithm only requires a
normalizing flow-based generative network, which can be pre-trained
independently of the considered inverse problem and the forward operator. We
perform theoretical analysis by investigating the well-posedness and
non-asymptotic convergence of the resulting NF-ULA algorithm. The efficacy of
the proposed NF-ULA algorithm is demonstrated in various image restoration
problems such as image deblurring, image inpainting, and limited-angle X-ray
computed tomography (CT) reconstruction. NF-ULA is found to perform better than
competing methods for severely ill-posed inverse problems.
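The sampling scheme described in the abstract can be sketched as follows: an unadjusted Langevin update whose drift combines the likelihood gradient with the gradient of a flow-based log-prior. This is a minimal NumPy sketch, not the paper's implementation: a toy affine flow (Gaussian base, invertible linear map) stands in for a pre-trained deep normalizing flow, and a linear-Gaussian forward model stands in for the imaging operator; all names and settings are illustrative.

```python
# Minimal sketch of ULA with a flow-based prior (toy stand-ins, not NF-ULA itself).
import numpy as np

rng = np.random.default_rng(0)
d = 4

# Toy "normalizing flow": x = A z + b with z ~ N(0, I), so
# log p(x) = log N(A^{-1}(x - b); 0, I) - log|det A|.
A = np.eye(d) + 0.1 * rng.standard_normal((d, d))
b = rng.standard_normal(d)
A_inv = np.linalg.inv(A)

def grad_log_prior(x):
    # d/dx log p(x) = -A^{-T} A^{-1} (x - b); a deep flow would use autograd here.
    z = A_inv @ (x - b)
    return -A_inv.T @ z

# Linear forward model y = H x + noise, Gaussian likelihood.
H = rng.standard_normal((3, d))
sigma = 0.3
x_true = A @ rng.standard_normal(d) + b
y = H @ x_true + sigma * rng.standard_normal(3)

def grad_log_lik(x):
    return H.T @ (y - H @ x) / sigma**2

# ULA: x_{k+1} = x_k + delta * grad log pi(x_k) + sqrt(2 delta) * xi_k.
delta = 1e-3
x = np.zeros(d)
samples = []
for k in range(5000):
    grad = grad_log_lik(x) + grad_log_prior(x)
    x = x + delta * grad + np.sqrt(2 * delta) * rng.standard_normal(d)
    if k >= 1000:  # discard burn-in
        samples.append(x.copy())

posterior_mean = np.mean(samples, axis=0)
```

Because the whole chain is retained after burn-in, pixel-wise posterior statistics (mean, variance) come for free, which is the uncertainty-quantification advantage the abstract attributes to the Bayesian approach.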
Related papers
- Can Diffusion Models Provide Rigorous Uncertainty Quantification for Bayesian Inverse Problems? [0.0]
In this work, we introduce a new framework for diffusion-model-based posterior sampling, called BIPSDA.
The framework unifies several recently proposed diffusion model based posterior sampling algorithms and contains novel algorithms that can be realized through flexible combinations of design choices.
The results demonstrate that BIPSDA algorithms can provide strong performance on the image inpainting and x-ray tomography based problems.
arXiv Detail & Related papers (2025-03-04T21:07:15Z)
- MAP-based Problem-Agnostic diffusion model for Inverse Problems [8.161067848524976]
We propose a problem-agnostic diffusion model called the maximum a posteriori (MAP)-based guided term estimation method for inverse problems.
This innovation allows us to better capture the intrinsic properties of the data, leading to improved performance.
arXiv Detail & Related papers (2025-01-25T08:30:15Z)
- Solving Inverse Problems via Diffusion Optimal Control [3.0079490585515343]
We derive a diffusion-based optimal controller inspired by the iterative Linear Quadratic Regulator (iLQR) algorithm.
We show that the idealized posterior sampling equation can be recovered as a special case of our algorithm.
We then evaluate our method against a selection of neural inverse problem solvers, and establish a new baseline in image reconstruction with inverse problems.
arXiv Detail & Related papers (2024-12-21T19:47:06Z)
- Unfolded proximal neural networks for robust image Gaussian denoising [7.018591019975253]
We propose a unified framework to build PNNs for the Gaussian denoising task, based on both the dual-FB and the primal-dual Chambolle-Pock algorithms.
We also show that accelerated versions of these algorithms enable skip connections in the associated NN layers.
arXiv Detail & Related papers (2023-08-06T15:32:16Z)
- Solving Linear Inverse Problems Provably via Posterior Sampling with Latent Diffusion Models [98.95988351420334]
We present the first framework to solve linear inverse problems leveraging pre-trained latent diffusion models.
We theoretically analyze our algorithm showing provable sample recovery in a linear model setting.
arXiv Detail & Related papers (2023-07-02T17:21:30Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- Poisson-Gaussian Holographic Phase Retrieval with Score-based Image Prior [19.231581775644617]
We propose a new algorithm called "AWFS" that uses the accelerated Wirtinger flow (AWF) with a score function as generative prior.
We calculate the gradient of the log-likelihood function for PR and determine the Lipschitz constant.
We provide theoretical analysis that establishes a critical-point convergence guarantee for the proposed algorithm.
arXiv Detail & Related papers (2023-05-12T18:08:47Z)
- Optimal Algorithms for the Inhomogeneous Spiked Wigner Model [89.1371983413931]
We derive an approximate message-passing algorithm (AMP) for the inhomogeneous problem.
We identify in particular the existence of a statistical-to-computational gap where known algorithms require a signal-to-noise ratio bigger than the information-theoretic threshold to perform better than random.
arXiv Detail & Related papers (2023-02-13T19:57:17Z)
- Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach by learning discriminative shrinkage functions to implicitly model these terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
arXiv Detail & Related papers (2021-11-27T12:12:57Z)
- On Measuring and Controlling the Spectral Bias of the Deep Image Prior [63.88575598930554]
The deep image prior has demonstrated the remarkable ability that untrained networks can address inverse imaging problems.
It requires an oracle to determine when to stop the optimization as the performance degrades after reaching a peak.
We study the deep image prior from a spectral bias perspective to address these problems.
arXiv Detail & Related papers (2021-07-02T15:10:42Z)
- Learned Block Iterative Shrinkage Thresholding Algorithm for Photothermal Super Resolution Imaging [52.42007686600479]
We propose a learned block-sparse optimization approach using an iterative algorithm unfolded into a deep neural network.
We show the benefits of using a learned block iterative shrinkage thresholding algorithm that is able to learn the choice of regularization parameters.
arXiv Detail & Related papers (2020-12-07T09:27:16Z)
- Denoising Score-Matching for Uncertainty Quantification in Inverse Problems [1.521936393554569]
We propose a generic Bayesian framework for solving inverse problems, in which we limit the use of deep neural networks to learning a prior distribution on the signals to recover.
We apply this framework to Magnetic Resonance Image (MRI) reconstruction and illustrate how this approach can also be used to assess the uncertainty on particular features of a reconstructed image.
arXiv Detail & Related papers (2020-11-16T18:33:06Z)
- Blind Image Restoration with Flow Based Priors [19.190289348734215]
In a blind setting with unknown degradations, a good prior remains crucial.
We propose using normalizing flows to model the distribution of the target content and to use this as a prior in a maximum a posteriori (MAP) formulation.
To the best of our knowledge, this is the first work that explores normalizing flows as prior in image enhancement problems.
arXiv Detail & Related papers (2020-09-09T21:40:11Z)
- Learned convex regularizers for inverse problems [3.294199808987679]
We propose to learn a data-adaptive input-convex neural network (ICNN) as a regularizer for inverse problems.
We prove the existence of a sub-gradient-based algorithm that leads to a monotonically decreasing error in the parameter space with iterations.
We show that the proposed convex regularizer is at least competitive with and sometimes superior to state-of-the-art data-driven techniques for inverse problems.
arXiv Detail & Related papers (2020-08-06T18:58:35Z)
- Solving Inverse Problems with a Flow-based Noise Model [100.18560761392692]
We study image inverse problems with a normalizing flow prior.
Our formulation views the solution as the maximum a posteriori estimate of the image conditioned on the measurements.
We empirically validate the efficacy of our method on various inverse problems, including compressed sensing with quantized measurements and denoising with highly structured noise patterns.
arXiv Detail & Related papers (2020-03-18T08:33:49Z)
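The last entry frames the solution as a maximum a posteriori (MAP) estimate under a flow prior rather than a posterior sample. As a hedged sketch of that idea (not the paper's method), gradient ascent on log p(y|x) + log p_flow(x) with the same toy affine flow converges to the MAP point; because the toy model is linear-Gaussian, the MAP estimate also has a closed form we can check against. All names and settings below are illustrative.

```python
# Sketch of MAP estimation with a (toy) flow-based prior via gradient ascent.
import numpy as np

rng = np.random.default_rng(1)
d, m = 4, 3

# Toy affine flow x = A z + b, z ~ N(0, I); prior precision P = A^{-T} A^{-1}.
A = np.eye(d) + 0.1 * rng.standard_normal((d, d))
b = rng.standard_normal(d)
P = np.linalg.inv(A @ A.T)

# Linear forward model with Gaussian noise.
H = rng.standard_normal((m, d))
sigma = 0.3
y = H @ (A @ rng.standard_normal(d) + b) + sigma * rng.standard_normal(m)

def grad_log_post(x):
    # Likelihood gradient plus prior gradient of the log posterior.
    return H.T @ (y - H @ x) / sigma**2 - P @ (x - b)

x = np.zeros(d)
step = 1e-3
for _ in range(50000):
    x = x + step * grad_log_post(x)

# Closed-form MAP for this linear-Gaussian toy, for comparison:
# (H^T H / sigma^2 + P) x = H^T y / sigma^2 + P b.
x_map = np.linalg.solve(H.T @ H / sigma**2 + P, H.T @ y / sigma**2 + P @ b)
```

The contrast with the Langevin approach above is that MAP returns a single point estimate and discards the posterior spread, which is exactly the uncertainty information sampling-based methods such as NF-ULA retain.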
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.