Improving Diffusion Models for Inverse Problems Using Optimal Posterior Covariance
- URL: http://arxiv.org/abs/2402.02149v2
- Date: Sun, 2 Jun 2024 10:39:45 GMT
- Title: Improving Diffusion Models for Inverse Problems Using Optimal Posterior Covariance
- Authors: Xinyu Peng, Ziyang Zheng, Wenrui Dai, Nuoqian Xiao, Chenglin Li, Junni Zou, Hongkai Xiong
- Abstract summary: Recent diffusion models provide a promising zero-shot solution to noisy linear inverse problems without retraining for specific inverse problems.
Inspired by the finding that these methods share a Gaussian approximation with hand-crafted isotropic covariance, we propose to improve them by using a more principled covariance determined by maximum likelihood estimation.
- Score: 52.093434664236014
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent diffusion models provide a promising zero-shot solution to noisy linear inverse problems without retraining for specific inverse problems. In this paper, we reveal that recent methods can be uniformly interpreted as employing a Gaussian approximation with hand-crafted isotropic covariance for the intractable denoising posterior to approximate the conditional posterior mean. Inspired by this finding, we propose to improve recent methods by using more principled covariance determined by maximum likelihood estimation. To achieve posterior covariance optimization without retraining, we provide general plug-and-play solutions based on two approaches specifically designed for leveraging pre-trained models with and without reverse covariance. We further propose a scalable method for learning posterior covariance prediction based on a representation in an orthonormal basis. Experimental results demonstrate that the proposed methods significantly enhance reconstruction performance without requiring hyperparameter tuning.
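To make the unifying view concrete, the following is a minimal PyTorch-style sketch (not the authors' code) of Gaussian-approximation guidance for a noisy linear problem y = A x0 + n. The helper names (guided_score, denoiser) are hypothetical; the isotropic covariance c_t * I stands in for the hand-crafted choices the abstract refers to, whereas the paper proposes determining the covariance by maximum likelihood.

```python
# Minimal sketch (assumptions, not the authors' method): likelihood guidance for
# y = A x0 + n, n ~ N(0, sigma_y^2 I), under a Gaussian posterior approximation
# p(x0 | xt) ~ N(x0_hat(xt), c_t I).
import torch

def guided_score(xt, y, A, sigma_y, denoiser, c_t):
    """denoiser is a hypothetical stand-in for the pre-trained model's
    posterior-mean estimate x0_hat = E[x0 | xt] (Tweedie's formula)."""
    xt = xt.detach().requires_grad_(True)
    x0_hat = denoiser(xt)                       # E[x0 | xt]
    # Induced likelihood: p(y | xt) ~ N(A x0_hat, c_t A A^T + sigma_y^2 I)
    cov = c_t * (A @ A.T) + sigma_y**2 * torch.eye(A.shape[0])
    resid = y - A @ x0_hat
    log_lik = -0.5 * resid @ torch.linalg.solve(cov, resid)
    # This gradient is added to the unconditional score during sampling.
    return torch.autograd.grad(log_lik, xt)[0]
```

Note the dense m-by-m solve; per the abstract, part of the paper's contribution is making the covariance handling scalable, e.g. via an orthonormal-basis representation, rather than the naive computation sketched here.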
Related papers
- Score-Based Variational Inference for Inverse Problems [19.848238197979157]
In applications where the posterior mean is preferred, multiple samples must be drawn from the posterior, which is time-consuming.
We establish a framework termed reverse mean propagation (RMP) that targets the posterior mean directly.
We develop an algorithm that optimizes the reverse KL divergence with natural gradient descent using score functions and propagates the mean at each reverse step.
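For orientation, a textbook Gaussian natural-gradient step on the reverse KL (generic variational inference, not necessarily RMP's exact update) moves the mean along the expected score: for q = N(mu, Sigma), the Fisher metric for the mean is Sigma^{-1}, giving

```latex
% Standard natural-gradient step on the reverse KL for q = N(\mu, \Sigma)
% (textbook Gaussian VI; the RMP paper's exact update may differ).
\mu \;\leftarrow\; \mu + \eta\, \Sigma\, \mathbb{E}_{x \sim q}\!\left[ \nabla_x \log p(x \mid y) \right]
```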
arXiv Detail & Related papers (2024-10-08T02:55:16Z) - Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z) - Amortized Posterior Sampling with Diffusion Prior Distillation [55.03585818289934]
We propose a variational inference approach to sample from the posterior distribution for solving inverse problems.
We show that our method is applicable to standard signals in Euclidean space, as well as signals on manifolds.
arXiv Detail & Related papers (2024-07-25T09:53:12Z) - Diffusion Prior-Based Amortized Variational Inference for Noisy Inverse Problems [12.482127049881026]
We propose a novel approach to solve inverse problems with a diffusion prior from an amortized variational inference perspective.
Our amortized inference learns a function that directly maps measurements to the implicit posterior distributions of corresponding clean data, enabling single-step posterior sampling even for unseen measurements.
arXiv Detail & Related papers (2024-07-23T02:14:18Z) - Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
arXiv Detail & Related papers (2022-11-30T18:59:27Z) - Diffusion Posterior Sampling for General Noisy Inverse Problems [50.873313752797124]
We extend diffusion solvers to handle noisy (non)linear inverse problems via an approximation of posterior sampling.
Our method demonstrates that diffusion models can incorporate various measurement noise statistics.
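As a point of reference, the DPS likelihood approximation is often summarized as the gradient of the measurement residual evaluated at the denoised estimate x0_hat(xt). A minimal PyTorch-style sketch (forward_op and denoiser are hypothetical stand-ins, not the authors' code):

```python
# Minimal sketch of a DPS-style guidance step (our reading, not the authors' code).
import torch

def dps_guidance(xt, y, forward_op, denoiser, zeta=1.0):
    """Approximate grad_xt log p(y | xt) via the denoised estimate.
    forward_op: hypothetical measurement operator A(.)
    denoiser:   hypothetical pre-trained network giving x0_hat = E[x0 | xt]."""
    xt = xt.detach().requires_grad_(True)
    residual = torch.linalg.norm(y - forward_op(denoiser(xt)))
    grad = torch.autograd.grad(residual, xt)[0]
    return -zeta * grad  # correction added after the unconditional reverse step
```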
arXiv Detail & Related papers (2022-09-29T11:12:27Z) - Improving Diffusion Models for Inverse Problems using Manifold Constraints [55.91148172752894]
We show that current solvers throw the sample path off the data manifold, and hence the error accumulates.
To address this, we propose an additional correction term inspired by the manifold constraint.
We show that our method is superior to the previous methods both theoretically and empirically.
arXiv Detail & Related papers (2022-06-02T09:06:10Z) - Fast Scalable Image Restoration using Total Variation Priors and Expectation Propagation [7.7731951589289565]
This paper presents a scalable approximate Bayesian method for image restoration using total variation (TV) priors.
We use the expectation propagation (EP) framework to approximate minimum mean squared error (MMSE) estimators and marginal (pixel-wise) variances; a generic EP update is sketched after this list.
arXiv Detail & Related papers (2021-10-04T17:28:41Z)
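For context, the textbook EP cycle (generic form; the paper's TV-specific site updates are not reproduced here) refines one approximating site at a time by moment matching:

```latex
% Generic EP update (textbook form, not specific to this paper).
% proj[.] denotes moment matching onto the approximating (e.g. Gaussian) family.
q^{\setminus i}(x) \propto \frac{q(x)}{\tilde{t}_i(x)}, \qquad
q^{\mathrm{new}} = \operatorname{proj}\!\left[ q^{\setminus i}(x)\, t_i(x) \right], \qquad
\tilde{t}_i^{\mathrm{new}}(x) \propto \frac{q^{\mathrm{new}}(x)}{q^{\setminus i}(x)}
```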