Solving Inverse Problems via Diffusion-Based Priors: An Approximation-Free Ensemble Sampling Approach
- URL: http://arxiv.org/abs/2506.03979v2
- Date: Thu, 05 Jun 2025 04:27:46 GMT
- Title: Solving Inverse Problems via Diffusion-Based Priors: An Approximation-Free Ensemble Sampling Approach
- Authors: Haoxuan Chen, Yinuo Ren, Martin Renqiang Min, Lexing Ying, Zachary Izzo
- Abstract summary: Current DM-based posterior sampling methods rely on approximations to the generative process. We propose an ensemble-based algorithm that performs posterior sampling without the use of approximations. Our algorithm is motivated by existing works that combine DM-based methods with the sequential Monte Carlo method.
- Score: 19.860268382547357
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Diffusion models (DMs) have proven to be effective in modeling high-dimensional distributions, leading to their widespread adoption for representing complex priors in Bayesian inverse problems (BIPs). However, current DM-based posterior sampling methods proposed for solving common BIPs rely on heuristic approximations to the generative process. To exploit the generative capability of DMs and avoid the use of such approximations, we propose an ensemble-based algorithm that performs posterior sampling without heuristic approximations. Our algorithm is motivated by existing works that combine DM-based methods with the sequential Monte Carlo (SMC) method. By examining how the prior evolves through the diffusion process encoded by the pre-trained score function, we derive a modified partial differential equation (PDE) governing the evolution of the corresponding posterior distribution. This PDE includes a modified diffusion term and a reweighting term, which can be simulated via stochastic weighted particle methods. Theoretically, we prove that the error between the distribution of our samples and the true posterior distribution can be bounded in terms of the training error of the pre-trained score function and the number of particles in the ensemble. Empirically, we validate our algorithm on several inverse problems in imaging to show that our method gives more accurate reconstructions compared to existing DM-based methods.
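The weighted-particle idea behind the abstract can be illustrated with a toy sequential Monte Carlo sampler. This is a sketch under strong simplifying assumptions, not the paper's algorithm: the diffusion-model prior is replaced by a standard Gaussian (whose score is analytic), the likelihood is linear-Gaussian so the exact posterior is available for comparison, and the reweighting term is realized as simple likelihood tempering with resampling and Metropolis moves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: prior N(0, I), linear-Gaussian likelihood y = A x + noise.
A = np.eye(2)
y = np.array([1.0, -0.5])
sigma_y2 = 0.5  # observation noise variance

def log_lik(x):
    # Log-likelihood of each particle in x (shape (N, 2)), up to a constant.
    r = x @ A.T - y
    return -0.5 * np.sum(r**2, axis=1) / sigma_y2

N, K = 4000, 50
x = rng.standard_normal((N, 2))   # ensemble initialized from the prior
lam = np.linspace(0.0, 1.0, K + 1)  # likelihood tempering schedule

for k in range(1, K + 1):
    # Reweighting step: incremental likelihood factor.
    logw = (lam[k] - lam[k - 1]) * log_lik(x)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Resample the ensemble according to the weights.
    x = x[rng.choice(N, size=N, p=w)]
    # Move step: random-walk Metropolis targeting pi_k ∝ N(0, I) * lik^lam_k.
    for _ in range(2):
        prop = x + 0.5 * rng.standard_normal((N, 2))
        logp = lambda z: -0.5 * np.sum(z**2, axis=1) + lam[k] * log_lik(z)
        accept = np.log(rng.uniform(size=N)) < logp(prop) - logp(x)
        x[accept] = prop[accept]

post_mean = x.mean(axis=0)
# Analytic posterior mean: (I + A^T A / sigma_y2)^{-1} A^T y / sigma_y2.
exact = np.linalg.solve(np.eye(2) + A.T @ A / sigma_y2, A.T @ y / sigma_y2)
```

In the paper's setting, the analytic Gaussian score would be a pre-trained score network, and the tempering/reweighting structure would instead come from the modified PDE; the ensemble-average comparison against the exact posterior is only possible here because the toy model is conjugate.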
Related papers
- EquiReg: Equivariance Regularized Diffusion for Inverse Problems [67.01847869495558]
We propose EquiReg diffusion, a framework for regularizing posterior sampling in diffusion-based inverse problem solvers. When applied to a variety of solvers, EquiReg outperforms state-of-the-art diffusion models in both linear and nonlinear image restoration tasks.
arXiv Detail & Related papers (2025-05-29T01:25:43Z)
- Geophysical inverse problems with measurement-guided diffusion models [0.4532517021515834]
I consider two sampling algorithms recently proposed under the names of Diffusion Posterior Sampling (DPS) and Pseudo-inverse Guided Diffusion Model (PGDM). In DPS, the guidance term is obtained by applying the adjoint of the modeling operator to the residual obtained from a one-step denoising estimate of the solution. On the other hand, PGDM utilizes a pseudo-inverse operator that originates from the fact that the one-step denoised solution is not assumed to be deterministic.
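The DPS guidance term described above (adjoint of the modeling operator applied to the residual of a one-step denoised estimate) can be sketched as follows. This is an illustrative simplification: `dps_guidance` is a hypothetical helper, the score is passed in directly (in practice it comes from a pretrained network), and the Jacobian of the denoiser is dropped, leaving only the plain adjoint `A.T`.

```python
import numpy as np

def dps_guidance(x_t, score, alpha_t, sigma_t, A, y, zeta=1.0):
    """Simplified DPS-style guidance term for a linear operator A.

    x_t      : current noisy iterate of the reverse diffusion
    score    : s(x_t, t), assumed given here (a score network in practice)
    alpha_t, sigma_t : forward-process coefficients, x_t = alpha_t x_0 + sigma_t eps
    zeta     : guidance step size
    """
    # Tweedie one-step denoising estimate of x_0 from x_t.
    x0_hat = (x_t + sigma_t**2 * score) / alpha_t
    # Residual in observation space, pulled back by the adjoint A^T.
    return -zeta * A.T @ (A @ x0_hat - y)
```

In a full sampler, this term would be added to the unconditional reverse-diffusion update at every step; PGDM would replace the adjoint `A.T` with a pseudo-inverse-based operator.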
arXiv Detail & Related papers (2025-01-08T23:33:50Z)
- Enhancing Diffusion Models for Inverse Problems with Covariance-Aware Posterior Sampling [3.866047645663101]
In computer vision, for example, tasks such as inpainting, deblurring, and super-resolution can be effectively modeled as inverse problems. DDPMs are shown to provide a promising solution to noisy linear inverse problems without the need for additional task-specific training.
arXiv Detail & Related papers (2024-12-28T06:17:44Z)
- Score-Based Variational Inference for Inverse Problems [19.848238197979157]
In applications where the posterior mean is preferred, one has to generate multiple samples from the posterior, which is time-consuming.
We establish a framework termed reverse mean propagation (RMP) that targets the posterior mean directly.
We develop an algorithm that optimizes the reverse KL divergence via natural gradient descent using score functions and propagates the mean at each reverse step.
arXiv Detail & Related papers (2024-10-08T02:55:16Z)
- Think Twice Before You Act: Improving Inverse Problem Solving With MCMC [40.5682961122897]
We propose Diffusion Posterior MCMC (DPMC) to solve inverse problems with pretrained diffusion models.
Our algorithm outperforms DPS with fewer evaluations across nearly all tasks, and is competitive with existing approaches.
arXiv Detail & Related papers (2024-09-13T06:10:54Z)
- Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z)
- Amortized Posterior Sampling with Diffusion Prior Distillation [55.03585818289934]
We propose a variational inference approach to sample from the posterior distribution for solving inverse problems.
We show that our method is applicable to standard signals in Euclidean space, as well as signals on manifolds.
arXiv Detail & Related papers (2024-07-25T09:53:12Z)
- Divide-and-Conquer Posterior Sampling for Denoising Diffusion Priors [21.0128625037708]
We present an innovative framework, divide-and-conquer posterior sampling.
It reduces the approximation error associated with current techniques without the need for retraining.
We demonstrate the versatility and effectiveness of our approach for a wide range of Bayesian inverse problems.
arXiv Detail & Related papers (2024-03-18T01:47:24Z)
- Improving Diffusion Models for Inverse Problems Using Optimal Posterior Covariance [52.093434664236014]
Recent diffusion models provide a promising zero-shot solution to noisy linear inverse problems without retraining for specific inverse problems.
Inspired by this finding, we propose to improve recent methods by using more principled covariance determined by maximum likelihood estimation.
arXiv Detail & Related papers (2024-02-03T13:35:39Z)
- A Variational Perspective on Solving Inverse Problems with Diffusion Models [101.831766524264]
Inverse tasks can be formulated as inferring a posterior distribution over data.
This is however challenging in diffusion models since the nonlinear and iterative nature of the diffusion process renders the posterior intractable.
We propose a variational approach that by design seeks to approximate the true posterior distribution.
arXiv Detail & Related papers (2023-05-07T23:00:47Z)
- Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
arXiv Detail & Related papers (2022-11-30T18:59:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.