Generative Plug and Play: Posterior Sampling for Inverse Problems
- URL: http://arxiv.org/abs/2306.07233v1
- Date: Mon, 12 Jun 2023 16:49:08 GMT
- Title: Generative Plug and Play: Posterior Sampling for Inverse Problems
- Authors: Charles A. Bouman and Gregery T. Buzzard
- Abstract summary: Plug-and-Play (PnP) has become a popular method for reconstructing images using a modular framework consisting of a forward model and a prior model.
We present experimental simulations using the well-known BM3D denoiser.
- Score: 4.417934991211913
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Over the past decade, Plug-and-Play (PnP) has become a popular method for
reconstructing images using a modular framework consisting of a forward and
prior model. The great strength of PnP is that an image denoiser can be used as
a prior model while the forward model can be implemented using more traditional
physics-based approaches. However, a limitation of PnP is that it reconstructs
only a single deterministic image.
In this paper, we introduce Generative Plug-and-Play (GPnP), a generalization
of PnP to sample from the posterior distribution. As with PnP, GPnP has a
modular framework using a physics-based forward model and an image denoising
prior model. However, in GPnP these models are extended to become proximal
generators, which sample from associated distributions. GPnP applies these
proximal generators in alternation to produce samples from the posterior. We
present experimental simulations using the well-known BM3D denoiser. Our
results demonstrate that the GPnP method is robust, easy to implement, and
produces intuitively reasonable samples from the posterior for sparse
interpolation and tomographic reconstruction. Code to accompany this paper is
available at https://github.com/gbuzzard/generative-pnp-allerton .
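As an illustration of the alternating scheme described in the abstract, below is a minimal NumPy sketch of a GPnP-style sampler for a linear Gaussian measurement model y = Ax + w. The helper names, the noise schedule, the initialization, and the placeholder shrinkage denoiser (standing in for BM3D) are hypothetical and chosen only to show the alternation of the two proximal generators; they are not the authors' implementation, which is available at the linked repository.

```python
import numpy as np


def prior_proximal_generator(x, sigma, denoise, rng):
    # Prior step (illustrative): denoise the current iterate, then re-inject
    # noise so the update behaves like a stochastic proximal generator for the
    # denoiser-defined prior rather than a deterministic proximal map.
    x_hat = denoise(x, sigma)
    return x_hat + (sigma / np.sqrt(2.0)) * rng.standard_normal(x.shape)


def forward_proximal_generator(v, y, A, sigma_w, sigma, rng):
    # Forward step (illustrative): draw an exact sample from the Gaussian
    # conditional p(x | y, v) proportional to N(y; A x, sigma_w^2 I) N(x; v, sigma^2 I),
    # using a dense solve (small problems only).
    n = v.size
    H = A.T @ A / sigma_w**2 + np.eye(n) / sigma**2       # posterior precision
    mean = np.linalg.solve(H, A.T @ y / sigma_w**2 + v / sigma**2)
    L = np.linalg.cholesky(H)                             # H = L @ L.T
    return mean + np.linalg.solve(L.T, rng.standard_normal(n))


def gpnp_sample(y, A, sigma_w, sigma, denoise, n_iter=200, seed=0):
    # Alternate the two proximal generators to produce one posterior sample.
    rng = np.random.default_rng(seed)
    x = A.T @ y                                           # crude initialization
    for _ in range(n_iter):
        v = prior_proximal_generator(x, sigma, denoise, rng)
        x = forward_proximal_generator(v, y, A, sigma_w, sigma, rng)
    return x


def toy_denoiser(x, sigma):
    # Placeholder shrinkage "denoiser"; BM3D or a CNN denoiser would go here.
    return 0.9 * x


# Toy usage: sparse interpolation, where A keeps a random subset of entries.
rng = np.random.default_rng(1)
x_true = rng.standard_normal(64)
A = np.eye(64)[rng.choice(64, size=32, replace=False)]
y = A @ x_true + 0.05 * rng.standard_normal(32)
sample = gpnp_sample(y, A, sigma_w=0.05, sigma=0.5, denoise=toy_denoiser)
```

Running the sampler repeatedly with different seeds gives a set of posterior samples whose spread conveys reconstruction uncertainty, which is the point of GPnP as opposed to the single deterministic image produced by standard PnP.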
Related papers
- Plug-and-Play Priors as a Score-Based Method [10.533522753705599]
Plug-and-play (PnP) methods are extensively used for solving inverse problems by integrating physical measurement models with pre-trained deep denoisers as priors.
Score-based diffusion models (SBMs) have recently emerged as a powerful framework for image generation, using deep denoisers to represent the score of the image prior.
arXiv Detail & Related papers (2024-12-15T08:10:39Z)
- Steering Masked Discrete Diffusion Models via Discrete Denoising Posterior Prediction [88.65168366064061]
We introduce Discrete Denoising Posterior Prediction (DDPP), a novel framework that casts the task of steering pre-trained MDMs as a problem of probabilistic inference.
Our framework leads to a family of three novel objectives that are all simulation-free, and thus scalable.
We substantiate our designs via wet-lab validation, where we observe transient expression of reward-optimized protein sequences.
arXiv Detail & Related papers (2024-10-10T17:18:30Z)
- Variational Positive-incentive Noise: How Noise Benefits Models [84.67629229767047]
We investigate how classical models can benefit from random noise under the framework of Positive-incentive Noise (Pi-Noise).
Since the ideal objective of Pi-Noise is intractable, we propose to optimize its variational bound instead, namely variational Pi-Noise (VPN).
arXiv Detail & Related papers (2023-06-13T09:43:32Z)
- Plug-and-Play Deep Energy Model for Inverse problems [18.047694351309204]
We introduce a novel CNN-based energy formulation for Plug-and-Play (PnP) image recovery.
The proposed model offers algorithms with convergence guarantees, even when the learned score model is not a contraction model.
arXiv Detail & Related papers (2023-02-15T09:44:45Z)
- Predictable MDP Abstraction for Unsupervised Model-Based RL [93.91375268580806]
We propose predictable MDP abstraction (PMA).
Instead of training a predictive model on the original MDP, we train a model on a transformed MDP with a learned action space.
We theoretically analyze PMA and empirically demonstrate that PMA leads to significant improvements over prior unsupervised model-based RL approaches.
arXiv Detail & Related papers (2023-02-08T07:37:51Z)
- Non-Gaussian Process Regression [0.0]
We extend the GP framework into a new class of time-changed GPs that allow for straightforward modelling of heavy-tailed non-Gaussian behaviours.
We present Markov chain Monte Carlo inference procedures for this model and demonstrate the potential benefits.
arXiv Detail & Related papers (2022-09-07T13:08:22Z)
- REPNP: Plug-and-Play with Deep Reinforcement Learning Prior for Robust Image Restoration [30.966005373669027]
We propose a novel deep reinforcement learning (DRL) based framework dubbed RePNP.
Results demonstrate that the proposed RePNP is robust to deviations in the observation model used in the PnP scheme.
RePNP achieves better results under model deviation with fewer model parameters.
arXiv Detail & Related papers (2022-07-25T10:56:10Z)
- On Maximum-a-Posteriori estimation with Plug & Play priors and stochastic gradient descent [13.168923974530307]
Methods to solve imaging problems usually combine an explicit data likelihood function with a prior that explicitly models expected properties of the solution.
In a departure from explicit modelling, several recent works have proposed and studied the use of implicit priors defined by an image denoising algorithm.
arXiv Detail & Related papers (2022-01-16T20:50:08Z)
- Non-Gaussian Gaussian Processes for Few-Shot Regression [71.33730039795921]
We propose an invertible ODE-based mapping that operates on each component of the random variable vectors and shares the parameters across all of them.
NGGPs outperform the competing state-of-the-art approaches on a diversified set of benchmarks and applications.
arXiv Detail & Related papers (2021-10-26T10:45:25Z)
- PnP-DETR: Towards Efficient Visual Analysis with Transformers [146.55679348493587]
Recently, DETR pioneered the solution of vision tasks with transformers; it directly translates the image feature map into the object detection result.
The framework also generalizes to the recent transformer-based image recognition model ViT, showing consistent efficiency gains.
arXiv Detail & Related papers (2021-09-15T01:10:30Z)
- Towards a Neural Graphics Pipeline for Controllable Image Generation [96.11791992084551]
We present Neural Graphics Pipeline (NGP), a hybrid generative model that brings together neural and traditional image formation models.
NGP decomposes the image into a set of interpretable appearance feature maps, uncovering direct control handles for controllable image generation.
We demonstrate the effectiveness of our approach on controllable image generation of single-object scenes.
arXiv Detail & Related papers (2020-06-18T14:22:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.