Plug-and-Play Priors as a Score-Based Method
- URL: http://arxiv.org/abs/2412.11108v1
- Date: Sun, 15 Dec 2024 08:10:39 GMT
- Title: Plug-and-Play Priors as a Score-Based Method
- Authors: Chicago Y. Park, Yuyang Hu, Michael T. McCann, Cristina Garcia-Cardona, Brendt Wohlberg, Ulugbek S. Kamilov
- Abstract summary: Plug-and-play (PnP) methods are extensively used for solving imaging inverse problems by integrating physical measurement models with pre-trained deep denoisers as priors.
Score-based diffusion models (SBMs) have recently emerged as a powerful framework for image generation by training deep denoisers to represent the score of the image prior.
- Score: 10.533522753705599
- License:
- Abstract: Plug-and-play (PnP) methods are extensively used for solving imaging inverse problems by integrating physical measurement models with pre-trained deep denoisers as priors. Score-based diffusion models (SBMs) have recently emerged as a powerful framework for image generation by training deep denoisers to represent the score of the image prior. While both PnP and SBMs use deep denoisers, the score-based nature of PnP is unexplored in the literature due to its distinct origins rooted in proximal optimization. This letter introduces a novel view of PnP as a score-based method, a perspective that enables the re-use of powerful SBMs within classical PnP algorithms without retraining. We present a set of mathematical relationships for adapting popular SBMs as priors within PnP. We show that this approach enables a direct comparison between PnP and SBM-based reconstruction methods using the same neural network as the prior. Code is available at https://github.com/wustl-cig/score_pnp.
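The abstract's core idea, that a pre-trained denoiser can be reinterpreted as the score of the image prior, is often grounded in Tweedie's formula: for a denoiser D trained at noise level sigma, grad log p_sigma(x) ≈ (D(x) - x) / sigma^2. The sketch below illustrates this relationship inside a generic PnP proximal-gradient loop. It is a minimal illustration, not the paper's actual algorithm: the `denoiser` here is a hypothetical linear placeholder standing in for a pre-trained deep denoiser, and `A`/`At` are placeholders for a linear measurement operator and its adjoint.

```python
import numpy as np

def denoiser(x, sigma):
    # Placeholder for a pre-trained deep denoiser (illustration only).
    # This simple shrinkage is the MMSE denoiser for a Gaussian prior.
    return x / (1.0 + sigma**2)

def score_from_denoiser(x, sigma):
    # Tweedie's formula: grad log p_sigma(x) ~= (D(x) - x) / sigma^2,
    # the relationship that lets a denoiser act as a score function.
    return (denoiser(x, sigma) - x) / sigma**2

def pnp_pgm(y, A, At, sigma=0.1, step=1.0, iters=50):
    # Generic PnP proximal-gradient iteration: a data-fidelity gradient
    # step on 0.5 * ||Ax - y||^2, followed by a denoising step that can
    # be read as an implicit score-based prior step.
    x = At(y)
    for _ in range(iters):
        grad = At(A(x) - y)              # gradient of the data term
        x = denoiser(x - step * grad, sigma)
    return x
```

With the identity operator and this toy denoiser, the iteration converges to y / (1 + sigma^2); swapping in an actual SBM denoiser (as the paper proposes) requires matching the denoiser's training noise level to sigma.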
Related papers
- PnP-Flow: Plug-and-Play Image Restoration with Flow Matching [2.749898166276854]
We introduce Plug-and-Play Flow Matching, an algorithm for solving inverse imaging problems.
We evaluate its performance on denoising, superresolution, and inpainting tasks.
arXiv Detail & Related papers (2024-10-03T12:13:56Z) - Prior Mismatch and Adaptation in PnP-ADMM with a Nonconvex Convergence
Analysis [20.63188897629508]
Plug-and-Play priors is a widely-used family of methods for solving inverse imaging problems.
Deep methods have been shown to achieve state-of-the-art performance when the prior is obtained using powerful denoisers.
arXiv Detail & Related papers (2023-09-29T20:49:00Z) - Generative Plug and Play: Posterior Sampling for Inverse Problems [4.417934991211913]
Plug-and-Play (PnP) has become a popular method for reconstructing images using a framework consisting of a forward model and a prior model.
We present experimental simulations using the well-known BM3D denoiser.
arXiv Detail & Related papers (2023-06-12T16:49:08Z) - On Maximum-a-Posteriori estimation with Plug & Play priors and
stochastic gradient descent [13.168923974530307]
Methods to solve imaging problems usually combine an explicit data likelihood function with a prior that explicitly models expected properties of the solution.
In a departure from explicit modelling, several recent works have proposed and studied the use of implicit priors defined by an image denoising algorithm.
arXiv Detail & Related papers (2022-01-16T20:50:08Z) - COPS: Controlled Pruning Before Training Starts [68.8204255655161]
State-of-the-art deep neural network (DNN) pruning techniques, applied one-shot before training starts, evaluate sparse architectures with the help of a single criterion -- called pruning score.
In this work we do not concentrate on a single pruning criterion, but provide a framework for combining arbitrary GSSs to create more powerful pruning strategies.
arXiv Detail & Related papers (2021-07-27T08:48:01Z) - Recovery Analysis for Plug-and-Play Priors using the Restricted
Eigenvalue Condition [48.08511796234349]
We show how to establish theoretical recovery guarantees for the plug-and-play priors (PnP) and regularization by denoising (RED) methods.
Our results suggest that models with a pre-trained artifact removal network provide significantly better results compared to existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-07T14:45:38Z) - Deep Shells: Unsupervised Shape Correspondence with Optimal Transport [52.646396621449]
We propose a novel unsupervised learning approach to 3D shape correspondence.
We show that the proposed method significantly improves over the state-of-the-art on multiple datasets.
arXiv Detail & Related papers (2020-10-28T22:24:07Z) - ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single shot network pruning methods and Lottery-Ticket type approaches.
arXiv Detail & Related papers (2020-06-28T23:09:27Z) - Scalable Plug-and-Play ADMM with Convergence Guarantees [24.957046830965822]
We propose an incremental variant of the widely used ADMM algorithm, making it scalable to large-scale datasets.
We theoretically analyze the convergence of the algorithm under a set of explicit assumptions.
arXiv Detail & Related papers (2020-06-05T04:10:15Z) - The Power of Triply Complementary Priors for Image Compressive Sensing [89.14144796591685]
We propose a joint low-rank and deep (LRD) image model, which contains a pair of triply complementary priors.
We then propose a novel hybrid plug-and-play framework based on the LRD model for image CS.
To make the optimization tractable, a simple yet effective algorithm is proposed to solve the proposed hybrid PnP-based image CS problem.
arXiv Detail & Related papers (2020-05-16T08:17:44Z) - MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient descent combined with nonconvexity renders learning susceptible to novel problems.
We propose fusing neighboring layers of deeper networks that are trained with random variables.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.