Prior Mismatch and Adaptation in PnP-ADMM with a Nonconvex Convergence
Analysis
- URL: http://arxiv.org/abs/2310.00133v1
- Date: Fri, 29 Sep 2023 20:49:00 GMT
- Title: Prior Mismatch and Adaptation in PnP-ADMM with a Nonconvex Convergence
Analysis
- Authors: Shirin Shoushtari, Jiaming Liu, Edward P. Chandler, M. Salman Asif,
Ulugbek S. Kamilov
- Abstract summary: Plug-and-Play priors is a widely-used family of methods for solving inverse imaging problems.
Deep methods have been shown to achieve state-of-the-art performance when the prior is obtained using powerful denoisers.
- Score: 20.63188897629508
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Plug-and-Play (PnP) priors is a widely-used family of methods for solving
imaging inverse problems by integrating physical measurement models with image
priors specified using image denoisers. PnP methods have been shown to achieve
state-of-the-art performance when the prior is obtained using powerful deep
denoisers. Despite extensive work on PnP, the topic of distribution mismatch
between the training and testing data has often been overlooked in the PnP
literature. This paper presents a set of new theoretical and numerical results
on the topic of prior distribution mismatch and domain adaptation for
alternating direction method of multipliers (ADMM) variant of PnP. Our
theoretical result provides an explicit error bound for PnP-ADMM due to the
mismatch between the desired denoiser and the one used for inference. Our
analysis contributes to the work in the area by considering the mismatch under
nonconvex data-fidelity terms and expansive denoisers. Our first set of
numerical results quantifies the impact of the prior distribution mismatch on
the performance of PnP-ADMM on the problem of image super-resolution. Our
second set of numerical results considers a simple and effective domain
adaption strategy that closes the performance gap due to the use of mismatched
denoisers. Our results suggest the relative robustness of PnP-ADMM to prior
distribution mismatch, while also showing that the performance gap can be
significantly reduced with few training samples from the desired distribution.
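The abstract describes the PnP-ADMM scheme at a high level: the prior's proximal operator is replaced by an off-the-shelf denoiser inside the ADMM splitting. A minimal sketch of that iteration is below; it is not the authors' implementation. The data-fidelity term is taken to be a simple denoising likelihood g(x) = ||x - y||^2 / (2*sigma2) so the x-step has a closed form, and a 1-D moving-average filter stands in for the deep denoiser used in the paper.

```python
import numpy as np

def box_denoiser(v, k=5):
    # Stand-in prior: 1-D moving average. In PnP methods this would be
    # a learned deep denoiser trained on the target image distribution.
    kernel = np.ones(k) / k
    return np.convolve(v, kernel, mode="same")

def pnp_admm(y, denoiser, gamma=1.0, sigma2=1.0, iters=50):
    """Minimal PnP-ADMM sketch for g(x) = ||x - y||^2 / (2 * sigma2).

    x-step: closed-form proximal map of the data-fidelity term.
    z-step: the denoiser replaces the proximal map of the prior.
    u-step: scaled dual (Lagrange multiplier) update.
    """
    x = y.copy()
    z = y.copy()
    u = np.zeros_like(y)
    for _ in range(iters):
        v = z - u
        x = (sigma2 * v + gamma * y) / (sigma2 + gamma)  # prox of data term
        z = denoiser(x + u)                              # plug-in prior step
        u = u + x - z                                    # dual update
    return x

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 200))
y = clean + 0.3 * rng.standard_normal(200)
x_hat = pnp_admm(y, box_denoiser)
```

Swapping `box_denoiser` for a denoiser trained on a different distribution than the test images is exactly the prior-mismatch setting the paper analyzes.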
Related papers
- Convergence of Nonconvex PnP-ADMM with MMSE Denoisers [8.034511587847158]
Plug-and-Play Alternating Direction Method of Multipliers (ADMM) is widely used for solving inverse problems with physical measurement models.
It has, however, been observed that PnP-ADMM often empirically converges even for expansive CNN denoisers.
arXiv Detail & Related papers (2023-11-30T18:52:47Z) - Direct Unsupervised Denoising [60.71146161035649]
Unsupervised denoisers do not directly produce a single prediction, such as the MMSE estimate.
We present an alternative approach that trains a deterministic network alongside the VAE to directly predict a central tendency.
arXiv Detail & Related papers (2023-10-27T13:02:12Z) - Plug-and-Play Posterior Sampling under Mismatched Measurement and Prior Models [15.128123938848882]
Posterior sampling has been shown to be a powerful Bayesian approach for solving inverse problems.
The recent plug-and-play unadjusted Langevin algorithm (PnP-ULA) has emerged as a promising method for Monte Carlo sampling.
arXiv Detail & Related papers (2023-10-05T13:57:53Z) - On the Contractivity of Plug-and-Play Operators [11.218821754886514]
In plug-and-play regularization, the proximal operator in algorithms such as ISTA and ADMM is replaced by a powerful denoiser.
This formal substitution works surprisingly well in practice and has been shown to give state-of-the-art results for various imaging applications.
arXiv Detail & Related papers (2023-09-28T23:58:02Z) - Optimality Guarantees for Particle Belief Approximation of POMDPs [55.83001584645448]
Partially observable Markov decision processes (POMDPs) provide a flexible representation for real-world decision and control problems.
POMDPs are notoriously difficult to solve, especially when the state and observation spaces are continuous or hybrid.
We propose a theory characterizing the approximation error of the particle filtering techniques that these algorithms use.
arXiv Detail & Related papers (2022-10-10T21:11:55Z) - On Maximum-a-Posteriori estimation with Plug & Play priors and
stochastic gradient descent [13.168923974530307]
Methods to solve imaging problems usually combine an explicit data likelihood function with a prior that explicitly models expected properties of the solution.
In a departure from explicit modelling, several recent works have proposed and studied the use of implicit priors defined by an image denoising algorithm.
arXiv Detail & Related papers (2022-01-16T20:50:08Z) - Recovery Analysis for Plug-and-Play Priors using the Restricted
Eigenvalue Condition [48.08511796234349]
We show how to establish theoretical recovery guarantees for the plug-and-play priors (PnP) and regularization by denoising (RED) methods.
Our results suggest that models with a pre-trained artifact removal network provide significantly better results compared to existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-07T14:45:38Z) - Sampling-free Variational Inference for Neural Networks with
Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z) - A Contrastive Learning Approach for Training Variational Autoencoder
Priors [137.62674958536712]
Variational autoencoders (VAEs) are one of the powerful likelihood-based generative models with applications in many domains.
One explanation for VAEs' poor generative quality is the prior hole problem: the prior distribution fails to match the aggregate approximate posterior.
We propose an energy-based prior defined by the product of a base prior distribution and a reweighting factor, designed to bring the base closer to the aggregate posterior.
arXiv Detail & Related papers (2020-10-06T17:59:02Z) - Scalable Plug-and-Play ADMM with Convergence Guarantees [24.957046830965822]
We propose an incremental variant of the widely used PnP-ADMM algorithm, making it scalable to large-scale datasets.
We theoretically analyze the convergence of the algorithm under a set of explicit assumptions.
arXiv Detail & Related papers (2020-06-05T04:10:15Z) - Generalized ODIN: Detecting Out-of-distribution Image without Learning
from Out-of-distribution Data [87.61504710345528]
We propose two strategies for freeing a neural network from tuning with OoD data, while improving its OoD detection performance.
We specifically propose to decompose confidence scoring as well as a modified input pre-processing method.
Our further analysis on a larger scale image dataset shows that the two types of distribution shifts, specifically semantic shift and non-semantic shift, present a significant difference.
arXiv Detail & Related papers (2020-02-26T04:18:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.