Solving Inverse Problems by Joint Posterior Maximization with
Autoencoding Prior
- URL: http://arxiv.org/abs/2103.01648v1
- Date: Tue, 2 Mar 2021 11:18:34 GMT
- Title: Solving Inverse Problems by Joint Posterior Maximization with
Autoencoding Prior
- Authors: Mario González, Andrés Almansa, Pauline Tan
- Abstract summary: We address the problem of solving ill-posed inverse problems in imaging where the prior is a variational autoencoder (VAE).
We show that the proposed objective function satisfies a weak bi-convexity property, which is sufficient to guarantee that our optimization scheme converges to a stationary point.
Results also show that our approach provides more robust estimates than competing non-convex MAP approaches.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work we address the problem of solving ill-posed inverse problems in
imaging where the prior is a variational autoencoder (VAE). Specifically we
consider the decoupled case where the prior is trained once and can be reused
for many different log-concave degradation models without retraining. Whereas
previous MAP-based approaches to this problem lead to highly non-convex
optimization algorithms, our approach computes the joint (space-latent) MAP
that naturally leads to alternate optimization algorithms and to the use of a
stochastic encoder to accelerate computations. The resulting technique (JPMAP)
performs Joint Posterior Maximization using an Autoencoding Prior. We show
theoretical and experimental evidence that the proposed objective function is
quite close to bi-convex. Indeed it satisfies a weak bi-convexity property
which is sufficient to guarantee that our optimization scheme converges to a
stationary point. We also highlight the importance of correctly training the
VAE using a denoising criterion, in order to ensure that the encoder
generalizes well to out-of-distribution images, without affecting the quality
of the generative model. This simple modification is key to providing
robustness to the whole procedure. Finally we show how our joint MAP
methodology relates to more common MAP approaches, and we propose a
continuation scheme that makes use of our JPMAP algorithm to provide more
robust MAP estimates. Experimental results also show the higher quality of the
solutions obtained by our JPMAP approach with respect to other non-convex MAP
approaches which more often get stuck in spurious local optima.
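As a rough illustration of the alternating scheme described above, the following sketch performs joint posterior maximization with a linear toy decoder standing in for a trained VAE. All names and values (W, A, sigma_n, gamma) are illustrative assumptions; in particular, the closed-form z-step is only available because the toy decoder is linear, whereas JPMAP warm-starts this non-convex step with the (stochastic) encoder.
```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not the paper's trained networks):
n, m, k = 64, 32, 8                            # signal, measurement, latent dims
W = rng.standard_normal((n, k)) / np.sqrt(k)   # linear "decoder" D(z) = W @ z
A = rng.standard_normal((m, n)) / np.sqrt(n)   # linear degradation operator
sigma_n, gamma = 0.05, 0.1                     # observation / decoder noise levels

x_true = W @ rng.standard_normal(k)
y = A @ x_true + sigma_n * rng.standard_normal(m)

def x_step(z):
    # argmin_x ||Ax - y||^2/(2 sigma_n^2) + ||x - Wz||^2/(2 gamma^2): convex quadratic
    H = A.T @ A / sigma_n**2 + np.eye(n) / gamma**2
    return np.linalg.solve(H, A.T @ y / sigma_n**2 + W @ z / gamma**2)

def z_step(x):
    # argmin_z ||x - Wz||^2/(2 gamma^2) + ||z||^2/2: closed-form only because the
    # toy decoder is linear; with a VAE decoder this step is non-convex and JPMAP
    # warm-starts it with the encoder E(x) instead.
    H = W.T @ W / gamma**2 + np.eye(k)
    return np.linalg.solve(H, W.T @ x / gamma**2)

z = np.zeros(k)
for _ in range(50):          # alternating minimization of the joint objective
    x = x_step(z)
    z = z_step(x)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```
With a linear decoder both sub-problems are exactly solvable, which is the idealized bi-convex situation that the paper's weak bi-convexity result approximates.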
Related papers
- Flow Priors for Linear Inverse Problems via Iterative Corrupted Trajectory Matching [35.77769905072651]
We propose an iterative algorithm that efficiently approximates the MAP estimator for a variety of linear inverse problems.
Our algorithm is mathematically justified by the observation that the MAP objective can be approximated by a sum of $N$ "local MAP" objectives.
We validate our approach for various linear inverse problems, such as super-resolution, deblurring, inpainting, and compressed sensing.
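For background only, here is plain gradient descent on a MAP objective for an inpainting-style linear inverse problem, with a Gaussian prior standing in for the flow prior; the paper's ICTM algorithm, which approximates the log-prior via $N$ "local MAP" objectives along the flow trajectory, is not reproduced here.
```python
import numpy as np

rng = np.random.default_rng(0)

# Gradient descent on the MAP objective ||Ax - y||^2/(2 s^2) - log p(x) for an
# inpainting problem (A selects observed entries).  The Gaussian stand-in prior
# below is an assumption made only so the sketch is self-contained.
n = 32
mask = rng.random(n) < 0.5                       # observe roughly half the entries
s = 0.1
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
y = x_true[mask] + s * rng.standard_normal(mask.sum())

grad_neglogp = lambda x: x                       # -grad log N(0, I) = x (stand-in)
x = np.zeros(n)
for _ in range(3000):
    g = np.zeros(n)
    g[mask] = (x[mask] - y) / s**2               # data-fidelity gradient
    x -= 1e-3 * (g + grad_neglogp(x))            # MAP gradient step
print("error on observed entries:", np.linalg.norm(x[mask] - y))
```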
arXiv Detail & Related papers (2024-05-29T06:56:12Z) - Accelerating Cutting-Plane Algorithms via Reinforcement Learning
Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
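As a reminder of what a cutting-plane algorithm looks like, here is Kelley's classic method on a one-dimensional convex toy problem; the paper's setting (cut generation for large-scale discrete problems, with a reinforcement-learned surrogate choosing cuts) is beyond this sketch.
```python
import numpy as np
from scipy.optimize import linprog

# Kelley's cutting-plane method: minimize a convex f by repeatedly minimizing
# a piecewise-linear under-approximation built from tangent cuts.
f = lambda x: (x - 1.3) ** 2          # toy convex objective (assumption)
df = lambda x: 2 * (x - 1.3)          # its derivative / subgradient

lo, hi = -5.0, 5.0
cuts = []                              # each cut: t >= f(xk) + f'(xk) (x - xk)
x = 0.0
for _ in range(25):
    a = df(x)
    cuts.append((a, f(x) - a * x))     # store as t >= a x + b
    A_ub = [[a_k, -1.0] for a_k, b_k in cuts]
    b_ub = [-b_k for a_k, b_k in cuts]
    res = linprog(c=[0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(lo, hi), (None, None)])   # min t over the cut model
    x = res.x[0]
print("minimizer estimate:", round(x, 3))            # approaches 1.3
```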
arXiv Detail & Related papers (2023-07-17T20:11:56Z) - META-SMGO-$\Delta$: similarity as a prior in black-box optimization [1.282675419968047]
We propose to incorporate the META-learning rationale into SMGO-$\Delta$, a global optimization approach recently proposed in the literature.
We show the practical benefits of our META-extension of the baseline algorithm, while providing theoretical bounds on its performance.
arXiv Detail & Related papers (2023-04-30T09:41:04Z) - Outlier-Robust Sparse Estimation via Non-Convex Optimization [73.18654719887205]
We explore the connection between outlier-robust high-dimensional statistics and non-convex optimization in the presence of sparsity constraints.
We develop novel and simple optimization formulations for these problems.
As a corollary, we obtain that any first-order method that efficiently converges to a stationary point yields an efficient algorithm for these tasks.
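A minimal example of the kind of first-order method the corollary covers: iterative hard thresholding, i.e. projected gradient descent under a sparsity constraint. The robust formulations developed in the paper differ; this only shows the generic template.
```python
import numpy as np

rng = np.random.default_rng(0)

# Iterative hard thresholding for  min ||A w - y||^2  s.t.  ||w||_0 <= s:
# a gradient step on the loss followed by projection onto the sparse set.
n, d, s = 100, 50, 5
A = rng.standard_normal((n, d)) / np.sqrt(n)
w_true = np.zeros(d); w_true[:s] = 1.0
y = A @ w_true + 0.01 * rng.standard_normal(n)

w = np.zeros(d)
step = 0.5
for _ in range(200):
    w -= step * A.T @ (A @ w - y)              # gradient step on least squares
    keep = np.argsort(np.abs(w))[-s:]          # projection: keep s largest entries
    w_proj = np.zeros(d); w_proj[keep] = w[keep]; w = w_proj
print("support recovered:", sorted(keep) == list(range(s)))
```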
arXiv Detail & Related papers (2021-09-23T17:38:24Z) - COCO Denoiser: Using Co-Coercivity for Variance Reduction in Stochastic
Convex Optimization [4.970364068620608]
We exploit convexity and L-smoothness to improve the noisy gradient estimates output by the gradient oracle.
We show that increasing the number and proximity of the queried points leads to better gradient estimates.
We also apply COCO in vanilla settings by plugging it into existing algorithms, such as SGD, Adam or STRSAGA.
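A sketch of the co-coercivity idea for just two oracle queries (the paper's COCO denoiser handles K queries jointly, which requires solving a quadratic program): the pair of noisy gradients is projected onto the constraint that convexity and L-smoothness impose on the true gradients.
```python
import numpy as np

def coco2(x1, g1, x2, g2, L):
    """Project two noisy gradient estimates onto the co-coercivity constraint
    ||g1 - g2||^2 <= L * <g1 - g2, x1 - x2>, which convexity + L-smoothness
    imply for true gradients.  Only the difference d = g1 - g2 needs to move:
    its feasible set is a ball of center L(x1-x2)/2, radius L||x1-x2||/2."""
    d = g1 - g2
    c = 0.5 * L * (x1 - x2)
    r = np.linalg.norm(c)
    dist = np.linalg.norm(d - c)
    if dist <= r:                       # estimates already consistent
        return g1, g2
    d_new = c + r * (d - c) / dist      # Euclidean projection onto the ball
    shift = 0.5 * (d - d_new)           # split the correction symmetrically
    return g1 - shift, g2 + shift

# Toy check: exact gradients of f(x) = ||x||^2 / 2 (L = 1) pass through unchanged.
x1, x2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(coco2(x1, x1.copy(), x2, x2.copy(), L=1.0))
```
The denoised gradients can then replace the raw oracle output inside SGD, Adam, or similar optimizers.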
arXiv Detail & Related papers (2021-09-07T17:21:09Z) - Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box
Optimization Framework [100.36569795440889]
This work focuses on zeroth-order (ZO) optimization, which does not require first-order gradient information.
We show that with a graceful design in coordinate importance sampling, the proposed ZO optimization method is efficient both in terms of iteration complexity and function query cost.
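A minimal zeroth-order sketch: estimate the gradient from finite differences along a few sampled coordinates and feed it to gradient descent. The uniform sampling distribution below is a placeholder; the paper's contribution is precisely a more careful importance-sampling design.
```python
import numpy as np

rng = np.random.default_rng(0)

def zo_grad(f, x, mu=1e-4, q=4, p=None):
    """Zeroth-order gradient estimate from 2q function queries: central finite
    differences along q coordinates drawn from distribution p, reweighted so
    the estimate is unbiased for the true gradient."""
    d = len(x)
    p = np.full(d, 1.0 / d) if p is None else p
    g = np.zeros(d)
    for i in rng.choice(d, size=q, p=p):         # coordinate draws (with replacement)
        e = np.zeros(d); e[i] = mu
        g[i] += (f(x + e) - f(x - e)) / (2 * mu) / (q * p[i])
    return g

f = lambda x: np.sum((x - 1.0) ** 2)             # toy black-box objective (assumption)
x = np.zeros(10)
for _ in range(2000):
    x -= 0.05 * zo_grad(f, x)
print(np.round(x, 2))                            # approaches the all-ones minimizer
```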
arXiv Detail & Related papers (2020-12-21T17:29:58Z) - Efficient semidefinite-programming-based inference for binary and
multi-class MRFs [83.09715052229782]
We propose an efficient method for computing the partition function or MAP estimate in a pairwise MRF.
We extend semidefinite relaxations from the typical binary MRF to the full multi-class setting, and develop a compact semidefinite relaxation that can again be solved efficiently with the same solver.
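To ground the terminology, here is the standard SDP relaxation of binary MRF MAP inference, solved with a simple low-rank (Burer-Monteiro) projected gradient ascent plus hyperplane rounding; the paper's specialized solver and multi-class extension are more elaborate.
```python
import numpy as np

rng = np.random.default_rng(0)

# MAP in a binary pairwise MRF  <=>  max_x x^T W x,  x in {-1,+1}^n.
# SDP relaxation: max <W, X> with X PSD and diag(X) = 1.  Parametrize
# X = V V^T with unit rows and ascend, then round with random hyperplanes.
n, k = 12, 6
W = rng.standard_normal((n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)

V = rng.standard_normal((n, k))
V /= np.linalg.norm(V, axis=1, keepdims=True)
for _ in range(300):
    V += 0.05 * (W @ V)                               # ascent direction for <W, VV^T>
    V /= np.linalg.norm(V, axis=1, keepdims=True)     # project rows to the unit sphere

best = None
for _ in range(50):                                   # random-hyperplane rounding
    x = np.sign(V @ rng.standard_normal(k))
    val = x @ W @ x
    if best is None or val > best[0]:
        best = (val, x)
print("rounded MAP value:", best[0])
```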
arXiv Detail & Related papers (2020-12-04T15:36:29Z) - Adaptive Sampling for Best Policy Identification in Markov Decision
Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z) - Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient methods.
arXiv Detail & Related papers (2020-07-01T18:43:32Z)
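For context, the baseline being accelerated is message passing for MAP inference; on a chain the max-product (Viterbi) recursion below computes the exact MAP, whereas the paper works with entropy-regularized LP relaxations on general graphs.
```python
import numpy as np

rng = np.random.default_rng(0)

# Max-product message passing: exact MAP labeling of a chain MRF with
# toy log-potentials (the unary/pairwise values here are assumptions).
T, K = 6, 3                                  # chain length, label count
unary = rng.standard_normal((T, K))
pair = rng.standard_normal((K, K))

msg = np.zeros((T, K))                       # msg[t][j]: best score ending in label j
back = np.zeros((T, K), dtype=int)
for t in range(1, T):
    scores = msg[t-1][:, None] + unary[t-1][:, None] + pair   # shape (K, K)
    msg[t] = scores.max(axis=0)
    back[t] = scores.argmax(axis=0)

x = np.zeros(T, dtype=int)
x[-1] = int(np.argmax(msg[-1] + unary[-1]))
for t in range(T - 1, 0, -1):                # backtrack the stored argmaxes
    x[t-1] = back[t, x[t]]
print("MAP labeling:", x)
```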
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.