Connecting Hamilton--Jacobi partial differential equations with maximum
a posteriori and posterior mean estimators for some non-convex priors
- URL: http://arxiv.org/abs/2104.11285v1
- Date: Thu, 22 Apr 2021 19:00:37 GMT
- Title: Connecting Hamilton--Jacobi partial differential equations with maximum
a posteriori and posterior mean estimators for some non-convex priors
- Authors: Jérôme Darbon, Gabriel P. Langlois, and Tingwei Meng
- Abstract summary: In this chapter, we consider a certain class of non-convex regularizations and show that similar representation formulas for the minimizer can also be obtained.
We also present similar results for certain Bayesian posterior mean estimators with Gaussian data fidelity and certain non-log-concave priors using an analogue of min-plus algebra techniques.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many imaging problems can be formulated as inverse problems expressed as
finite-dimensional optimization problems. These optimization problems generally
consist of minimizing the sum of a data fidelity term and regularization terms. In
[23,26], connections between these optimization problems and (multi-time)
Hamilton--Jacobi partial differential equations have been proposed under the
convexity assumptions of both the data fidelity and regularization terms. In
particular, under these convexity assumptions, some representation formulas for
a minimizer can be obtained. From a Bayesian perspective, such a minimizer can
be seen as a maximum a posteriori estimator. In this chapter, we consider a
certain class of non-convex regularizations and show that similar
representation formulas for the minimizer can also be obtained. This is
achieved by leveraging min-plus algebra techniques that have been originally
developed for solving certain Hamilton--Jacobi partial differential equations
arising in optimal control. Note that connections between viscous
Hamilton--Jacobi partial differential equations and Bayesian posterior mean
estimators with Gaussian data fidelity terms and log-concave priors have been
highlighted in [25]. We also present similar results for certain Bayesian
posterior mean estimators with Gaussian data fidelity and certain
non-log-concave priors using an analogue of min-plus algebra techniques.
Related papers
- Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum [56.37522020675243]
We provide the first proof of convergence for normalized error feedback algorithms across a wide range of machine learning problems.
We show that, owing to their larger allowable stepsizes, our new normalized error feedback algorithms outperform their non-normalized counterparts on various tasks.
arXiv Detail & Related papers (2024-10-22T10:19:27Z)
- Differentially Private Optimization with Sparse Gradients [60.853074897282625]
We study differentially private (DP) optimization problems under sparsity of individual gradients.
Building on this, we obtain pure- and approximate-DP algorithms with almost optimal rates for convex optimization with sparse gradients.
arXiv Detail & Related papers (2024-04-16T20:01:10Z)
- Comparison of Single- and Multi-Objective Optimization Quality for Evolutionary Equation Discovery [77.34726150561087]
Evolutionary differential equation discovery has proved to be a tool for obtaining equations with fewer a priori assumptions.
The proposed comparison approach is demonstrated on classical model examples: the Burgers equation, the wave equation, and the Korteweg-de Vries equation.
arXiv Detail & Related papers (2023-06-29T15:37:19Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z)
- Sum-of-Squares Relaxations for Information Theory and Variational Inference [0.0]
We consider extensions of the Shannon relative entropy, referred to as $f$-divergences.
We derive a sequence of convex relaxations for computing these divergences.
We provide more efficient relaxations based on spectral information divergences from quantum information theory.
arXiv Detail & Related papers (2022-06-27T13:22:40Z)
- A Unified View of Stochastic Hamiltonian Sampling [18.300078015845262]
This work revisits the theoretical properties of Hamiltonian stochastic differential equations (SDEs) for posterior sampling.
We study the two types of errors that arise from numerical SDE simulation: the discretization error and the error due to noisy gradient estimates.
arXiv Detail & Related papers (2021-06-30T16:50:11Z)
- Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
arXiv Detail & Related papers (2020-12-09T20:19:32Z)
- Ideal formulations for constrained convex optimization problems with indicator variables [2.578242050187029]
We study the convexification of a class of convex optimization problems with indicator variables and constraints on the indicators.
Unlike most previous work on the convexification of sparse regression problems, we simultaneously consider the nonlinear non-separable objective, indicator variables, and constraints.
We derive ideal convexifications for problems with hierarchy, multi-collinearity, and sparsity constraints.
arXiv Detail & Related papers (2020-06-30T21:07:10Z)
- Bayesian ODE Solvers: The Maximum A Posteriori Estimate [30.767328732475956]
It has been established that the numerical solution of ordinary differential equations can be posed as a nonlinear Bayesian inference problem.
The maximum a posteriori estimate corresponds to an optimal interpolant in the Hilbert space associated with the prior.
The methodology developed provides a novel and more natural approach to studying the convergence of these estimators.
arXiv Detail & Related papers (2020-04-01T11:39:59Z)
- Halpern Iteration for Near-Optimal and Parameter-Free Monotone Inclusion and Strong Solutions to Variational Inequalities [14.848525762485872]
We leverage the connections between nonexpansive maps, monotone Lipschitz operators, and proximal mappings to obtain near-optimal solutions to monotone inclusion problems.
These results translate into near-optimal guarantees for approximating strong solutions to variational inequality problems, approximating convex-concave min-max optimization problems, and minimizing the norm of the gradient in min-max optimization problems.
arXiv Detail & Related papers (2020-02-20T17:12:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.