Lifting the Convex Conjugate in Lagrangian Relaxations: A Tractable
Approach for Continuous Markov Random Fields
- URL: http://arxiv.org/abs/2107.06028v1
- Date: Tue, 13 Jul 2021 12:31:06 GMT
- Title: Lifting the Convex Conjugate in Lagrangian Relaxations: A Tractable
Approach for Continuous Markov Random Fields
- Authors: Hartmut Bauermeister and Emanuel Laude and Thomas Möllenhoff and
Michael Moeller and Daniel Cremers
- Abstract summary: We show that a piecewise polynomial discretization better preserves the continuous nature of the problem than existing discretizations, which suffer from a grid bias.
We apply this theory to the stereo matching problem between two images.
- Score: 53.31927549039624
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dual decomposition approaches in nonconvex optimization may suffer from a
duality gap. This poses a challenge when applying them directly to nonconvex
problems such as MAP-inference in a Markov random field (MRF) with continuous
state spaces. To eliminate such gaps, this paper considers a reformulation of
the original nonconvex task in the space of measures. This infinite-dimensional
reformulation is then approximated by a semi-infinite one, which is obtained
via a piecewise polynomial discretization in the dual. We provide a geometric
intuition behind the primal problem induced by the dual discretization and draw
connections to optimization over moment spaces. In contrast to existing
discretizations which suffer from a grid bias, we show that a piecewise
polynomial discretization better preserves the continuous nature of our
problem. Invoking results from optimal transport theory and convex algebraic
geometry we reduce the semi-infinite program to a finite one and provide a
practical implementation based on semidefinite programming. We show,
experimentally and in theory, that the approach successfully reduces the
duality gap. To showcase the scalability of our approach, we apply it to the
stereo matching problem between two images.
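As an illustration of the final reduction step described above (a semi-infinite program turned into a finite semidefinite program via moment machinery), the following sketch solves a toy instance: the global minimization of a single nonconvex univariate polynomial over an interval by a Lasserre-style moment relaxation. This is only a minimal sketch, not the paper's implementation; cvxpy is an assumed dependency, and the polynomial, interval, and relaxation order are made up for the example.

```python
import cvxpy as cp

# Toy problem: minimize f(x) = x^4 - 3x^2 + x over x in [-2, 2].
# Moment variables m[k] stand for the k-th moment of an optimizing measure.
m = cp.Variable(5)

# Moment matrix: PSD whenever the moments come from a nonnegative measure.
M = cp.bmat([[m[0], m[1], m[2]],
             [m[1], m[2], m[3]],
             [m[2], m[3], m[4]]])

# Localizing matrix for the interval constraint g(x) = 4 - x^2 >= 0.
L = cp.bmat([[4 * m[0] - m[2], 4 * m[1] - m[3]],
             [4 * m[1] - m[3], 4 * m[2] - m[4]]])

objective = cp.Minimize(m[4] - 3 * m[2] + m[1])   # E[f(x)] written in moments
constraints = [m[0] == 1, M >> 0, L >> 0]         # probability mass 1, PSD cones
prob = cp.Problem(objective, constraints)
prob.solve()
print("lower bound on the global minimum:", prob.value)
```

For univariate polynomials this relaxation is exact at a sufficiently high order, so the printed bound coincides with the true minimum; the appeal of the moment viewpoint is that no grid over x has to be chosen.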
Related papers
- Double Duality: Variational Primal-Dual Policy Optimization for
Constrained Reinforcement Learning [132.7040981721302]
We study the Constrained Convex Markov Decision Process (MDP), where the goal is to minimize a convex functional of the visitation measure.
Designing algorithms for a constrained convex MDP faces several challenges, including handling the large state space.
arXiv Detail & Related papers (2024-02-16T16:35:18Z)
- Extragradient Type Methods for Riemannian Variational Inequality Problems [25.574847201669144]
We show that the average-iterate convergence rate of both REG and RPEG is $O\left(\frac{1}{T}\right)$, aligning with observations in the Euclidean case.
These results are enabled by judiciously addressing the holonomy effect so that additional observations can be reduced.
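For reference, the Euclidean extragradient update that REG and RPEG generalize to Riemannian manifolds can be sketched in a few lines (no exponential maps, parallel transport, or holonomy corrections here); the bilinear toy operator, step size, and iteration count are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def F(z):
    # Monotone vector field of the toy saddle problem min_x max_y x*y.
    x, y = z
    return np.array([y, -x])

z = np.array([1.0, 1.0])
step = 0.3
for _ in range(200):
    z_half = z - step * F(z)     # extrapolation (look-ahead) step
    z = z - step * F(z_half)     # update with the look-ahead field
print(z)  # approaches the solution (0, 0); plain gradient steps would diverge
```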
arXiv Detail & Related papers (2023-09-25T14:08:02Z)
- Curvature-Independent Last-Iterate Convergence for Games on Riemannian
Manifolds [77.4346324549323]
We show that a step size agnostic to the curvature of the manifold achieves a curvature-independent and linear last-iterate convergence rate.
To the best of our knowledge, the possibility of curvature-independent rates and/or last-iterate convergence has not been considered before.
arXiv Detail & Related papers (2023-06-29T01:20:44Z)
- First Order Methods with Markovian Noise: from Acceleration to Variational Inequalities [91.46841922915418]
We present a unified approach for the theoretical analysis of first-order methods with Markovian noise.
Our approach covers both non-convex and strongly convex minimization problems.
We provide bounds that match the oracle complexity in the case of strongly convex optimization problems.
arXiv Detail & Related papers (2023-05-25T11:11:31Z)
- Faster Algorithm and Sharper Analysis for Constrained Markov Decision
Process [56.55075925645864]
The problem of the constrained Markov decision process (CMDP) is investigated, where an agent aims to maximize the expected accumulated discounted reward subject to multiple constraints.
A new primal-dual convex approach is proposed with a novel integration of three ingredients: a regularized policy optimizer, a dual regularizer, and Nesterov's accelerated gradient descent in the dual.
This is the first demonstration that nonconcave CMDP problems can attain the lower bound of $\mathcal{O}(1/\epsilon)$ for convex optimization subject to convex constraints.
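To make the listed ingredients concrete, here is a hedged toy sketch of the general pattern (an entropy-regularized primal step with a closed-form softmax solution, plus a projected gradient step on the Lagrange multiplier), run on a one-step constrained bandit rather than a CMDP. It omits Nesterov acceleration and the dual regularizer and is not the paper's algorithm; all numbers and names are invented for the example.

```python
import numpy as np

r = np.array([1.0, 0.2, 0.5])     # per-action rewards (made up)
c = np.array([1.0, 0.1, 0.4])     # per-action costs (made up)
budget, tau, eta = 0.3, 0.1, 0.2  # cost budget, entropy weight, dual step size

lam = 0.0                         # Lagrange multiplier for the cost constraint
for _ in range(1000):
    # Entropy-regularized primal step: the maximizing policy is a softmax.
    logits = (r - lam * c) / tau
    p = np.exp(logits - logits.max())
    p /= p.sum()
    # Projected dual gradient step: raise lam while the policy is over budget.
    lam = max(0.0, lam + eta * (c @ p - budget))

print("policy:", p.round(3), "expected cost:", float(c @ p))
```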
arXiv Detail & Related papers (2021-10-20T02:57:21Z)
- Nonlinear matrix recovery using optimization on the Grassmann manifold [18.655422834567577]
We investigate the problem of recovering a partially observed high-rank matrix whose columns obey a nonlinear structure such as a union of subspaces or points grouped in clusters.
We show that the alternating minimization scheme converges to a unique limit point using the Kurdyka-Łojasiewicz property.
arXiv Detail & Related papers (2021-09-13T16:13:13Z)
- A Stochastic Composite Augmented Lagrangian Method For Reinforcement
Learning [9.204659134755795]
We consider the linear programming (LP) formulation for deep reinforcement learning.
The augmented Lagrangian method suffers from the double-sampling obstacle when solving the LP.
A deep parameterized augmented Lagrangian method is proposed.
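For orientation, the classical augmented Lagrangian iteration the summary refers to can be sketched on a tiny equality-constrained quadratic; this generic numpy illustration is not the paper's deep parameterized method, and it avoids the double-sampling obstacle entirely by using exact gradients.

```python
import numpy as np

# Toy problem: minimize 0.5 * ||x - x0||^2 subject to A x = b.
x0 = np.array([3.0, -1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

x, lam, rho = np.zeros(2), np.zeros(1), 1.0
for _ in range(100):
    # Primal phase: gradient steps on the augmented Lagrangian in x.
    for _ in range(20):
        grad = (x - x0) + A.T @ lam + rho * A.T @ (A @ x - b)
        x -= 0.1 * grad
    # Dual phase: ascend the multiplier along the constraint residual.
    lam += rho * (A @ x - b)

print("x:", x, "constraint residual:", A @ x - b)  # x tends to (2.5, -1.5)
```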
arXiv Detail & Related papers (2021-05-20T13:08:06Z)
- Non-Convex Exact Community Recovery in Stochastic Block Model [31.221745716673546]
Community detection in graphs that are generated according to symmetric block models (SBMs) has received much attention lately.
We show that in the logarithmic sparsity regime of the problem, with high probability the proposed two-stage method can exactly recover the two communities down to the information-theoretic limit in $\mathcal{O}(n\log^2 n/\log\log n)$ time.
We also conduct numerical experiments on both synthetic and real data sets to demonstrate the efficacy of our proposed method and complement our theoretical development.
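As a small, hedged illustration of the spectral initialization that two-stage community recovery methods commonly start from, the sketch below samples a two-community SBM and clusters vertices by the sign of the eigenvector for the second-largest adjacency eigenvalue; the refinement stage and the exact-recovery analysis are not reproduced, and all parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p_in, p_out = 300, 0.15, 0.03
labels = np.repeat([1, -1], n // 2)        # ground-truth communities

# Sample a symmetric SBM adjacency matrix (no self-loops).
probs = np.where(np.equal.outer(labels, labels), p_in, p_out)
A = np.triu(rng.random((n, n)) < probs, 1).astype(float)
A = A + A.T

# Spectral estimate: sign pattern of the second-largest eigenvalue's eigenvector.
eigvals, eigvecs = np.linalg.eigh(A)
guess = np.sign(eigvecs[:, -2])
accuracy = max(np.mean(guess == labels), np.mean(guess == -labels))
print("fraction of vertices clustered correctly:", accuracy)
```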
arXiv Detail & Related papers (2020-06-29T07:03:27Z)
- Cogradient Descent for Bilinear Optimization [124.45816011848096]
We introduce a Cogradient Descent algorithm (CoGD) to address the bilinear problem.
We solve one variable by considering its coupling relationship with the other, leading to a synchronous gradient descent.
Our algorithm is applied to solve problems with one variable under the sparsity constraint.
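For context, the decoupled baseline that CoGD is designed to improve on (independent gradient/proximal steps on each factor of a bilinear, sparsity-regularized objective) can be sketched as follows; this is a plain alternating scheme under assumed toy dimensions, not CoGD itself and not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((30, 40))          # data to be factored (made up)
A = rng.standard_normal((30, 5))           # dense factor
B = rng.standard_normal((5, 40))           # factor kept sparse via an l1 term
lam, step = 0.1, 1e-3

def soft_threshold(M, t):
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

# Objective: 0.5 * ||X - A B||_F^2 + lam * ||B||_1
for _ in range(500):
    R = A @ B - X
    A -= step * (R @ B.T)                                  # gradient step in A
    R = A @ B - X
    B = soft_threshold(B - step * (A.T @ R), step * lam)   # proximal step in B

loss = 0.5 * np.linalg.norm(X - A @ B) ** 2 + lam * np.abs(B).sum()
print("objective:", round(float(loss), 3), "nonzeros in B:", int((B != 0).sum()))
```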
arXiv Detail & Related papers (2020-06-16T13:41:54Z)