Taxonomy of Dual Block-Coordinate Ascent Methods for Discrete Energy
Minimization
- URL: http://arxiv.org/abs/2004.07715v1
- Date: Thu, 16 Apr 2020 15:49:13 GMT
- Title: Taxonomy of Dual Block-Coordinate Ascent Methods for Discrete Energy
Minimization
- Authors: Siddharth Tourani, Alexander Shekhovtsov, Carsten Rother, Bogdan
Savchynskyy
- Abstract summary: We consider the maximum-a-posteriori inference problem in discrete graphical models and study solvers based on the dual block-coordinate ascent rule.
We map all existing solvers in a single framework, allowing for a better understanding of their design principles.
- Score: 96.1052289276254
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the maximum-a-posteriori inference problem in discrete graphical
models and study solvers based on the dual block-coordinate ascent rule. We map
all existing solvers in a single framework, allowing for a better understanding
of their design principles. We theoretically show that some block-optimizing
updates are sub-optimal and how to strictly improve them. On a wide range of
problem instances of varying graph connectivity, we study the performance of
existing solvers as well as new variants that can be obtained within the
framework. As a result of this exploration we build a new state-of-the-art
solver, performing uniformly better on the whole range of test instances.
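To make the dual block-coordinate ascent rule concrete, the following is a minimal sketch of a single MPLP-style edge update on a toy pairwise model. The factor sizes, random potentials, and helper names (`dual_bound`, `mplp_edge_update`) are illustrative assumptions, not the paper's implementation; the sketch only demonstrates the defining property of such updates, namely that each block step cannot decrease the dual lower bound.

```python
import numpy as np

def dual_bound(theta_u, theta_v, theta_uv):
    # Lower bound on min_x E(x): sum of the independent minima of each factor.
    return theta_u.min() + theta_v.min() + theta_uv.min()

def mplp_edge_update(theta_u, theta_v, theta_uv):
    # Block-coordinate ascent step on edge (u, v): absorb both unary
    # potentials into the pairwise factor, then split its min-marginals
    # back evenly onto the unaries (an MPLP-style reparametrization).
    gamma = theta_uv + theta_u[:, None] + theta_v[None, :]
    new_u = 0.5 * gamma.min(axis=1)
    new_v = 0.5 * gamma.min(axis=0)
    new_uv = gamma - new_u[:, None] - new_v[None, :]  # entries are >= 0
    return new_u, new_v, new_uv

rng = np.random.default_rng(0)
tu, tv, tuv = rng.normal(size=3), rng.normal(size=3), rng.normal(size=(3, 3))
before = dual_bound(tu, tv, tuv)
tu, tv, tuv = mplp_edge_update(tu, tv, tuv)
after = dual_bound(tu, tv, tuv)
assert after >= before - 1e-9  # monotone dual ascent on this block
```

The solvers surveyed in the paper differ in exactly how such a block update redistributes the min-marginals; the even 0.5/0.5 split above is one choice, and the paper's point is that some such block-optimizing choices are strictly better than others.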
Related papers
- Neural Multi-Objective Combinatorial Optimization with Diversity
Enhancement [33.28756549307025]
We propose a novel neural heuristic with diversity enhancement (NHDE) for multi-objective combinatorial optimization (MOCO) problems.
We show that our NHDE is able to generate a Pareto front with higher diversity, thereby achieving superior overall performance.
Our NHDE is generic and can be applied to different neural methods for MOCO.
arXiv Detail & Related papers (2023-10-22T08:50:57Z) - Best-Effort Adaptation [62.00856290846247]
We present a new theoretical analysis of sample reweighting methods, including bounds holding uniformly over the weights.
We show how these bounds can guide the design of learning algorithms that we discuss in detail.
We report the results of a series of experiments demonstrating the effectiveness of our best-effort adaptation and domain adaptation algorithms.
arXiv Detail & Related papers (2023-05-10T00:09:07Z) - MultiZenoTravel: a Tunable Benchmark for Multi-Objective Planning with
Known Pareto Front [71.19090689055054]
Multi-objective AI planning suffers from a lack of benchmarks exhibiting known Pareto Fronts.
We propose a tunable benchmark generator, together with a dedicated solver that provably computes the true Pareto front of the resulting instances.
We show how to characterize the optimal plans for a constrained version of the problem, and then show how to reduce the general problem to the constrained one.
arXiv Detail & Related papers (2023-04-28T07:09:23Z) - Efficient Alternating Minimization Solvers for Wyner Multi-View
Unsupervised Learning [0.0]
We propose two novel formulations that enable the development of computationally efficient solvers based on the alternating minimization principle.
The proposed solvers offer computational efficiency, theoretical convergence guarantees, complexity that scales with the number of views, and exceptional accuracy compared with state-of-the-art techniques.
arXiv Detail & Related papers (2023-03-28T10:17:51Z) - On Constrained Optimization in Differentiable Neural Architecture Search [3.0682439731292592]
Differentiable Architecture Search (DARTS) is a recently proposed neural architecture search (NAS) method based on a differentiable relaxation.
We propose and analyze three improvements to architectural weight competition, update scheduling, and regularization towards discretization.
arXiv Detail & Related papers (2021-06-22T10:22:31Z) - A Bi-Level Framework for Learning to Solve Combinatorial Optimization on
Graphs [91.07247251502564]
We propose a hybrid approach to combine the best of the two worlds, in which a bi-level framework is developed with an upper-level learning method to optimize the graph.
Such a bi-level approach simplifies the learning on the original hard CO and can effectively mitigate the demand for model capacity.
arXiv Detail & Related papers (2021-06-09T09:18:18Z) - Reversible Action Design for Combinatorial Optimization with
Reinforcement Learning [35.50454156611722]
Reinforcement learning (RL) has recently emerged as a new framework to tackle these problems.
We propose a general RL framework that not only exhibits state-of-the-art empirical performance but also generalizes to a wide variety of COPs.
arXiv Detail & Related papers (2021-02-14T18:05:42Z) - Optimization-Inspired Learning with Architecture Augmentations and
Control Mechanisms for Low-Level Vision [74.9260745577362]
This paper proposes a unified optimization-inspired learning framework to aggregate Generative, Discriminative, and Corrective (GDC) principles.
We construct three propagative modules to effectively solve the optimization models with flexible combinations.
Experiments across varied low-level vision tasks validate the efficacy and adaptability of GDC.
arXiv Detail & Related papers (2020-12-10T03:24:53Z) - MPLP++: Fast, Parallel Dual Block-Coordinate Ascent for Dense Graphical
Models [96.1052289276254]
This work introduces a new MAP-solver, based on the popular Dual Block-Coordinate Ascent principle.
Surprisingly, by making a small change to the low-performing solver, we derive the new solver MPLP++ that significantly outperforms all existing solvers by a large margin.
arXiv Detail & Related papers (2020-04-16T16:20:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.