MPLP++: Fast, Parallel Dual Block-Coordinate Ascent for Dense Graphical Models
- URL: http://arxiv.org/abs/2004.08227v1
- Date: Thu, 16 Apr 2020 16:20:53 GMT
- Title: MPLP++: Fast, Parallel Dual Block-Coordinate Ascent for Dense Graphical Models
- Authors: Siddharth Tourani, Alexander Shekhovtsov, Carsten Rother, Bogdan Savchynskyy
- Abstract summary: This work introduces a new MAP-solver based on the popular Dual Block-Coordinate Ascent principle.
Surprisingly, a small change to the low-performing Max Product Linear Programming (MPLP) solver yields the new solver MPLP++, which outperforms all existing solvers by a large margin.
- Score: 96.1052289276254
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dense, discrete Graphical Models with pairwise potentials are a powerful
class of models which are employed in state-of-the-art computer vision and
bio-imaging applications. This work introduces a new MAP-solver, based on the
popular Dual Block-Coordinate Ascent principle. Surprisingly, by making a small
change to the low-performing solver, the Max Product Linear Programming (MPLP)
algorithm, we derive the new solver MPLP++, which outperforms all existing
solvers by a large margin, including the state-of-the-art Tree-Reweighted
Sequential (TRWS) message-passing algorithm. Additionally, our solver is highly
parallel, in contrast to TRWS; this yields a further boost in performance with
the proposed GPU and multi-thread CPU implementations. We
verify the superiority of our algorithm on dense problems from publicly
available benchmarks, as well as on a new benchmark for 6D object pose
estimation. We also provide an ablation study with respect to graph density.
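
For intuition, here is a minimal NumPy sketch of a generic MPLP-style dual
block-coordinate ascent update on a single edge of a pairwise model. It
illustrates the principle the paper builds on, not the exact MPLP++ rule
(the paper's small change modifies precisely this kind of update); the
function and variable names are our own.

```python
import numpy as np

def mplp_edge_update(theta_u, theta_v, theta_uv, rest_u, rest_v):
    """One MPLP-style block-coordinate ascent step on edge (u, v).

    theta_u, theta_v : unary potentials, shapes (Lu,) and (Lv,)
    theta_uv         : pairwise potentials, shape (Lu, Lv)
    rest_u, rest_v   : sums of messages entering u and v from all
                       other edges (this edge's own messages excluded)
    Returns the new messages this edge sends to u and to v.
    """
    # Reparametrized unaries: what the rest of the graph says about u and v.
    lam_u = theta_u + rest_u
    lam_v = theta_v + rest_v
    # Absorb both unaries into the pairwise factor, take max-marginals,
    # and split the resulting slack evenly between the two endpoints.
    aug = theta_uv + lam_u[:, None] + lam_v[None, :]
    msg_to_u = 0.5 * aug.max(axis=1) - lam_u
    msg_to_v = 0.5 * aug.max(axis=0) - lam_v
    return msg_to_u, msg_to_v
```

Each such step is monotone in the dual bound, and edges sharing no endpoint
touch disjoint messages and can be updated simultaneously; this independence
is the kind of structure the proposed GPU and multi-thread CPU
implementations exploit on dense graphs.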
Related papers
- CLAP: Concave Linear APproximation for Quadratic Graph Matching [5.417323487240968]
We introduce a linear model and design a novel approximation matrix for graph matching.
We then transform the original quadratic assignment problem (QAP) into a linear model that is concave for the structural constraint.
This model can be solved using the Sinkhorn optimal transport algorithm (a minimal sketch appears after this list).
arXiv Detail & Related papers (2024-10-22T15:28:18Z)
- Maximum Independent Set: Self-Training through Dynamic Programming [56.670639478539485]
This work presents a graph neural network (GNN) framework for solving the maximum independent set (MIS) problem, inspired by dynamic programming (DP).
We propose a DP-like recursive algorithm based on GNNs that first constructs two smaller sub-graphs, predicts the one with the larger MIS, and then uses it in the next recursive call.
Annotating comparisons of different graphs by their MIS size drives a self-training process: more accurate predictions yield more accurate self-annotated comparisons, and vice versa.
arXiv Detail & Related papers (2023-10-28T10:58:25Z)
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the explicit computation of the large covariance matrices on which existing inference relies.
arXiv Detail & Related papers (2023-09-30T15:57:14Z)
- ParaGraph: Weighted Graph Representation for Performance Optimization of HPC Kernels [1.304892050913381]
We introduce a new graph-based program representation for parallel applications that extends the Abstract Syntax Tree.
We evaluate our proposed representation by training a Graph Neural Network (GNN) to predict the runtime of an OpenMP code region.
Results show that our approach is effective, with a normalized RMSE between 0.004 and 0.01 in its runtime predictions.
arXiv Detail & Related papers (2023-04-07T05:52:59Z)
- FastDOG: Fast Discrete Optimization on GPU [23.281726932718232]
We present a massively parallel Lagrange decomposition method for solving 0-1 integer linear programs occurring in structured prediction.
Our primal and dual algorithms require little synchronization between subproblems, and optimization over binary decision diagrams (BDDs) needs only elementary operations.
We come close to or outperform some state-of-the-art specialized algorithms while remaining problem-agnostic.
arXiv Detail & Related papers (2021-11-19T15:20:10Z)
- Hybrid Models for Learning to Branch [81.93868699246214]
We propose a new hybrid architecture for efficient branching on CPU machines.
The proposed architecture combines the expressive power of GNNs with computationally inexpensive multi-layer perceptrons (MLP) for branching.
arXiv Detail & Related papers (2020-06-26T21:03:45Z)
- Kernel methods through the roof: handling billions of points efficiently [94.31450736250918]
Kernel methods provide an elegant and principled approach to nonparametric learning, but so far they could hardly be used in large-scale problems.
Recent advances have shown the benefits of a number of algorithmic ideas, for example combining optimization, numerical linear algebra and random projections.
Here, we push these efforts further to develop and test a solver that takes full advantage of GPU hardware.
arXiv Detail & Related papers (2020-06-18T08:16:25Z)
- Heterogeneous CPU+GPU Stochastic Gradient Descent Algorithms [1.3249453757295084]
We study training algorithms for deep learning on heterogeneous CPU+GPU architectures.
Our two-fold objective, maximizing convergence rate and resource utilization simultaneously, makes the problem challenging.
We show that our implementation of these algorithms achieves both faster convergence and higher resource utilization on several real datasets.
arXiv Detail & Related papers (2020-04-19T05:21:20Z)
- Taxonomy of Dual Block-Coordinate Ascent Methods for Discrete Energy Minimization [96.1052289276254]
We consider the maximum-a-posteriori inference problem in discrete graphical models and study solvers based on the dual block-coordinate ascent rule.
We map all existing solvers into a single framework, allowing for a better understanding of their design principles.
arXiv Detail & Related papers (2020-04-16T15:49:13Z)
- GLSearch: Maximum Common Subgraph Detection via Learning to Search [33.9052190473029]
We propose GLSearch, a Graph Neural Network (GNN) based learning-to-search model.
Our model is built upon the branch and bound algorithm, which selects one pair of nodes from the two input graphs to expand at a time.
Our GLSearch can potentially be extended to solve many other problems with constraints on graphs.
arXiv Detail & Related papers (2020-02-08T10:03:40Z)
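
The CLAP entry above mentions solving its concave linear model with Sinkhorn;
as a concrete reference point, here is a minimal Sinkhorn-Knopp sketch in
NumPy that rescales a cost matrix into an approximately doubly stochastic
soft assignment. The function name, uniform marginals, and parameters are our
illustrative assumptions, not CLAP's actual implementation.

```python
import numpy as np

def sinkhorn(cost, n_iters=200, eps=0.05):
    """Entropic-OT soft assignment via Sinkhorn-Knopp scaling.

    cost : (n, n) matching cost between nodes of two graphs
    eps  : regularization strength (smaller gives a sharper matching but
           risks numerical underflow; log-domain variants are safer)
    Returns an (n, n) matrix whose rows and columns approximately sum to 1.
    """
    K = np.exp(-cost / eps)       # Gibbs kernel of the cost
    u = np.ones(cost.shape[0])
    v = np.ones(cost.shape[1])
    for _ in range(n_iters):
        u = 1.0 / (K @ v)         # rescale rows toward unit sums
        v = 1.0 / (K.T @ u)       # rescale columns toward unit sums
    return u[:, None] * K * v[None, :]
```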