Non-approximate Inference for Collective Graphical Models on Path Graphs
via Discrete Difference of Convex Algorithm
- URL: http://arxiv.org/abs/2102.09191v1
- Date: Thu, 18 Feb 2021 07:28:18 GMT
- Title: Non-approximate Inference for Collective Graphical Models on Path Graphs
via Discrete Difference of Convex Algorithm
- Authors: Yasunori Akagi, Naoki Marumo, Hideaki Kim, Takeshi Kurashima and
Hiroyuki Toda
- Abstract summary: This paper proposes a new method for MAP inference for Collective Graphical Models (CGMs) on path graphs.
First, we show that the MAP inference problem can be formulated as a (non-linear) minimum cost flow problem.
The problem is then solved with the Difference of Convex Algorithm (DCA), whose important subroutines can be calculated efficiently by minimum convex cost flow algorithms.
- Score: 19.987509826212115
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The importance of aggregated count data, which is calculated from the data of
multiple individuals, continues to increase. Collective Graphical Model (CGM)
is a probabilistic approach to the analysis of aggregated data. One of the most
important operations in CGM is maximum a posteriori (MAP) inference of
unobserved variables under given observations. Because the MAP inference
problem for general CGMs has been shown to be NP-hard, an approach that solves
an approximate problem has been proposed. However, this approach has two major
drawbacks. First, the quality of the solution deteriorates when the values in
the count tables are small, because the approximation becomes inaccurate.
Second, since continuous relaxation is applied, the integrality constraints of
the output are violated. To resolve these problems, this paper proposes a new
method for MAP inference for CGMs on path graphs. First, we show that the MAP
inference problem can be formulated as a (non-linear) minimum cost flow
problem. Then, we apply the Difference of Convex Algorithm (DCA), a general
methodology for minimizing a function represented as the sum of a convex
function and a concave function. In our algorithm, the important subroutines of
DCA can be calculated efficiently by minimum convex cost flow algorithms.
Experiments show that the proposed method outputs higher-quality solutions than
the conventional approach.
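To make the optimization step concrete, below is a minimal, illustrative sketch of the generic DCA loop in Python. The callbacks `grad_concave` (a (sub)gradient of the concave part) and `solve_convex_subproblem` (standing in for the paper's minimum convex cost flow solver) are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

def dca(grad_concave, solve_convex_subproblem, z0, max_iter=100):
    """Generic Difference of Convex Algorithm (DCA) loop.

    The objective is f(z) = g(z) + h(z), with g convex and h concave.
    Each iteration linearizes h at the current iterate and minimizes the
    convex surrogate g(z) + <grad h(z_k), z> exactly.  In the paper this
    surrogate is a minimum convex cost flow problem on a path graph; here
    `solve_convex_subproblem` is a hypothetical stand-in for such a solver.
    """
    z = np.asarray(z0)
    for _ in range(max_iter):
        slope = grad_concave(z)                        # linearize the concave part
        z_next = np.asarray(solve_convex_subproblem(slope))
        if np.array_equal(z_next, z):                  # fixed point: no further descent
            break
        z = z_next                                     # the objective is non-increasing here
    return z
```

Under the paper's formulation, each surrogate is solved exactly as a minimum convex cost flow problem with integral solutions, which is how the method avoids the integrality violations of the continuous-relaxation baseline.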
Related papers
- Maximum a Posteriori Inference for Factor Graphs via Benders' Decomposition [0.38233569758620056]
We present a method for maximum a posteriori inference in general Bayesian factor models.
We derive MAP estimation algorithms for the Bayesian Gaussian mixture model and latent Dirichlet allocation.
arXiv Detail & Related papers (2024-10-24T19:57:56Z)
- Efficient Graph Laplacian Estimation by Proximal Newton [12.05527862797306]
A graph learning problem can be formulated as a maximum likelihood estimation (MLE) of the precision matrix.
We develop a second-order approach to obtain an efficient solver utilizing several algorithmic features.
arXiv Detail & Related papers (2023-02-13T15:13:22Z)
- Posterior and Computational Uncertainty in Gaussian Processes [52.26904059556759]
Gaussian processes scale prohibitively with the size of the dataset.
Many approximation methods have been developed, which inevitably introduce approximation error.
This additional source of uncertainty, due to limited computation, is entirely ignored when using the approximate posterior.
We develop a new class of methods that provides consistent estimation of the combined uncertainty arising from both the finite number of data observed and the finite amount of computation expended.
arXiv Detail & Related papers (2022-05-30T22:16:25Z)
- Mean Field Approximation for solving QUBO problems [0.0]
We show that the statistical physics approach and the quantum mechanical approach to mean field annealing give the same result.
Our methods consist of a set of simple gradient-based minimizations with continuous variables, and are thus easy to simulate.
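As a rough illustration of the mean-field idea (not the cited paper's exact scheme), the sketch below relaxes the binary variables of a QUBO to probabilities and updates them under a decreasing temperature; the random instance and the annealing schedule are assumptions for demonstration only.

```python
import numpy as np

def mean_field_qubo(Q, sweeps=300, t_start=2.0, t_end=0.05, seed=0):
    """Toy mean-field annealing for the QUBO  min_{x in {0,1}^n} x^T Q x.

    Binary variables are relaxed to probabilities p in (0, 1); each sweep
    recomputes the local fields of the relaxed energy p^T Q p and pushes p
    towards a hard assignment as the temperature decreases.
    """
    rng = np.random.default_rng(seed)
    p = rng.uniform(0.4, 0.6, size=Q.shape[0])            # soft spins near 1/2
    for t in np.linspace(t_start, t_end, sweeps):
        h = (Q + Q.T) @ p                                  # local fields
        p = 1.0 / (1.0 + np.exp(np.clip(h / t, -60, 60)))  # mean-field update, sigmoid(-h/t)
    return (p > 0.5).astype(int)                           # round to a binary candidate solution

# toy usage on a random symmetric instance (assumption)
Q = np.random.default_rng(1).standard_normal((10, 10))
x_hat = mean_field_qubo(Q)
```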
arXiv Detail & Related papers (2021-06-06T20:35:28Z)
- Parallel Stochastic Mirror Descent for MDPs [72.75921150912556]
We consider the problem of learning the optimal policy for infinite-horizon Markov decision processes (MDPs).
A variant of Mirror Descent is proposed for convex programming problems with Lipschitz-continuous functionals.
We analyze this algorithm in the general case and obtain an estimate of the convergence rate that does not accumulate errors during the operation of the method.
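For readers unfamiliar with Mirror Descent, here is a minimal sketch of the classic entropic variant on the probability simplex (exponentiated gradient). It is a generic textbook instance, not the parallel stochastic scheme of the cited paper, and the toy objective is an assumption.

```python
import numpy as np

def entropic_mirror_descent(grad, x0, steps=200, eta=0.1):
    """Mirror descent with the entropy mirror map on the probability simplex.

    The multiplicative update x <- x * exp(-eta * grad(x)), followed by
    renormalization, keeps every iterate on the simplex without an explicit
    projection step.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x * np.exp(-eta * grad(x))
        x = x / x.sum()
    return x

# toy usage: minimize the convex quadratic <x, A x> over the simplex (assumption)
A = np.diag([1.0, 2.0, 3.0])
x_opt = entropic_mirror_descent(lambda x: 2.0 * A @ x, np.ones(3) / 3.0)
```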
arXiv Detail & Related papers (2021-02-27T19:28:39Z)
- Efficient semidefinite-programming-based inference for binary and multi-class MRFs [83.09715052229782]
We propose an efficient method for computing the partition function or MAP estimate in a pairwise MRF.
We extend semidefinite relaxations from the typical binary MRF to the full multi-class setting, and develop a compact semidefinite relaxation that can again be solved efficiently using the solver.
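A compact way to see what a semidefinite relaxation of a pairwise binary MRF looks like is the classic lift X = x x^T with unit diagonal. The hedged sketch below, using CVXPY with a random coupling matrix as a stand-in, is the textbook relaxation plus randomized rounding, not the specialized solver of the cited paper.

```python
import cvxpy as cp
import numpy as np

# Hypothetical pairwise couplings of a small binary ({-1,+1}) MRF (assumption).
rng = np.random.default_rng(0)
n = 8
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2.0

# Classic SDP relaxation of max_x x^T Q x over x in {-1,+1}^n:
# replace X = x x^T by any PSD matrix with unit diagonal.
X = cp.Variable((n, n), symmetric=True)
problem = cp.Problem(cp.Maximize(cp.trace(Q @ X)),
                     [X >> 0, cp.diag(X) == 1])
problem.solve(solver=cp.SCS)

# Goemans-Williamson style randomized rounding back to a labeling.
L = np.linalg.cholesky(X.value + 1e-6 * np.eye(n))
x_round = np.sign(L @ rng.standard_normal(n))
```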
arXiv Detail & Related papers (2020-12-04T15:36:29Z)
- Efficient Consensus Model based on Proximal Gradient Method applied to Convolutional Sparse Problems [2.335152769484957]
We derive and detail a theoretical analysis of an efficient consensus algorithm based on the proximal gradient (PG) approach.
The proposed algorithm is also applied to another particular convolutional problem for the anomaly detection task.
arXiv Detail & Related papers (2020-11-19T20:52:48Z)
- Boosting Data Reduction for the Maximum Weight Independent Set Problem Using Increasing Transformations [59.84561168501493]
We introduce new generalized data reduction and transformation rules for the maximum weight independent set problem.
Surprisingly, these so-called increasing transformations can simplify the problem and also open up the reduction space to yield even smaller irreducible graphs later in the algorithm.
Our algorithm computes significantly smaller irreducible graphs on all except one instance, solves more instances to optimality than previously possible, is up to two orders of magnitude faster than the best state-of-the-art solver, and finds higher-quality solutions than solvers DynWVC and HILS.
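To give a flavor of what a data-reduction rule is, the sketch below implements one classic rule for maximum weight independent set (the degree-one rule). It is a standard textbook reduction, far simpler than the generalized and increasing transformations introduced in the cited paper, and the networkx representation with a "weight" node attribute is an assumption.

```python
import networkx as nx

def degree_one_reduction(G):
    """Repeatedly apply one classic MWIS reduction rule.

    If a vertex v has exactly one neighbor u and weight(v) >= weight(u),
    then some optimal solution contains v: take v and remove both v and u.
    Weights are read from the node attribute "weight" (an assumption).
    """
    taken, changed = [], True
    while changed:
        changed = False
        for v in list(G.nodes):
            if G.degree(v) == 1:
                (u,) = G.neighbors(v)
                if G.nodes[v]["weight"] >= G.nodes[u]["weight"]:
                    taken.append(v)
                    G.remove_nodes_from([v, u])
                    changed = True
                    break
    return taken, G   # vertices forced into the solution, and the reduced graph

# toy usage on a weighted path (assumption)
G = nx.path_graph(5)
nx.set_node_attributes(G, {i: float(i + 1) for i in G.nodes}, "weight")
forced, reduced = degree_one_reduction(G)
```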
arXiv Detail & Related papers (2020-08-12T08:52:50Z)
- Efficient algorithms for multivariate shape-constrained convex regression problems [9.281671380673306]
We prove that the least squares estimator is computable via solving a constrained convex quadratic programming (QP) problem with $(n+1)d$ variables and at least $n(n-1)$ linear inequality constraints.
For solving the generally very large-scale convex QP, we design two efficient algorithms: one is the symmetric Gauss-Seidel based alternating direction method of multipliers (sGS-ADMM), and the other is the proximal augmented Lagrangian method (pALM) with the subproblems solved by the semismooth Newton method.
arXiv Detail & Related papers (2020-02-26T11:18:43Z)
- Approximate MMAP by Marginal Search [78.50747042819503]
We present a strategy for marginal MAP queries in graphical models.
The proposed confidence measure properly detects instances for which the algorithm is accurate.
For sufficiently high confidence levels, the algorithm gives the exact solution or an approximation whose Hamming distance from the exact one is small.
arXiv Detail & Related papers (2020-02-12T07:41:13Z)
- Polynomial-Time Exact MAP Inference on Discrete Models with Global Dependencies [83.05591911173332]
The junction tree algorithm is the most general solution for exact MAP inference with run-time guarantees.
We propose a new graph transformation technique via node cloning which ensures that our target problem can be solved in a run-time independent of the form of the corresponding clique tree.
arXiv Detail & Related papers (2019-12-27T13:30:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.