Exact MAP-Inference by Confining Combinatorial Search with LP Relaxation
- URL: http://arxiv.org/abs/2004.06370v1
- Date: Tue, 14 Apr 2020 09:10:47 GMT
- Title: Exact MAP-Inference by Confining Combinatorial Search with LP Relaxation
- Authors: Stefan Haller, Paul Swoboda, Bogdan Savchynskyy
- Abstract summary: We propose a family of relaxations which naturally define lower bounds for the problem's optimum.
This family always contains a tight relaxation, and we give an algorithm able to find it and therefore solve the initial non-relaxed NP-hard problem.
The relaxations we consider decompose the original problem into two non-overlapping parts: an easy LP-tight part and a difficult one.
- Score: 19.660527989370646
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the MAP-inference problem for graphical models, which is a valued
constraint satisfaction problem defined on real numbers with a natural
summation operation. We propose a family of relaxations (different from the
famous Sherali-Adams hierarchy), which naturally define lower bounds for its
optimum. This family always contains a tight relaxation, and we give an
algorithm able to find it and therefore solve the initial non-relaxed NP-hard
problem.
The relaxations we consider decompose the original problem into two
non-overlapping parts: an easy LP-tight part and a difficult one. For the
latter part a combinatorial solver must be used. As we show in our experiments,
in a number of applications the second, difficult part constitutes only a small
fraction of the whole problem. This property allows us to significantly reduce
the computational time of the combinatorial solver and thereby solve problems
that were previously out of reach.
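The decomposition described in the abstract can be illustrated in a few lines: solve a relaxation, fix the labels of the LP-tight part, and run combinatorial search only over the remaining difficult variables. The sketch below is a toy illustration of this idea, not the authors' algorithm; the relaxation step is replaced by a given labeling and a hypothetical `difficult` index set, and the combinatorial solver is plain enumeration over a small pairwise model.

```python
import itertools

def map_energy(labeling, unary, pairwise, edges):
    # Energy of a labeling in a pairwise graphical model:
    # sum of unary costs plus pairwise costs along the edges.
    energy = sum(unary[i][labeling[i]] for i in range(len(labeling)))
    energy += sum(pairwise[(i, j)][labeling[i]][labeling[j]] for (i, j) in edges)
    return energy

def confined_search(unary, pairwise, edges, easy_labels, difficult):
    # Enumerate labelings of the 'difficult' variables only, keeping the
    # labels of the easy (LP-tight) part fixed at the relaxation solution.
    num_labels = len(unary[0])
    best_labeling, best_energy = None, float("inf")
    for assignment in itertools.product(range(num_labels), repeat=len(difficult)):
        labeling = list(easy_labels)
        for var, label in zip(difficult, assignment):
            labeling[var] = label
        energy = map_energy(labeling, unary, pairwise, edges)
        if energy < best_energy:
            best_labeling, best_energy = labeling, energy
    return best_labeling, best_energy

# Toy chain model: 3 variables, 2 labels, Potts pairwise costs.
unary = [[0, 5], [5, 0], [0, 5]]
potts = [[0, 1], [1, 0]]
edges = [(0, 1), (1, 2)]
pairwise = {e: potts for e in edges}
# Suppose the relaxation produced the labeling [0, 0, 0] and flagged
# only variable 1 as difficult; search is confined to that variable.
labeling, energy = confined_search(unary, pairwise, edges, [0, 0, 0], [1])
# labeling == [0, 1, 0], energy == 2
```

Because the enumeration is exponential only in the size of the difficult set, the cost stays small whenever that set is a small fraction of the whole problem, which is the property the abstract emphasizes.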
Related papers
- Don't Be Greedy, Just Relax! Pruning LLMs via Frank-Wolfe [61.68406997155879]
State-of-the-art Large Language Model (LLM) pruning methods operate layer-wise, minimizing the per-layer pruning error on a small dataset to avoid full retraining. Existing methods hence rely on greedy schemes that ignore the weight interactions in the pruning objective. Our method drastically reduces the per-layer pruning error, outperforms strong baselines on state-of-the-art GPT architectures, and remains memory-efficient.
arXiv Detail & Related papers (2025-10-15T16:13:44Z) - Improving Decision Trees through the Lens of Parameterized Local Search [9.426097667758627]
We study minimizing the number of classification errors by performing a fixed number of a single type of local search operations. We provide a proof-of-concept implementation of this algorithm and report on empirical results.
arXiv Detail & Related papers (2025-10-14T17:06:13Z) - Single-loop Algorithms for Stochastic Non-convex Optimization with Weakly-Convex Constraints [49.76332265680669]
This paper examines a crucial subset of problems where both the objective and constraint functions are weakly convex.
Existing methods often face limitations, including slow convergence rates or reliance on double-loop designs.
We introduce a novel single-loop penalty-based algorithm to overcome these challenges.
arXiv Detail & Related papers (2025-04-21T17:15:48Z) - Maximum a Posteriori Inference for Factor Graphs via Benders' Decomposition [0.38233569758620056]
We present a method for maximum a posteriori inference in general Bayesian factor models.
We derive MAP estimation algorithms for the Bayesian Gaussian mixture model and latent Dirichlet allocation.
arXiv Detail & Related papers (2024-10-24T19:57:56Z) - Feature selection in linear SVMs via hard cardinality constraint: a scalable SDP decomposition approach [3.7876216422538485]
We study the embedded feature selection problem in linear Support Vector Machines (SVMs).
A cardinality constraint is employed, leading to a fully explainable selection model.
The problem is NP-hard due to the presence of the cardinality constraint.
arXiv Detail & Related papers (2024-04-15T19:15:32Z) - Polynomial-Time Solutions for ReLU Network Training: A Complexity
Classification via Max-Cut and Zonotopes [70.52097560486683]
We prove that the hardness of approximation of ReLU networks not only mirrors the complexity of the Max-Cut problem but also, in certain special cases, exactly corresponds to it.
In particular, when $\epsilon \leq \sqrt{84/83}-1 \approx 0.006$, we show that it is NP-hard to find an approximate global optimum of the ReLU network objective with relative error $\epsilon$ with respect to the objective value.
arXiv Detail & Related papers (2023-11-18T04:41:07Z) - First-order Methods for Affinely Constrained Composite Non-convex
Non-smooth Problems: Lower Complexity Bound and Near-optimal Methods [23.948126336842634]
We make the first attempt to establish lower complexity bounds of first-order FOMs for solving a composite non-smooth optimization problem.
We find that our method and the proposed IPG method are almost unimprovable.
arXiv Detail & Related papers (2023-07-14T19:59:18Z) - Sample Complexity for Quadratic Bandits: Hessian Dependent Bounds and
Optimal Algorithms [64.10576998630981]
We show the first tight characterization of the optimal Hessian-dependent sample complexity.
A Hessian-independent algorithm universally achieves the optimal sample complexities for all Hessian instances.
The optimal sample complexities achieved by our algorithm remain valid for heavy-tailed noise distributions.
arXiv Detail & Related papers (2023-06-21T17:03:22Z) - Optimal Algorithms for Stochastic Complementary Composite Minimization [55.26935605535377]
Inspired by regularization techniques in statistics and machine learning, we study complementary composite minimization.
We provide novel excess risk bounds, both in expectation and with high probability.
Our algorithms are nearly optimal, which we prove via novel lower complexity bounds for this class of problems.
arXiv Detail & Related papers (2022-11-03T12:40:24Z) - On the Complexity of a Practical Primal-Dual Coordinate Method [63.899427212054995]
We prove complexity bounds for the primal-dual algorithm with random extrapolation and coordinate descent (PURE-CD).
It has been shown to obtain good practical performance for solving bilinear min-max problems.
arXiv Detail & Related papers (2022-01-19T16:14:27Z) - Adaptive Combinatorial Allocation [77.86290991564829]
We consider settings where an allocation has to be chosen repeatedly, returns are unknown but can be learned, and decisions are subject to constraints.
Our model covers two-sided and one-sided matching, even with complex constraints.
arXiv Detail & Related papers (2020-11-04T15:02:59Z) - A Hölderian backtracking method for min-max and min-min problems [0.0]
We present a new algorithm to solve min-max or min-min problems out of the convex world.
We use rigidity assumptions, ubiquitous in learning, making our method applicable to many optimization problems.
arXiv Detail & Related papers (2020-07-17T08:12:31Z) - Conditional gradient methods for stochastically constrained convex
minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
arXiv Detail & Related papers (2020-07-07T21:26:35Z) - Naive Feature Selection: a Nearly Tight Convex Relaxation for Sparse Naive Bayes [51.55826927508311]
We propose a sparse version of naive Bayes, which can be used for feature selection.
We prove that our convex relaxation bound becomes tight as the marginal contribution of additional features decreases.
Both binary and multinomial sparse models are solvable in time almost linear in problem size.
arXiv Detail & Related papers (2019-05-23T19:30:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.