Improving the filtering of Branch-And-Bound MDD solver (extended)
- URL: http://arxiv.org/abs/2104.11951v1
- Date: Sat, 24 Apr 2021 13:42:42 GMT
- Title: Improving the filtering of Branch-And-Bound MDD solver (extended)
- Authors: Xavier Gillard, Vianney Coppé, Pierre Schaus, André Augusto Cire
- Abstract summary: This paper presents and evaluates two pruning techniques to reinforce the efficiency of constraint optimization solvers based on multi-valued decision diagrams (MDDs).
It adopts the branch-and-bound framework proposed by Bergman et al. in 2016 to solve dynamic programs to optimality.
- Score: 11.728147523456702
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents and evaluates two pruning techniques that
reinforce the efficiency of constraint optimization solvers based on
multi-valued decision diagrams (MDDs). It adopts the branch-and-bound framework
proposed by Bergman et al. in 2016 to solve dynamic programs to optimality. In
particular, our paper presents and evaluates the effectiveness of local-bound
(LocB) and rough-upper-bound (RUB) pruning. LocB is a new and effective rule
that leverages the approximate MDD structure to avoid the exploration of
uninteresting nodes. RUB is a rule that reduces the search space during the
development of bounded-width MDDs. The experimental study we conducted on the
Maximum Independent Set Problem (MISP), the Maximum Cut Problem (MCP), Maximum
2-Satisfiability (MAX2SAT) and the Traveling Salesman Problem with Time Windows
(TSPTW) provides evidence that rough-upper-bound and local-bound pruning have a
high impact on optimization solvers based on branch-and-bound with MDDs. In
particular, it shows that RUB delivers excellent results but requires some
effort when defining the model. It also shows that LocB provides a significant
improvement automatically, without requiring any user-supplied information.
Finally, it shows that rough-upper-bound and local-bound pruning are not
mutually exclusive, and that their combined benefit exceeds the individual
benefit of using each technique.
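To make the two rules concrete, here is a minimal, hedged sketch of the branch-and-bound scheme the abstract describes, applied to a toy 0/1 knapsack dynamic program: bounded-width restricted and relaxed decision diagrams are compiled at each open node, RUB pruning discards partial states that cannot beat the incumbent while a diagram is being developed, and the local bound of each node on the last exact layer is read off a backward pass over the relaxed diagram. The knapsack model and every name below (compile_dd, rub, WIDTH, ...) are illustrative assumptions; this is not the authors' solver, which targets MISP, MCP, MAX2SAT and TSPTW models.

```python
import heapq

# Toy 0/1 knapsack instance (illustrative only).
WEIGHTS, VALUES, CAPACITY = [4, 3, 2, 3], [6, 4, 3, 5], 7
N, WIDTH = len(WEIGHTS), 2          # WIDTH caps the layer size of the approximate DDs
NEG_INF = float("-inf")


def rub(depth, value):
    """Rough upper bound: value accumulated so far plus every remaining item, capacity ignored."""
    return value + sum(VALUES[depth:])


def compile_dd(depth0, state0, value0, relaxed, incumbent):
    """Develop a restricted (drop nodes) or relaxed (merge nodes) DD of width <= WIDTH.

    Returns (best terminal value, was_exact, cutset); the cutset lists
    (depth, state, value, local bound) for the nodes of the last exact layer.
    """
    layers, arcs = [{state0: value0}], []   # layers[i][state] = best value reaching that state
    last_exact = N                          # N means the whole DD stayed exact
    for d in range(depth0, N):
        nxt, layer_arcs = {}, []
        for cap, val in layers[-1].items():
            for take in (0, 1):
                if take and WEIGHTS[d] > cap:
                    continue
                s, v = cap - take * WEIGHTS[d], val + take * VALUES[d]
                if rub(d + 1, v) <= incumbent:   # RUB pruning while the DD is being developed
                    continue
                layer_arcs.append((cap, s, take * VALUES[d]))
                nxt[s] = max(nxt.get(s, NEG_INF), v)
        if len(nxt) > WIDTH:                     # width overflow: this layer becomes approximate
            last_exact = min(last_exact, d)
            ranked = sorted(nxt.items(), key=lambda kv: kv[1], reverse=True)
            if relaxed:                          # merge the worst nodes into one relaxed node
                kept, merged = dict(ranked[:WIDTH - 1]), ranked[WIDTH - 1:]
                rep = max(s for s, _ in merged)  # keep the largest capacity: a valid relaxation
                remap = {s: (s if s in kept else rep) for s in nxt}
                kept[rep] = max(v for _, v in merged)
                layer_arcs = [(p, remap[c], w) for p, c, w in layer_arcs]
                nxt = kept
            else:                                # restricted DD: simply drop the worst nodes
                nxt = dict(ranked[:WIDTH])
        layers.append(nxt)
        arcs.append(layer_arcs)
    best = max(layers[-1].values(), default=NEG_INF)
    cutset = []
    if relaxed and last_exact < N:
        # LocB: a backward pass over the relaxed DD yields, for every node of the
        # last exact layer, the best completion value reachable below it.
        suffix = {s: 0 for s in layers[-1]}
        for d in range(N - 1, last_exact - 1, -1):
            prev = {}
            for parent, child, w in arcs[d - depth0]:
                if child in suffix:
                    prev[parent] = max(prev.get(parent, NEG_INF), w + suffix[child])
            suffix = prev
        for s, v in layers[last_exact - depth0].items():
            cutset.append((last_exact, s, v, v + suffix.get(s, NEG_INF)))
    return best, last_exact == N, cutset


def branch_and_bound():
    incumbent = NEG_INF
    fringe = [(-rub(0, 0), 0, CAPACITY, 0)]      # (-upper bound, depth, state, value)
    while fringe:
        neg_ub, depth, state, value = heapq.heappop(fringe)
        if -neg_ub <= incumbent:                 # the node became uninteresting in the meantime
            continue
        lb, exact, _ = compile_dd(depth, state, value, False, incumbent)
        incumbent = max(incumbent, lb)           # a restricted DD may improve the incumbent
        if exact:
            continue                             # the restricted DD solved this subproblem exactly
        ub, exact, cutset = compile_dd(depth, state, value, True, incumbent)
        if exact:
            incumbent = max(incumbent, ub)       # the relaxed DD happened to be exact
            continue
        for d, s, v, locb in cutset:
            node_ub = min(ub, locb, rub(d, v))   # LocB and RUB jointly tighten the node's bound
            if node_ub > incumbent:              # only keep nodes that may beat the incumbent
                heapq.heappush(fringe, (-node_ub, d, s, v))
    return incumbent


if __name__ == "__main__":
    print("optimum:", branch_and_bound())        # 11 on the toy instance (items 0 and 3)
```

Swapping in a different dynamic program only changes the transition, merge and rough-upper-bound definitions in this sketch; the branch-and-bound loop and both pruning tests stay the same.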
Related papers
- LLaMA-Berry: Pairwise Optimization for O1-like Olympiad-Level Mathematical Reasoning [56.273799410256075]
The framework combines Monte Carlo Tree Search (MCTS) with iterative Self-Refine to optimize the reasoning path.
The framework has been tested on general and advanced benchmarks, showing superior performance in terms of search efficiency and problem-solving capability.
arXiv Detail & Related papers (2024-10-03T18:12:29Z)
- MORBDD: Multiobjective Restricted Binary Decision Diagrams by Learning to Sparsify [19.821484909092913]
We focus on binary decision diagrams (BDDs), which construct a graph that represents all feasible solutions to the problem.
We explore how restricted BDDs can be adapted to multiobjective optimization through the use of machine learning (ML).
MORBDD is highly effective at producing very small restricted BDDs with excellent approximation quality, outperforming width-limited restricted BDDs and the well-known evolutionary algorithm NSGA-II.
arXiv Detail & Related papers (2024-03-04T21:04:54Z)
- Implicitly normalized forecaster with clipping for linear and non-linear heavy-tailed multi-armed bandits [85.27420062094086]
Implicitly Normalized Forecaster (INF) is considered an optimal solution for adversarial multi-armed bandit (MAB) problems.
We propose a new version of INF called the Implicitly Normalized Forecaster with clipping (INFclip) for MAB problems with heavy-tailed settings.
We demonstrate that INFclip is optimal for linear heavy-tailed MAB problems and works well for non-linear ones.
arXiv Detail & Related papers (2023-05-11T12:00:43Z)
- Advancing Model Pruning via Bi-level Optimization [89.88761425199598]
Iterative magnitude pruning (IMP) is the predominant pruning method for successfully finding 'winning tickets'.
One-shot pruning methods have been developed, but these schemes are usually unable to find winning tickets as good as IMP.
We show that the proposed bi-level optimization-oriented pruning formulation (termed BiP) is a special class of BLO problems with a bi-linear problem structure.
arXiv Detail & Related papers (2022-10-08T19:19:29Z)
- RoMA: Robust Model Adaptation for Offline Model-based Optimization [115.02677045518692]
We consider the problem of searching for an input that maximizes a black-box objective function given a static dataset of input-output queries.
A popular approach to solving this problem is maintaining a proxy model that approximates the true objective function.
Here, the main challenge is how to avoid adversarially optimized inputs during the search.
arXiv Detail & Related papers (2021-10-27T05:37:12Z)
- Improved Branch and Bound for Neural Network Verification via Lagrangian Decomposition [161.09660864941603]
We improve the scalability of Branch and Bound (BaB) algorithms for formally proving input-output properties of neural networks.
We present a novel activation-based branching strategy and a BaB framework, named Branch and Dual Network Bound (BaDNB).
BaDNB outperforms previous complete verification systems by a large margin, cutting average verification times by factors up to 50 on adversarial properties.
arXiv Detail & Related papers (2021-04-14T09:22:42Z)
- Strong Optimal Classification Trees [8.10995244893652]
We propose an intuitive flow-based MIO formulation for learning optimal binary classification trees.
Our formulation can accommodate side constraints to enable the design of interpretable and fair decision trees.
We show that our proposed approaches are 29 times faster than state-of-the-art MIO-based techniques.
arXiv Detail & Related papers (2021-03-29T21:40:58Z)
- Generalized and Scalable Optimal Sparse Decision Trees [56.35541305670828]
We present techniques that produce optimal decision trees over a variety of objectives.
We also introduce a scalable algorithm that produces provably optimal results in the presence of continuous variables.
arXiv Detail & Related papers (2020-06-15T19:00:11Z)
- Continuous Profit Maximization: A Study of Unconstrained Dr-submodular Maximization [4.649999862713523]
We formulate the continuous profit maximization (CPM-MS) problem, whose domain is on integer lattices.
We introduce a lattice-based double greedy algorithm, which obtains a constant-factor approximation.
We propose a technique called lattice-based iterative pruning, which shrinks the search space effectively.
arXiv Detail & Related papers (2020-04-12T05:35:19Z)
- Reinforcement Learning in Factored MDPs: Oracle-Efficient Algorithms and Tighter Regret Bounds for the Non-Episodic Setting [24.90164851620799]
We study reinforcement learning in non-episodic factored Markov decision processes (FMDPs).
We propose two near-optimal and oracle-efficient algorithms for FMDPs.
Our oracle-efficient algorithms outperform previously proposed near-optimal algorithms on computer network administration simulations.
arXiv Detail & Related papers (2020-02-06T15:19:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.