MORBDD: Multiobjective Restricted Binary Decision Diagrams by Learning
to Sparsify
- URL: http://arxiv.org/abs/2403.02482v1
- Date: Mon, 4 Mar 2024 21:04:54 GMT
- Title: MORBDD: Multiobjective Restricted Binary Decision Diagrams by Learning
to Sparsify
- Authors: Rahul Patel, Elias B. Khalil, David Bergman
- Abstract summary: We focus on binary decision diagrams (BDDs) which first construct a graph that represents all feasible solutions to the problem.
We explore how restricted BDDs can be adapted to multiobjective optimization through the use of machine learning (ML).
MORBDD is highly effective at producing very small restricted BDDs with excellent approximation quality, outperforming width-limited restricted BDDs and the well-known evolutionary algorithm NSGA-II.
- Score: 19.821484909092913
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In multicriteria decision-making, a user seeks a set of non-dominated
solutions to a (constrained) multiobjective optimization problem, the so-called
Pareto frontier. In this work, we seek to bring a state-of-the-art method for
exact multiobjective integer linear programming into the heuristic realm. We
focus on binary decision diagrams (BDDs) which first construct a graph that
represents all feasible solutions to the problem and then traverse the graph to
extract the Pareto frontier. Because the Pareto frontier may be exponentially
large, enumerating it over the BDD can be time-consuming. We explore how
restricted BDDs, which have already been shown to be effective as heuristics
for single-objective problems, can be adapted to multiobjective optimization
through the use of machine learning (ML). MORBDD, our ML-based BDD sparsifier,
first trains a binary classifier to eliminate BDD nodes that are unlikely to
contribute to Pareto solutions, then post-processes the sparse BDD to ensure
its connectivity via optimization. Experimental results on multiobjective
knapsack problems show that MORBDD is highly effective at producing very small
restricted BDDs with excellent approximation quality, outperforming
width-limited restricted BDDs and the well-known evolutionary algorithm
NSGA-II.
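
The abstract describes the MORBDD pipeline only at a high level. The Python sketch below is an illustrative assumption, not the authors' implementation: on a toy bi-objective knapsack, a hand-written scoring heuristic stands in for the trained node classifier, low-scoring BDD nodes are pruned layer by layer, a keep-at-least-one-node rule crudely stands in for the optimization-based connectivity repair, and the surviving solutions are filtered for non-dominance.

# Illustrative sketch of the MORBDD idea on a toy bi-objective knapsack.
# NOT the authors' code: the instance data, the node_score heuristic
# (stand-in for the trained classifier), and the keep-at-least-one-node
# rule (stand-in for the connectivity repair) are all assumptions.

weights = [3, 4, 2, 5]     # item weights (toy data)
values1 = [4, 5, 3, 7]     # profits under objective 1
values2 = [2, 6, 5, 3]     # profits under objective 2
capacity = 8
n = len(weights)

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximization)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_filter(points):
    """Keep only the non-dominated objective vectors."""
    pts = set(points)
    return [p for p in pts if not any(dominates(q, p) for q in pts)]

def node_score(layer, state):
    """Stand-in for MORBDD's trained classifier, which predicts whether a
    node is likely to contribute to Pareto solutions.  Here: an arbitrary
    heuristic preferring nodes whose remaining capacity is 'on schedule'."""
    slack = capacity - state
    target = capacity * (n - layer) / (2 * n)
    return 1.0 / (1.0 + abs(slack - target))

def build_restricted_bdd(max_nodes_per_layer=3):
    """Layer-by-layer knapsack DP where each layer keeps only its
    highest-scoring states (nodes), yielding a restricted BDD.  Keeping at
    least one node per layer crudely mimics the connectivity repair."""
    layers = [{0: [()]}]                      # state -> partial assignments
    for i in range(n):
        nxt = {}
        for state, paths in layers[-1].items():
            for take in (0, 1):
                s = state + take * weights[i]
                if s <= capacity:
                    nxt.setdefault(s, []).extend(p + (take,) for p in paths)
        ranked = sorted(nxt, key=lambda s: node_score(i + 1, s), reverse=True)
        kept = ranked[:max(1, max_nodes_per_layer)]
        layers.append({s: nxt[s] for s in kept})
    return layers

def approximate_pareto_front(layers):
    """Evaluate every solution surviving in the restricted BDD and keep
    the non-dominated objective vectors."""
    points = []
    for paths in layers[-1].values():
        for x in paths:
            points.append((sum(v * t for v, t in zip(values1, x)),
                           sum(v * t for v, t in zip(values2, x))))
    return pareto_filter(points)

if __name__ == "__main__":
    restricted = build_restricted_bdd(max_nodes_per_layer=3)
    print("approximate Pareto frontier:",
          sorted(approximate_pareto_front(restricted)))

Per the abstract, MORBDD itself uses a trained binary classifier for node elimination and an optimization-based post-processing step for connectivity; the heuristic score and the per-layer cap above are only placeholders for those components.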
Related papers
- Balancing Pareto Front exploration of Non-dominated Tournament Genetic Algorithm (B-NTGA) in solving multi-objective NP-hard problems with constraints [0.0]
The paper presents a new balanced selection operator applied to the proposed Balanced Non-dominated Tournament Genetic Algorithm (B-NTGA).
The proposed B-NTGA is evaluated on two real-world multi- and many-objective benchmark optimization problems, the Thief Traveling Problem and the Multi-Skill Resource-Constrained Project Scheduling Problem.
Experimental results show that B-NTGA achieves higher efficiency and better performance than state-of-the-art methods.
arXiv Detail & Related papers (2024-10-08T05:34:13Z) - Dataset Distillation from First Principles: Integrating Core Information Extraction and Purposeful Learning [10.116674195405126]
We argue that a precise characterization of the underlying optimization problem must specify the inference task associated with the application of interest.
Our formalization reveals novel applications of DD across different modeling environments.
We present numerical results for two case studies important in contemporary settings.
arXiv Detail & Related papers (2024-09-02T18:11:15Z) - Towards Efficient Pareto Set Approximation via Mixture of Experts Based Model Fusion [53.33473557562837]
Solving multi-objective optimization problems for large deep neural networks is a challenging task due to the complexity of the loss landscape and the expensive computational cost.
We propose a practical and scalable approach to solve this problem via mixture of experts (MoE) based model fusion.
By ensembling the weights of specialized single-task models, the MoE module can effectively capture the trade-offs between multiple objectives (a simplified weight-fusion sketch appears after this list).
arXiv Detail & Related papers (2024-06-14T07:16:18Z) - Learning Constrained Optimization with Deep Augmented Lagrangian Methods [54.22290715244502]
A machine learning (ML) model is trained to emulate a constrained optimization solver.
This paper proposes an alternative approach, in which the ML model is trained to predict dual solution estimates directly.
This enables an end-to-end training scheme in which the dual objective serves as the loss function and solution estimates are driven toward primal feasibility, emulating a Dual Ascent method.
arXiv Detail & Related papers (2024-03-06T04:43:22Z) - Double-Bounded Optimal Transport for Advanced Clustering and
Classification [58.237576976486544]
We propose Doubly Bounded Optimal Transport (DB-OT), which assumes that the target distribution is restricted within two boundaries instead of a fixed one.
We show that our method can achieve good results with our improved inference scheme in the testing stage.
arXiv Detail & Related papers (2024-01-21T07:43:01Z) - Meta-Learning Adversarial Bandit Algorithms [55.72892209124227]
We study online meta-learning with bandit feedback.
We learn to tune a generalization of online mirror descent (OMD) with self-concordant barrier regularizers.
arXiv Detail & Related papers (2023-07-05T13:52:10Z) - Quant-BnB: A Scalable Branch-and-Bound Method for Optimal Decision Trees
with Continuous Features [5.663538370244174]
We present a new discrete optimization method based on branch-and-bound (BnB) to obtain optimal decision trees.
Our proposed algorithm Quant-BnB shows significant speedups compared to existing approaches for shallow optimal trees on various real datasets.
arXiv Detail & Related papers (2022-06-23T17:19:29Z) - Meta-Learning Adversarial Bandits [49.094361442409785]
We study online learning with bandit feedback across multiple tasks, with the goal of improving average performance across tasks if they are similar according to some natural task-similarity measure.
As the first to target the adversarial setting, we design a meta-algorithm with setting-specific guarantees for two important cases: multi-armed bandits (MAB) and bandit linear optimization (BLO).
Our guarantees rely on proving that unregularized follow-the-leader combined with multiplicative weights is enough to online learn a non-smooth and non-convex sequence of functions.
arXiv Detail & Related papers (2022-05-27T17:40:32Z) - Optimizing Binary Decision Diagrams with MaxSAT for classification [3.2894524838755608]
A growing interest in explainable artificial intelligence motivates the need for interpretable machine learning (ML) models.
Recently, several exact methods for computing such models have been proposed to overcome the weaknesses of traditional methods.
In this paper, we first propose SAT-based models for learning optimal Binary Decision Diagrams (BDDs).
Then, we lift the encoding to a MaxSAT model to learn optimal BDDs of limited depth.
Finally, we tackle the fragmentation problem by introducing a method to merge compatible subtrees for the BDDs found via the MaxSAT model.
arXiv Detail & Related papers (2022-03-21T23:17:37Z) - Improving the filtering of Branch-And-Bound MDD solver (extended) [11.728147523456702]
This paper presents and evaluates two pruning techniques to improve the efficiency of constraint optimization solvers based on multi-valued decision diagrams (MDDs).
It adopts the branch-and-bound framework proposed by Bergman et al. in 2016 to solve dynamic programs to optimality.
arXiv Detail & Related papers (2021-04-24T13:42:42Z) - Improved Branch and Bound for Neural Network Verification via Lagrangian
Decomposition [161.09660864941603]
We improve the scalability of Branch and Bound (BaB) algorithms for formally proving input-output properties of neural networks.
We present a novel activation-based branching strategy and a BaB framework, named Branch and Dual Network Bound (BaDNB).
BaDNB outperforms previous complete verification systems by a large margin, cutting average verification times by factors up to 50 on adversarial properties.
arXiv Detail & Related papers (2021-04-14T09:22:42Z)
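
The entry above on "Towards Efficient Pareto Set Approximation via Mixture of Experts Based Model Fusion" describes capturing objective trade-offs by ensembling the weights of specialized single-task models. The sketch below is not that paper's MoE method; it is a deliberately simplified stand-in that linearly blends the parameters of two hypothetical single-task models under a user preference vector, only to make the idea of weight-space fusion concrete.

# Simplified weight-space fusion (NOT the paper's MoE module).  Model names,
# parameter values, and the fuse_weights helper are hypothetical.

def fuse_weights(state_dicts, preference):
    """Blend the parameters of several single-task models.
    state_dicts: list of {parameter_name: list of floats}, one per task.
    preference: non-negative weights over tasks, summing to 1."""
    assert abs(sum(preference) - 1.0) < 1e-9
    fused = {}
    for name in state_dicts[0]:
        fused[name] = [
            sum(w * sd[name][i] for w, sd in zip(preference, state_dicts))
            for i in range(len(state_dicts[0][name]))
        ]
    return fused

if __name__ == "__main__":
    # Two hypothetical single-task models sharing one tiny architecture.
    model_obj1 = {"layer1.weight": [0.9, 0.1, 0.4], "layer1.bias": [0.0]}
    model_obj2 = {"layer1.weight": [0.2, 0.8, 0.5], "layer1.bias": [0.1]}
    # Sweeping the preference yields a family of trade-off models.
    for lam in (0.0, 0.25, 0.5, 0.75, 1.0):
        blended = fuse_weights([model_obj1, model_obj2], [lam, 1.0 - lam])
        print(lam, blended["layer1.weight"])

Sweeping the preference vector and evaluating each blended model on both objectives traces a coarse approximation of the trade-off curve; the paper's MoE-based fusion is considerably more sophisticated than this fixed linear blend.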