Automatic Algorithm Selection for Pseudo-Boolean Optimization with Given
Computational Time Limits
- URL: http://arxiv.org/abs/2309.03924v1
- Date: Thu, 7 Sep 2023 03:04:50 GMT
- Title: Automatic Algorithm Selection for Pseudo-Boolean Optimization with Given
Computational Time Limits
- Authors: Catalina Pezo and Dorit Hochbaum and Julio Godoy and Roberto Asin-Acha
- Abstract summary: Machine learning (ML) techniques have been proposed to automatically select the best solver from a portfolio of solvers.
These methods, known as meta-solvers, take an instance of a problem and a portfolio of solvers as input.
"Anytime" meta-solvers predict the best-performing solver within the specified time limit.
- Score: 0.9831489366502301
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning (ML) techniques have been proposed to automatically select
the best solver from a portfolio of solvers, based on predicted performance.
These techniques have been applied to various problems, such as Boolean
Satisfiability, Traveling Salesperson, Graph Coloring, and others.
These methods, known as meta-solvers, take an instance of a problem and a
portfolio of solvers as input. They then predict the best-performing solver and
execute it to deliver a solution. Typically, the quality of the solution
improves with a longer computational time. This has led to the development of
anytime selectors, which consider both the instance and a user-prescribed
computational time limit. Anytime meta-solvers predict the best-performing
solver within the specified time limit.
Constructing an anytime meta-solver is considerably more challenging than
building a meta-solver without the "anytime" feature. In this study, we focus
on the task of designing anytime meta-solvers for the NP-hard optimization
problem of Pseudo-Boolean Optimization (PBO), which generalizes Satisfiability
and Maximum Satisfiability problems. The effectiveness of our approach is
demonstrated via extensive empirical study in which our anytime meta-solver
improves dramatically on the performance of Mixed Integer Programming solver
Gurobi, which is the best-performing single solver in the portfolio. For
example, out of all instances and time limits for which Gurobi failed to find
feasible solutions, our meta-solver identified feasible solutions for 47% of
these.
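To make the selection step concrete, here is a minimal sketch of an anytime selector in the spirit described above: a classifier maps cheap instance features plus the user-prescribed time limit to the portfolio solver expected to perform best within that budget. This is not the authors' pipeline; the feature names, solver labels, and the training file "pbo_runs.csv" are hypothetical placeholders.

```python
# Minimal sketch of an "anytime" solver selector: given instance features and a
# user-prescribed time limit, predict which portfolio solver to run.
# NOT the authors' implementation; the feature names, solver labels, and the
# training file "pbo_runs.csv" are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical training data: one row per (instance, time limit) pair, with cheap
# syntactic features of the PBO instance and the label of the solver that achieved
# the best objective value within that time limit.
df = pd.read_csv("pbo_runs.csv")
feature_cols = ["num_vars", "num_constraints", "density", "max_coeff", "time_limit"]
X, y = df[feature_cols], df["best_solver"]   # e.g. {"gurobi", "maxsat_based", "local_search"}

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
print("held-out selection accuracy:", accuracy_score(y_test, model.predict(X_test)))

def select_solver(instance_features: dict, time_limit: float) -> str:
    """Return the solver predicted to perform best within time_limit."""
    row = pd.DataFrame([{**instance_features, "time_limit": time_limit}])[feature_cols]
    return model.predict(row)[0]

# The same instance may map to different solvers under different budgets.
features = {"num_vars": 5000, "num_constraints": 12000, "density": 0.01, "max_coeff": 40}
for budget in (10, 60, 600):
    print(budget, "s ->", select_solver(features, budget))
```

Treating the time limit as just another input feature is what lets a single model recommend different solvers for the same instance under different budgets, which is the essence of the anytime setting.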
Related papers
- Learning Multiple Initial Solutions to Optimization Problems [52.9380464408756]
Sequentially solving similar optimization problems under strict runtime constraints is essential for many applications.
We propose learning to predict multiple diverse initial solutions given parameters that define the problem instance.
We find significant and consistent improvement with our method across all evaluation settings and demonstrate that it efficiently scales with the number of initial solutions required.
arXiv Detail & Related papers (2024-11-04T15:17:19Z)
- A Predictive Approach for Selecting the Best Quantum Solver for an Optimization Problem [2.9730678241643815]
We propose a predictive solver selection approach based on supervised machine learning.
In more than 70% of the cases, the best solver is selected, and in about 90% of the problems, a solver in the top two is selected.
This exploration proves the potential of machine learning in quantum solver selection and lays the foundations for its automation.
arXiv Detail & Related papers (2024-08-07T08:14:58Z)
- Landscape Surrogate: Learning Decision Losses for Mathematical
Optimization Under Partial Information [48.784330281177446]
Recent works in learning-integrated optimization have shown promise in settings where the optimization is only partially observed or where general-purpose optimizers perform poorly without expert tuning.
We propose using a smooth and learnable Landscape Surrogate as a replacement for $f \circ \mathbf{g}$.
This surrogate, learnable by neural networks, can be computed faster than the $\mathbf{g}$ solver, provides dense and smooth gradients during training, can generalize to unseen optimization problems, and is efficiently learned via alternating optimization.
arXiv Detail & Related papers (2023-07-18T04:29:16Z)
- Socio-cognitive Optimization of Time-delay Control Problems using
Evolutionary Metaheuristics [89.24951036534168]
Metaheuristics are universal optimization algorithms intended for difficult problems that cannot be solved by classic approaches.
In this paper we aim at constructing a novel socio-cognitive metaheuristic based on castes, and apply several versions of this algorithm to the optimization of a time-delay system model.
arXiv Detail & Related papers (2022-10-23T22:21:10Z)
- The Machine Learning for Combinatorial Optimization Competition (ML4CO):
Results and Insights [59.93939636422896]
The ML4CO competition aims at improving state-of-the-art combinatorial optimization solvers by replacing key heuristic components with machine learning models.
The competition featured three challenging tasks: finding the best feasible solution, producing the tightest optimality certificate, and giving an appropriate routing configuration.
arXiv Detail & Related papers (2022-03-04T17:06:00Z)
- Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly-convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z)
- Learning Primal Heuristics for Mixed Integer Programs [5.766851255770718]
We investigate whether effective primal heuristics can be automatically learned via machine learning.
We propose a new method to represent an optimization problem as a graph, and train a Graph Convolutional Network on solved problem instances with known optimal solutions.
The prediction of variable solutions is then leveraged by a novel configuration of the B&B method, Probabilistic Branching with guided Depth-first Search.
arXiv Detail & Related papers (2021-07-02T06:46:23Z)
- Learning to Schedule Heuristics in Branch-and-Bound [25.79025327341732]
Real-world applications typically require finding good solutions early in the search to enable fast decision-making.
We propose the first data-driven framework for scheduling heuristics in an exact MIP solver.
Compared to the default settings of a state-of-the-art academic MIP solver, we are able to reduce the average primal integral by up to 49% on a class of challenging instances (a minimal sketch of the primal-integral metric appears after this list).
arXiv Detail & Related papers (2021-03-18T14:49:52Z)
- Online Model Selection for Reinforcement Learning with Function
Approximation [50.008542459050155]
We present a meta-algorithm that adapts to the optimal complexity with $\tilde{O}(L^{5/6} T^{2/3})$ regret.
We also show that the meta-algorithm automatically admits significantly improved instance-dependent regret bounds.
arXiv Detail & Related papers (2020-11-19T10:00:54Z)
- Contrastive Losses and Solution Caching for Predict-and-Optimize [19.31153168397003]
We use a Noise Contrastive approach to motivate a family of surrogate loss functions.
We address a major bottleneck of all predict-and-optimize approaches.
We show that even a very slow growth rate of the solution cache is enough to match the quality of state-of-the-art methods.
arXiv Detail & Related papers (2020-11-10T19:09:12Z)
- Combining Reinforcement Learning and Constraint Programming for
Combinatorial Optimization [5.669790037378094]
The goal is to find an optimal solution among a finite set of possibilities.
Deep reinforcement learning (DRL) has shown its promise for solving NP-hard optimization problems.
Constraint programming (CP) is a generic tool to solve optimization problems.
In this work, we propose a general and hybrid approach, based on DRL and CP, for solving optimization problems.
arXiv Detail & Related papers (2020-06-02T13:54:27Z)
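The primal integral referenced in the "Learning to Schedule Heuristics in Branch-and-Bound" entry above is a standard anytime-performance metric: the scaled gap between the incumbent and the best known objective, integrated over the run time. The following is a minimal sketch under that standard definition; it is not code from any of the listed papers, and the incumbent trajectories are invented for illustration.

```python
# Minimal sketch of the primal integral, the anytime-performance metric referred
# to in the Branch-and-Bound scheduling entry above. Not code from the listed
# papers; the incumbent trajectories below are invented examples (minimization).
from typing import List, Tuple

def primal_gap(incumbent: float, best_known: float) -> float:
    """Scaled gap in [0, 1] between an incumbent objective and the best known value."""
    if incumbent == best_known:
        return 0.0
    if incumbent * best_known < 0:          # opposite signs: worst-case gap
        return 1.0
    return abs(best_known - incumbent) / max(abs(best_known), abs(incumbent))

def primal_integral(trajectory: List[Tuple[float, float]], best_known: float,
                    time_limit: float) -> float:
    """Integrate the primal gap over [0, time_limit].

    trajectory: time-sorted (time, objective) pairs, one per new incumbent.
    Before the first incumbent the gap is taken to be 1 (no feasible solution yet).
    """
    integral, prev_t, gap = 0.0, 0.0, 1.0
    for t, obj in trajectory:
        if t > time_limit:
            break
        integral += gap * (t - prev_t)      # gap held constant since the last incumbent
        prev_t, gap = t, primal_gap(obj, best_known)
    integral += gap * (time_limit - prev_t)
    return integral

# Two runs reaching the same final objective but with different anytime behaviour.
fast = [(2.0, 120.0), (10.0, 105.0), (40.0, 100.0)]
slow = [(30.0, 140.0), (55.0, 100.0)]
for name, traj in (("fast", fast), ("slow", slow)):
    print(name, round(primal_integral(traj, best_known=100.0, time_limit=60.0), 2))
```

A solver that finds good feasible solutions early accumulates a much smaller integral than one that reaches the same final objective late, which is exactly the anytime behaviour the meta-solver described above is selected for.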