Automatic MILP Solver Configuration By Learning Problem Similarities
- URL: http://arxiv.org/abs/2307.00670v1
- Date: Sun, 2 Jul 2023 21:31:47 GMT
- Title: Automatic MILP Solver Configuration By Learning Problem Similarities
- Authors: Abdelrahman Hosny, Sherief Reda
- Abstract summary: Mixed Integer Linear Programs (MILP) solvers expose numerous configuration parameters to control their internal algorithms.
We aim to predict configuration parameters for unseen problem instances that yield lower-cost solutions without the time overhead of searching and evaluating configurations.
We show that instances that have similar costs using one solver configuration also have similar costs using another solver configuration in the same runtime environment.
- Score: 1.1585113506994469
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A large number of real-world optimization problems can be formulated as Mixed
Integer Linear Programs (MILP). MILP solvers expose numerous configuration
parameters to control their internal algorithms. Solutions, and their
associated costs or runtimes, are significantly affected by the choice of the
configuration parameters, even when problem instances have the same number of
decision variables and constraints. On one hand, using the default solver
configuration leads to suboptimal solutions. On the other hand, searching and
evaluating a large number of configurations for every problem instance is
time-consuming and, in some cases, infeasible. In this study, we aim to predict
configuration parameters for unseen problem instances that yield lower-cost
solutions without the time overhead of searching and evaluating configurations
at solve time. Toward that goal, we first investigate the cost
correlation of MILP problem instances that come from the same distribution when
solved using different configurations. We show that instances that have similar
costs using one solver configuration also have similar costs using another
solver configuration in the same runtime environment. After that, we present a
methodology based on Deep Metric Learning to learn MILP similarities that
correlate with their final solutions' costs. At inference time, given a new
problem instance, it is first projected into the learned metric space using the
trained model, and configuration parameters are instantly predicted using
previously-explored configurations from the nearest neighbor instance in the
learned embedding space. Empirical results on real-world problem benchmarks
show that our method predicts configuration parameters that improve solutions'
costs by up to 38% compared to existing approaches.
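To make the inference step concrete, here is a minimal sketch of the nearest-neighbor prediction described above. It is not the authors' implementation: the encoder `embed`, the feature vectors, and the parameter names in the example configurations are hypothetical placeholders standing in for a trained deep-metric-learning model and real solver parameters.

```python
import numpy as np

def embed(features) -> np.ndarray:
    """Stand-in for the trained deep-metric-learning encoder.

    In the paper this is a neural network trained so that distances in the
    embedding space correlate with differences in solution cost; the identity
    map here only keeps the sketch self-contained and runnable.
    """
    return np.asarray(features, dtype=float)

def predict_configuration(new_features, pool_features, pool_best_configs):
    """Copy the best-known configuration of the nearest previously explored
    instance in the learned embedding space."""
    query = embed(new_features)
    pool = np.stack([embed(f) for f in pool_features])
    nearest = int(np.argmin(np.linalg.norm(pool - query, axis=1)))
    return pool_best_configs[nearest]

# Toy usage: three previously explored instances and the best configuration
# found for each (placeholder parameter names, not real solver options).
pool_features = [[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]]
pool_best_configs = [
    {"presolve": 2, "heuristic_freq": 10},
    {"presolve": 0, "heuristic_freq": -1},
    {"presolve": 1, "heuristic_freq": 5},
]
print(predict_configuration([0.75, 0.25], pool_features, pool_best_configs))
```

Because the lookup is a single embedding pass plus a nearest-neighbor query, a configuration can be predicted essentially instantly at solve time, which is what lets the method avoid per-instance configuration search.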
Related papers
- Learning Multiple Initial Solutions to Optimization Problems [52.9380464408756]
Sequentially solving similar optimization problems under strict runtime constraints is essential for many applications.
We propose learning to predict multiple diverse initial solutions given the parameters that define the problem instance.
We find significant and consistent improvement with our method across all evaluation settings and demonstrate that it efficiently scales with the number of initial solutions required.
arXiv Detail & Related papers (2024-11-04T15:17:19Z)
- Spatial-temporal-demand clustering for solving large-scale vehicle routing problems with time windows [0.0]
We propose a decompose-route-improve (DRI) framework that groups customers using clustering.
Its similarity metric incorporates customers' spatial, temporal, and demand data.
We apply pruned local search (LS) between solved subproblems to improve the overall solution.
arXiv Detail & Related papers (2024-01-20T06:06:01Z)
- Learning to Configure Mathematical Programming Solvers by Mathematical Programming [0.8075866265341176]
We discuss the issue of finding a good mathematical programming solver configuration for a particular instance of a given problem.
A specific difficulty of learning a good solver configuration is that parameter settings may not all be independent.
We tackle this issue in the second phase of our approach, where we use the learnt information to construct and solve an optimization problem.
arXiv Detail & Related papers (2024-01-10T10:02:01Z)
- A learning-based mathematical programming formulation for the automatic configuration of optimization solvers [0.8075866265341176]
We employ a set of solved instances and configurations in order to learn a performance function of the solver.
We formulate a mixed-integer nonlinear program where the objective/constraints explicitly encode the learnt information.
arXiv Detail & Related papers (2024-01-08T21:10:56Z)
- Let the Flows Tell: Solving Graph Combinatorial Optimization Problems with GFlowNets [86.43523688236077]
Combinatorial optimization (CO) problems are often NP-hard and out of reach for exact algorithms.
GFlowNets have emerged as a powerful machinery to efficiently sample from composite unnormalized densities sequentially.
In this paper, we design Markov decision processes (MDPs) for different problems and propose to train conditional GFlowNets to sample from the solution space.
arXiv Detail & Related papers (2023-05-26T15:13:09Z)
- Factorization of Multi-Agent Sampling-Based Motion Planning [72.42734061131569]
Modern robotics often involves multiple embodied agents operating within a shared environment.
Standard sampling-based algorithms can be used to search for solutions in the robots' joint space.
We integrate the concept of factorization into sampling-based algorithms (SBAs), which requires only minimal modifications to existing methods.
We present a general implementation of a factorized SBA, derive an analytical gain in terms of sample complexity for PRM*, and showcase empirical results for RRG.
arXiv Detail & Related papers (2023-04-01T15:50:18Z)
- The Machine Learning for Combinatorial Optimization Competition (ML4CO): Results and Insights [59.93939636422896]
The ML4CO competition aims at improving state-of-the-art combinatorial optimization solvers by replacing key heuristic components with machine learning models.
The competition featured three challenging tasks: finding the best feasible solution, producing the tightest optimality certificate, and giving an appropriate routing configuration.
arXiv Detail & Related papers (2022-03-04T17:06:00Z)
- Learning Hard Optimization Problems: A Data Generation Perspective [44.4747903763245]
This paper connects the volatility of the training data to a model's ability to approximate it.
It proposes a method for producing (exact or approximate) solutions to optimization problems that are amenable to supervised learning tasks.
arXiv Detail & Related papers (2021-06-04T17:03:44Z)
- Memory Clustering using Persistent Homology for Multimodality- and Discontinuity-Sensitive Learning of Optimal Control Warm-starts [24.576214898129823]
Shooting methods are an efficient approach to solving nonlinear optimal control problems.
Recent work has focused on providing an initial guess from a learned model trained on samples generated during an offline exploration of the problem space.
In this work, we apply tools from algebraic topology to extract information on the underlying structure of the solution space.
arXiv Detail & Related papers (2020-10-02T14:24:59Z)
- Combining Deep Learning and Optimization for Security-Constrained Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of automatic primary response (APR) within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z)
- Learning with Differentiable Perturbed Optimizers [54.351317101356614]
We propose a systematic method to transform optimizers into operations that are differentiable and never locally constant.
Our approach relies on stochastically perturbed optimizers and can be used readily together with existing solvers.
We show how this framework can be connected to a family of losses developed in structured prediction, and give theoretical guarantees for their use in learning tasks.
arXiv Detail & Related papers (2020-02-20T11:11:32Z)