Learning Primal Heuristics for Mixed Integer Programs
- URL: http://arxiv.org/abs/2107.00866v1
- Date: Fri, 2 Jul 2021 06:46:23 GMT
- Title: Learning Primal Heuristics for Mixed Integer Programs
- Authors: Yunzhuang Shen, Yuan Sun, Andrew Eberhard, Xiaodong Li
- Abstract summary: We investigate whether effective primal heuristics can be automatically learned via machine learning.
We propose a new method to represent an optimization problem as a graph, and train a Graph Convolutional Network on solved problem instances with known optimal solutions.
The prediction of variable solutions is then leveraged by a novel configuration of the B&B method, Probabilistic Branching with guided Depth-first Search.
- Score: 5.766851255770718
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes a novel primal heuristic for Mixed Integer Programs, by
employing machine learning techniques. Mixed Integer Programming is a general
technique for formulating combinatorial optimization problems. Inside a solver,
primal heuristics play a critical role in finding good feasible solutions that
enable one to tighten the duality gap from the outset of the Branch-and-Bound
algorithm (B&B), greatly improving its performance by pruning the B&B tree
aggressively. In this paper, we investigate whether effective primal heuristics
can be automatically learned via machine learning. We propose a new method to
represent an optimization problem as a graph, and train a Graph Convolutional
Network on solved problem instances with known optimal solutions. This in turn
can predict the values of decision variables in the optimal solution for an
unseen problem instance of a similar type. The prediction of variable solutions
is then leveraged by a novel configuration of the B&B method, Probabilistic
Branching with guided Depth-first Search (PB-DFS) approach, aiming to find
(near-)optimal solutions quickly. The experimental results show that this new
heuristic can find better primal solutions at a much earlier stage of the
solving process, compared to other state-of-the-art primal heuristics.
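As a rough illustration of the graph representation described in the abstract, the sketch below encodes a small binary program as a bipartite variable-constraint graph and applies one GCN-style propagation step to produce per-variable scores. The features, architecture, and (random, untrained) weights are simplifying assumptions for illustration, not the paper's actual model, which is trained on solved instances whose known optimal solutions supply the labels.

```python
import numpy as np

# Toy MIP: maximize c^T x subject to A x <= b, x binary.
# Variables and constraints form the two node sets of a bipartite graph;
# an edge links variable j and constraint i whenever A[i, j] != 0.
c = np.array([5.0, 4.0, 3.0])          # objective coefficients (variable features)
A = np.array([[2.0, 3.0, 1.0],
              [4.0, 1.0, 2.0]])
b = np.array([5.0, 11.0])              # right-hand sides (constraint features)

n_vars, n_cons = len(c), len(b)
n = n_vars + n_cons

# Bipartite adjacency with self-loops, symmetrically normalised:
# A_hat = D^{-1/2} (A + I) D^{-1/2}.
adj = np.zeros((n, n))
rows, cols = np.nonzero(A)
adj[cols, n_vars + rows] = 1.0
adj[n_vars + rows, cols] = 1.0
adj += np.eye(n)
deg = adj.sum(axis=1)
adj_hat = adj / np.sqrt(np.outer(deg, deg))

# Node features: [own coefficient, node-type flag (0 = variable, 1 = constraint)].
x = np.zeros((n, 2))
x[:n_vars, 0] = c
x[n_vars:, 0] = b
x[n_vars:, 1] = 1.0

# One GCN-style propagation followed by a per-variable sigmoid score.
# Weights are random here; in practice they would be learned from data.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8))
w2 = rng.normal(size=(8,))
h = np.maximum(adj_hat @ x @ W1, 0.0)                # ReLU(A_hat X W1)
scores = 1.0 / (1.0 + np.exp(-(h[:n_vars] @ w2)))    # score ~ P(x_j = 1 at the optimum)
print(scores)
```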
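The predicted probabilities are then used to steer the search. The following minimal, solver-free sketch only mimics the spirit of Probabilistic Branching with guided Depth-first Search: variables are fixed in order of prediction confidence, trying the predicted value first and backtracking on infeasibility. The `feasible` callback and the toy knapsack constraint are illustrative assumptions; the actual PB-DFS is a configuration of a full Branch-and-Bound solver with LP relaxations and pruning.

```python
from typing import Callable, Dict, Optional, Sequence

def pb_dfs(probs: Sequence[float],
           feasible: Callable[[Dict[int, int]], bool]) -> Optional[Dict[int, int]]:
    """Guided depth-first dive over binary variables.

    probs[j] is the predicted probability that x_j = 1 at the optimum.
    Variables are fixed in order of prediction confidence (|p - 0.5|, largest
    first), and at each node the predicted value is tried before its flip.
    `feasible` is a user-supplied check on a (possibly partial) assignment.
    """
    order = sorted(range(len(probs)), key=lambda j: abs(probs[j] - 0.5), reverse=True)

    def dive(depth: int, fixed: Dict[int, int]) -> Optional[Dict[int, int]]:
        if not feasible(fixed):
            return None                      # prune and backtrack
        if depth == len(order):
            return dict(fixed)               # first feasible leaf found
        j = order[depth]
        preferred = 1 if probs[j] >= 0.5 else 0
        for value in (preferred, 1 - preferred):
            fixed[j] = value
            result = dive(depth + 1, fixed)
            if result is not None:
                return result
            del fixed[j]
        return None

    return dive(0, {})

# Toy usage: a single knapsack constraint 2*x0 + 3*x1 + 1*x2 <= 4.
weights, capacity = [2, 3, 1], 4
check = lambda fixed: sum(weights[j] for j, v in fixed.items() if v == 1) <= capacity
print(pb_dfs([0.9, 0.2, 0.7], check))
```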
Related papers
- Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z)
- Predict-Then-Optimize by Proxy: Learning Joint Models of Prediction and Optimization [59.386153202037086]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This approach can be inefficient and requires handcrafted, problem-specific rules for backpropagation through the optimization step.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by predictive models.
arXiv Detail & Related papers (2023-11-22T01:32:06Z)
- Taking the human out of decomposition-based optimization via artificial intelligence: Part I. Learning when to decompose [0.0]
We propose a graph classification approach for automatically determining whether to use a monolithic or a decomposition-based solution method.
It is shown how the learned classifier can be incorporated into existing structural mixed integer optimization solvers.
arXiv Detail & Related papers (2023-10-10T23:31:06Z)
- A Bi-Level Framework for Learning to Solve Combinatorial Optimization on Graphs [91.07247251502564]
We propose a hybrid approach to combine the best of the two worlds, in which a bi-level framework is developed with an upper-level learning method to optimize the graph.
Such a bi-level approach simplifies learning on the original hard CO problem and can effectively mitigate the demand for model capacity.
arXiv Detail & Related papers (2021-06-09T09:18:18Z)
- End-to-End Constrained Optimization Learning: A Survey [69.22203885491534]
The survey focuses on work that integrates solvers and optimization methods with machine learning architectures.
These approaches hold the promise of developing new hybrid machine learning and optimization methods to predict fast, approximate, structured solutions to problems and to enable logical inference.
arXiv Detail & Related papers (2021-03-30T14:19:30Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- First-Order Methods for Convex Optimization [2.578242050187029]
First-order methods have the potential to provide low accuracy solutions at low computational complexity.
We give complete proofs for various key results, and highlight the unifying aspects of several optimization algorithms.
arXiv Detail & Related papers (2021-01-04T13:03:38Z)
- Divide and Learn: A Divide and Conquer Approach for Predict+Optimize [50.03608569227359]
The predict+optimize problem combines machine learning of problem coefficients with an optimization problem that uses the predicted coefficients.
We show how to directly express the loss of the optimization problem in terms of the predicted coefficients as a piece-wise linear function.
We propose a novel divide and conquer algorithm to tackle optimization problems without this restriction and predict its coefficients using the optimization loss.
arXiv Detail & Related papers (2020-12-04T00:26:56Z)
- Contrastive Losses and Solution Caching for Predict-and-Optimize [19.31153168397003]
We use a Noise Contrastive approach to motivate a family of surrogate loss functions.
We address a major bottleneck of all predict-and-optimize approaches.
We show that even a very slow growth rate is enough to match the quality of state-of-the-art methods.
arXiv Detail & Related papers (2020-11-10T19:09:12Z)
- A Hybrid Framework Using a QUBO Solver For Permutation-Based Combinatorial Optimization [5.460573052311485]
We propose a hybrid framework to solve large-scale permutation-based problems using a high-performance quadratic unconstrained binary optimization solver.
We propose techniques to overcome the challenges in using a QUBO solver that typically comes with a limited number of bits.
arXiv Detail & Related papers (2020-09-27T07:15:25Z)
- Gumbel-softmax-based Optimization: A Simple General Framework for Optimization Problems on Graphs [5.486093983007419]
We propose a simple, fast, and general algorithm framework based on advanced automatic differentiation technique empowered by deep learning frameworks.
High-quality solutions can be obtained in much less time than with traditional approaches; a minimal sketch of this relaxation idea appears after this list.
arXiv Detail & Related papers (2020-04-14T14:11:00Z)
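For the Gumbel-softmax entry above, the sketch below illustrates the general relaxation idea on a toy max-cut instance: node assignments are parameterised by logits, sampled differentiably with PyTorch's `gumbel_softmax`, and improved by gradient descent on a soft cut objective. The instance, temperature, and objective are assumptions made for illustration and do not reproduce the paper's actual framework or benchmarks.

```python
import torch
import torch.nn.functional as F

# Toy max-cut instance: a 5-node cycle.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
n = 5

# Each node gets 2-class logits (which side of the cut), optimised directly.
logits = torch.zeros(n, 2, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)

for step in range(300):
    # Differentiable near-one-hot samples via the Gumbel-softmax relaxation.
    y = F.gumbel_softmax(logits, tau=0.5, hard=False)
    # Soft cut value: probability that the endpoints of each edge disagree.
    cut = sum(y[i, 0] * y[j, 1] + y[i, 1] * y[j, 0] for i, j in edges)
    loss = -cut                      # maximise the relaxed cut
    opt.zero_grad()
    loss.backward()
    opt.step()

assignment = logits.argmax(dim=1)    # discretise after optimisation
best_cut = sum(int(assignment[i] != assignment[j]) for i, j in edges)
print(assignment.tolist(), best_cut)
```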
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.