Learning To Dive In Branch And Bound
- URL: http://arxiv.org/abs/2301.09943v1
- Date: Tue, 24 Jan 2023 12:01:45 GMT
- Title: Learning To Dive In Branch And Bound
- Authors: Max B. Paulus, Andreas Krause
- Abstract summary: We propose L2Dive to learn specific diving heuristics with graph neural networks.
We train generative models to predict variable assignments and leverage the duality of linear programs to make diving decisions.
- Score: 95.13209326119153
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Primal heuristics are important for solving mixed integer linear programs,
because they find feasible solutions that facilitate branch and bound search. A
prominent group of primal heuristics are diving heuristics. They iteratively
modify and resolve linear programs to conduct a depth-first search from any
node in the search tree. Existing divers rely on generic decision rules that
fail to exploit structural commonality between similar problem instances that
often arise in practice. Therefore, we propose L2Dive to learn specific diving
heuristics with graph neural networks: We train generative models to predict
variable assignments and leverage the duality of linear programs to make diving
decisions based on the model's predictions. L2Dive is fully integrated into the
open-source solver SCIP. We find that L2Dive outperforms standard divers to
find better feasible solutions on a range of combinatorial optimization
problems. For real-world applications from server load balancing and neural
network verification, L2Dive improves the primal-dual integral by up to 7%
(35%) on average over a tuned (default) solver baseline and reduces average
solving time by 20% (29%).
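The abstract describes the mechanics only at a high level. Below is a minimal Python sketch of a model-guided dive, assuming a hypothetical predict_assignment stand-in for the learned GNN and using SciPy's LP solver in place of SCIP's diving framework; it illustrates the predict-fix-resolve loop, not the paper's implementation, and it omits the duality-based decision rule L2Dive uses.

```python
import numpy as np
from scipy.optimize import linprog

def predict_assignment(n_vars):
    """Hypothetical stand-in for L2Dive's GNN: per-variable
    probabilities that each binary variable takes value 1 in a
    good feasible solution."""
    rng = np.random.default_rng(0)
    return rng.uniform(0.0, 1.0, size=n_vars)

def guided_dive(c, A_ub, b_ub, max_iters=100, tol=1e-6):
    """Depth-first dive on min c @ x s.t. A_ub @ x <= b_ub, x in {0,1}^n.

    Repeatedly solve the LP relaxation, fix the fractional variable
    about which the model is most confident to its predicted value,
    and resolve. Aborts if the LP becomes infeasible; a real diver
    would backtrack instead.
    """
    n = len(c)
    probs = predict_assignment(n)
    bounds = [(0.0, 1.0)] * n
    for _ in range(max_iters):
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        if res.status != 0:
            return None  # infeasible or failed LP: dive aborted
        frac = [i for i in range(n) if tol < res.x[i] < 1.0 - tol]
        if not frac:
            return res.x  # all binaries (near-)integral: feasible point found
        # Most confident fractional variable: probability furthest from 0.5.
        i = max(frac, key=lambda j: abs(probs[j] - 0.5))
        v = float(probs[i] >= 0.5)
        bounds[i] = (v, v)  # fix the variable and resolve: the "dive" step
    return None
```

A production diver such as L2Dive's runs inside SCIP, warm-starts each LP resolve, and backs up the search tree on infeasibility rather than aborting.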
Related papers
- LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce popular positive linear satisfiability constraints into neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions.
arXiv Detail & Related papers (2024-07-18T22:05:21Z)
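The classic Sinkhorn algorithm that LinSATNet extends can be stated in a few lines. The NumPy sketch below shows plain Sinkhorn normalization toward given row and column marginals; the paper's extension to jointly encoding multiple sets of marginals is not reproduced here.

```python
import numpy as np

def sinkhorn(M, r, c, n_iters=100):
    """Classic Sinkhorn iteration: rescale a positive matrix M so its
    row sums approach r and its column sums approach c (requires
    r.sum() == c.sum()). LinSATNet builds a differentiable layer on
    an extension of this; here only the classic base case is shown.
    """
    P = np.asarray(M, dtype=float)
    for _ in range(n_iters):
        P = P * (r / P.sum(axis=1))[:, None]   # match row marginals
        P = P * (c / P.sum(axis=0))[None, :]   # match column marginals
    return P

# Example: project a random positive matrix onto (approximately)
# doubly stochastic matrices, i.e. uniform marginals.
M = np.random.rand(4, 4) + 1e-3
P = sinkhorn(M, r=np.ones(4), c=np.ones(4))
```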
- Learning Branching Heuristics from Graph Neural Networks [1.4660170768702356]
We first propose a new graph neural network (GNN) model designed using a probabilistic method.
Our approach introduces a new way of applying GNNs to enhance the classical backtracking algorithm used in AI.
arXiv Detail & Related papers (2022-11-26T00:01:01Z)
- Towards Better Out-of-Distribution Generalization of Neural Algorithmic Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z)
- Learning to Optimize Permutation Flow Shop Scheduling via Graph-based Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which accelerates convergence and makes training more stable and accurate.
Our model uses only 37% as many network parameters as the baseline, and its average solution gap to the expert solutions decreases from 6.8% to 1.3%.
arXiv Detail & Related papers (2022-10-31T09:46:26Z)
- DOGE-Train: Discrete Optimization on GPU with End-to-end Training [28.795080637690095]
We present a fast, scalable, data-driven approach for solving relaxations of 0-1 integer linear programs.
We use a combination of graph neural networks (GNN) and the Lagrange decomposition based algorithm FastDOG.
arXiv Detail & Related papers (2022-05-23T21:09:41Z)
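FastDOG itself is a specialized GPU method, but the Lagrangean idea it builds on is easy to illustrate. The sketch below is a generic projected-subgradient lower bound for a 0-1 program with dualized constraints, offered only as a simple stand-in for the actual decomposition algorithm.

```python
import numpy as np

def lagrangian_lower_bound(c, A, b, n_iters=200, step0=1.0):
    """Lower bound for min c @ x s.t. A @ x <= b, x in {0,1}^n,
    by dualizing the constraints with multipliers lam >= 0.

    For fixed lam, the inner problem min_x (c + A.T @ lam) @ x - lam @ b
    decomposes per coordinate: x_i = 1 iff its coefficient is negative.
    The multipliers follow projected subgradient ascent with a
    diminishing step size.
    """
    m, n = A.shape
    lam = np.zeros(m)
    best = -np.inf
    for t in range(1, n_iters + 1):
        red = c + A.T @ lam          # per-variable costs under current lam
        x = (red < 0).astype(float)  # coordinate-wise inner minimizer
        bound = red @ x - lam @ b    # dual function value: a lower bound
        best = max(best, bound)
        g = A @ x - b                # subgradient of the dual function
        lam = np.maximum(0.0, lam + (step0 / np.sqrt(t)) * g)
    return best
```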
- What's Wrong with Deep Learning in Tree Search for Combinatorial Optimization [8.879790406465556]
We present an open-source benchmark suite for the NP-hard Maximum Independent Set problem, in both its weighted and unweighted variants.
We also conduct an in-depth analysis of the popular guided tree search algorithm by Li et al. [NeurIPS 2018], testing various configurations on small and large synthetic and real-world graphs.
We show that the graph convolution network used in the tree search does not learn a meaningful representation of the solution structure, and can in fact be replaced by random values.
arXiv Detail & Related papers (2022-01-25T17:37:34Z)
- Learning from Images: Proactive Caching with Parallel Convolutional Neural Networks [94.85780721466816]
A novel framework for proactive caching is proposed in this paper.
It combines model-based optimization with data-driven techniques by transforming an optimization problem into a grayscale image.
Numerical results show that the proposed scheme reduces computation time by 71.6% with only a 0.8% additional performance cost.
arXiv Detail & Related papers (2021-08-15T21:32:47Z)
- A generalized quadratic loss for SVM and Deep Neural Networks [0.0]
We consider several supervised binary classification tasks and a regression task, where SVMs and deep neural networks currently exhibit the best generalization performance.
We extend the work in [3] on a generalized quadratic loss that exploits pattern correlations to concentrate learning on input-space regions where patterns are more densely distributed.
arXiv Detail & Related papers (2021-02-15T15:49:08Z)
- Solving Mixed Integer Programs Using Neural Networks [57.683491412480635]
This paper applies learning to the two key sub-tasks of a MIP solver: generating a high-quality joint variable assignment, and bounding the gap in objective value between that assignment and an optimal one.
Our approach constructs two corresponding neural network-based components, Neural Diving and Neural Branching, to use in a base MIP solver such as SCIP.
We evaluate our approach on six diverse real-world datasets, including two Google production datasets and MIPLIB, by training separate neural networks on each.
arXiv Detail & Related papers (2020-12-23T09:33:11Z)
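The Neural Diving component resembles L2Dive's assignment prediction. As a rough illustration only, here is a hypothetical fix-and-solve step using SciPy's milp interface: binaries the (assumed) model predicts with high confidence are fixed via their bounds, and the remaining sub-MIP is solved. The function name and confidence threshold are assumptions, not the paper's; the actual system is built around SCIP and learned GNN predictors.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def fix_and_solve(c, A_ub, b_ub, probs, conf=0.9):
    """Neural-Diving-style warm start (illustrative only):
    fix the binary variables predicted with confidence >= conf,
    then solve the reduced MIP over the remaining free variables.

    Problem: min c @ x  s.t.  A_ub @ x <= b_ub,  x in {0,1}^n.
    probs[i] is the (hypothetical) model's probability that x_i = 1.
    """
    n = len(c)
    lb, ub = np.zeros(n), np.ones(n)
    sure = np.abs(probs - 0.5) >= conf - 0.5      # p >= 0.9 or p <= 0.1
    lb[sure] = ub[sure] = np.round(probs[sure])   # fix confident variables
    res = milp(
        c,
        constraints=LinearConstraint(A_ub, ub=b_ub),
        integrality=np.ones(n),   # all variables integer
        bounds=Bounds(lb, ub),
    )
    return res.x if res.success else None
```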