Learning a Large Neighborhood Search Algorithm for Mixed Integer
Programs
- URL: http://arxiv.org/abs/2107.10201v2
- Date: Thu, 22 Jul 2021 20:37:42 GMT
- Title: Learning a Large Neighborhood Search Algorithm for Mixed Integer
Programs
- Authors: Nicolas Sonnerat, Pengming Wang, Ira Ktena, Sergey Bartunov, Vinod
Nair
- Abstract summary: We consider a learning-based LNS approach for mixed integer programs (MIPs).
We train a Neural Diving model to represent a probability distribution over assignments, which, together with an off-the-shelf MIP solver, generates an initial assignment.
We train a Neural Neighborhood Selection policy to select a search neighborhood at each step, which is searched using a MIP solver to find the next assignment.
- Score: 6.084888301899142
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Neighborhood Search (LNS) is a combinatorial optimization heuristic
that starts with an assignment of values for the variables to be optimized, and
iteratively improves it by searching a large neighborhood around the current
assignment. In this paper we consider a learning-based LNS approach for mixed
integer programs (MIPs). We train a Neural Diving model to represent a
probability distribution over assignments, which, together with an
off-the-shelf MIP solver, generates an initial assignment. Formulating the
subsequent search steps as a Markov Decision Process, we train a Neural
Neighborhood Selection policy to select a search neighborhood at each step,
which is searched using a MIP solver to find the next assignment. The policy
network is trained using imitation learning. We propose a target policy for
imitation that, given enough compute resources, is guaranteed to select the
neighborhood containing the optimal next assignment amongst all possible
choices for the neighborhood of a specified size. Our approach matches or
outperforms all the baselines on five real-world MIP datasets with large-scale
instances from diverse applications, including two production applications at
Google. It achieves $2\times$ to $37.8\times$ better average primal gap than
the best baseline on three of the datasets at large running times.
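To make the two-stage pipeline concrete, here is a minimal sketch of the loop described in the abstract. Everything in it is illustrative: `solve_mip` (a wrapper around an off-the-shelf solver such as SCIP), `diving_model`, and `selection_policy` are hypothetical stand-ins for the paper's components, not the authors' code.

```python
def neural_lns(mip, diving_model, selection_policy,
               num_steps=50, time_limit=60.0, eta=0.2):
    # Stage 1: Neural Diving proposes values for a subset of the integer
    # variables; the solver completes them into a full feasible assignment.
    partial = diving_model.sample_assignment(mip)          # {var: value}
    current = solve_mip(mip, fixed_vars=partial, time_limit=time_limit)

    # Stage 2: iterative LNS, with each step treated as an MDP transition.
    for _ in range(num_steps):
        # The Neural Neighborhood Selection policy chooses which variables
        # to unfix (a fraction eta of the integer variables); fixing the
        # rest to their current values defines the search neighborhood.
        unfixed = selection_policy.select_variables(mip, current.values,
                                                    fraction=eta)
        fixed = {v: current.values[v]
                 for v in mip.integer_variables if v not in unfixed}

        # Search the neighborhood by re-solving the restricted MIP.
        candidate = solve_mip(mip, fixed_vars=fixed, time_limit=time_limit)
        if candidate.objective < current.objective:        # minimization
            current = candidate
    return current
```

Per the abstract, `selection_policy` is trained by imitating a target policy that, given enough compute, selects the neighborhood containing the optimal next assignment among all neighborhoods of the given size.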
Related papers
- PECANN: Parallel Efficient Clustering with Graph-Based Approximate Nearest Neighbor Search [8.15681999722805]
This paper studies density-based clustering of point sets.
It unifies the different variants of density peaks clustering into a single framework, PECANN.
We implement five clustering algorithms with PECANN and evaluate them on synthetic and real-world datasets with up to 1.28 million points and up to 1024 dimensions on a 30-core machine with two-way hyper-threading.
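For reference, here is a brute-force sketch of the density peaks clustering that PECANN builds on; the density definition and names are my assumptions. The expensive inner step, finding each point's nearest neighbor of higher density, is exactly what PECANN accelerates with graph-based approximate nearest neighbor search.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def density_peaks(points, k=10, n_clusters=3):
    """Sketch of density peaks clustering (brute-force variant)."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    # Density: inverse mean distance to the k nearest neighbors.
    dists, _ = NearestNeighbors(n_neighbors=k + 1).fit(points).kneighbors(points)
    density = 1.0 / (dists[:, 1:].mean(axis=1) + 1e-12)

    # For each point, find its nearest neighbor of strictly higher density.
    order = np.argsort(-density)            # indices, densest first
    parent = np.full(n, -1)
    delta = np.full(n, np.inf)
    for rank, i in enumerate(order[1:], start=1):
        higher = order[:rank]               # all denser points
        d = np.linalg.norm(points[higher] - points[i], axis=1)
        j = d.argmin()
        delta[i], parent[i] = d[j], higher[j]
    delta[order[0]] = delta[np.isfinite(delta)].max()

    # Peaks maximize density * delta; others inherit their parent's label.
    peaks = np.argsort(-density * delta)[:n_clusters]
    label = np.full(n, -1)
    label[peaks] = np.arange(n_clusters)
    for i in order:                          # densest first, parents labeled
        if label[i] < 0:
            label[i] = label[parent[i]]
    return label
```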
arXiv Detail & Related papers (2023-12-06T22:43:50Z)
- Threshold-aware Learning to Generate Feasible Solutions for Mixed Integer Programs [5.28005598366543]
Neural Diving (ND) is a learning-based approach for generating partial assignments of discrete variables in Mixed Integer Programs (MIPs).
We introduce a post-hoc method and a learning-based approach for optimizing the coverage.
Experimental results demonstrate that learning a deep neural network to estimate the coverage for finding high-quality feasible solutions achieves state-of-the-art performance on the NeurIPS ML4CO datasets.
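Coverage here is the fraction of discrete variables that Neural Diving fixes before handing the residual problem to a solver. A hypothetical sketch of the thresholding step being tuned (the confidence-based selection rule and all names are assumptions, not the cited paper's exact method):

```python
import numpy as np

def fix_by_coverage(probs, coverage):
    """Turn per-variable probabilities P[x_i = 1] predicted by Neural Diving
    into a partial assignment fixing a `coverage` fraction of the binary
    variables, most confident first."""
    p = np.asarray(probs, dtype=float)
    confidence = np.abs(p - 0.5)                 # distance from "unsure"
    n_fix = int(coverage * p.size)
    chosen = np.argsort(-confidence)[:n_fix]     # most confident indices
    return {int(i): int(p[i] > 0.5) for i in chosen}

# e.g. fix half of four variables; the cited paper learns this threshold
partial = fix_by_coverage([0.97, 0.52, 0.08, 0.61], coverage=0.5)
print(partial)   # {0: 1, 2: 0}
```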
arXiv Detail & Related papers (2023-08-01T07:03:16Z)
- OFA$^2$: A Multi-Objective Perspective for the Once-for-All Neural Architecture Search [79.36688444492405]
Once-for-All (OFA) is a Neural Architecture Search (NAS) framework designed to address the problem of searching for efficient architectures for devices with different resource constraints.
We go one step further in the search for efficiency by explicitly framing the search stage as a multi-objective optimization problem.
arXiv Detail & Related papers (2023-03-23T21:30:29Z)
- Learning To Dive In Branch And Bound [95.13209326119153]
We propose L2Dive to learn specific diving heuristics with graph neural networks.
We train generative models to predict variable assignments and leverage the duality of linear programs to make diving decisions.
arXiv Detail & Related papers (2023-01-24T12:01:45Z)
- Multi-Task Learning for Sparsity Pattern Heterogeneity: Statistical and Computational Perspectives [10.514866749547558]
We consider a problem in Multi-Task Learning (MTL) where multiple linear models are jointly trained on a collection of datasets.
A key novelty of our framework is that it allows the sparsity pattern of regression coefficients and the values of non-zero coefficients to differ across tasks.
Our methods encourage models to share information across tasks by separately encouraging (1) coefficient supports and/or (2) nonzero coefficient values to be similar.
This allows models to borrow strength during variable selection even when non-zero coefficient values differ across tasks.
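A plausible way to write such an objective, consistent with the summary but not necessarily the paper's exact formulation (the two penalties and the weights $\lambda_1, \lambda_2$ are my assumptions): for tasks $k = 1, \dots, K$ with data $(X_k, y_k)$ and coefficients $\beta_k \in \mathbb{R}^p$,

$$\min_{\beta_1,\dots,\beta_K}\; \sum_{k=1}^{K} \tfrac{1}{2}\,\lVert y_k - X_k \beta_k \rVert_2^2 \;+\; \lambda_1 \sum_{j=1}^{p} \mathbb{1}\!\left[\exists\, k:\ \beta_{kj} \neq 0\right] \;+\; \lambda_2 \sum_{j=1}^{p} \sum_{k < k'} \left(\beta_{kj} - \beta_{k'j}\right)^2$$

The $\lambda_1$ term charges once per feature used by any task, nudging tasks toward a common support, while the $\lambda_2$ term pulls nonzero values toward each other; either can be switched off, matching the "and/or" above.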
arXiv Detail & Related papers (2022-12-16T19:52:25Z)
- Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy by interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
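A toy sketch in that spirit is below: fit a Gaussian process surrogate to sampled returns and step along the gradient of its posterior mean rather than a raw perturbation estimate. This is not the cited paper's algorithm (which also models gradient observations directly); the objective, hyperparameters, and update rule here are placeholders.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def toy_return(theta):
    """Stand-in for an episodic RL return; the real objective would be the
    expected return of a policy with parameters theta."""
    return -np.sum((theta - 1.0) ** 2)

rng = np.random.default_rng(0)
theta, X, y = np.zeros(2), [], []
for _ in range(25):
    # Evaluate a few local perturbations, as plain random search would.
    for _ in range(4):
        cand = theta + 0.3 * rng.standard_normal(theta.shape)
        X.append(cand)
        y.append(toy_return(cand))
    # Fit a GP surrogate to all returns seen so far.
    gp = GaussianProcessRegressor(kernel=RBF(0.5), normalize_y=True)
    gp.fit(np.array(X), np.array(y))
    # Ascend the posterior-mean gradient (finite differences for brevity).
    eps = 1e-4
    grad = np.array([(gp.predict([theta + eps * e])[0] -
                      gp.predict([theta - eps * e])[0]) / (2 * eps)
                     for e in np.eye(theta.size)])
    theta = theta + 0.1 * grad
```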
arXiv Detail & Related papers (2021-06-22T16:07:02Z)
- Solving Mixed Integer Programs Using Neural Networks [57.683491412480635]
This paper applies learning to the two key sub-tasks of a MIP solver, generating a high-quality joint variable assignment, and bounding the gap in objective value between that assignment and an optimal one.
Our approach constructs two corresponding neural network-based components, Neural Diving and Neural Branching, to use in a base MIP solver such as SCIP.
We evaluate our approach on six diverse real-world datasets, including two Google production datasets and MIPLIB, by training separate neural networks on each.
arXiv Detail & Related papers (2020-12-23T09:33:11Z)
- Online Model Selection for Reinforcement Learning with Function Approximation [50.008542459050155]
We present a meta-algorithm that adapts to the optimal complexity with $\tilde{O}(L^{5/6} T^{2/3})$ regret.
We also show that the meta-algorithm automatically admits significantly improved instance-dependent regret bounds.
arXiv Detail & Related papers (2020-11-19T10:00:54Z)
- Deep Retrieval: Learning A Retrievable Structure for Large-Scale Recommendations [21.68175843347951]
We present Deep Retrieval (DR), which learns a retrievable structure directly from user-item interaction data.
DR is among the first non-ANN algorithms successfully deployed at the scale of hundreds of millions of items for industrial recommendation systems.
arXiv Detail & Related papers (2020-07-12T06:23:51Z)
- MOPS-Net: A Matrix Optimization-driven Network for Task-Oriented 3D Point Cloud Downsampling [86.42733428762513]
MOPS-Net is a novel interpretable deep learning-based method that casts task-oriented 3D point cloud downsampling as a matrix optimization problem.
We show that MOPS-Net can achieve favorable performance against state-of-the-art deep learning-based methods over various tasks.
arXiv Detail & Related papers (2020-05-01T14:01:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.