Searching Large Neighborhoods for Integer Linear Programs with
Contrastive Learning
- URL: http://arxiv.org/abs/2302.01578v1
- Date: Fri, 3 Feb 2023 07:15:37 GMT
- Title: Searching Large Neighborhoods for Integer Linear Programs with
Contrastive Learning
- Authors: Taoan Huang, Aaron Ferber, Yuandong Tian, Bistra Dilkina, Benoit
Steiner
- Abstract summary: Integer Linear Programs (ILPs) are powerful tools for modeling and solving a large number of optimization problems.
Large Neighborhood Search (LNS), as a heuristic algorithm, can find high quality solutions to ILPs faster than Branch and Bound.
We propose a novel approach, CL-LNS, that delivers state-of-the-art anytime performance on several ILP benchmarks measured by metrics such as the primal gap and the primal integral.
- Score: 39.40838358438744
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Integer Linear Programs (ILPs) are powerful tools for modeling and solving a
large number of combinatorial optimization problems. Recently, it has been
shown that Large Neighborhood Search (LNS), as a heuristic algorithm, can find
high quality solutions to ILPs faster than Branch and Bound. However, how to
find the right heuristics to maximize the performance of LNS remains an open
problem. In this paper, we propose a novel approach, CL-LNS, that delivers
state-of-the-art anytime performance on several ILP benchmarks measured by
metrics including the primal gap, the primal integral, survival rates and the
best performing rate. Specifically, CL-LNS collects positive and negative
solution samples from an expert heuristic that is slow to compute and learns a
new one with a contrastive loss. We use graph attention networks and a richer
set of features to further improve its performance.
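To make the contrastive training recipe in the abstract concrete, here is a minimal sketch, assuming nothing beyond what the abstract states: positive and negative neighborhood samples collected from a slow expert heuristic are contrasted with an InfoNCE-style loss. The class and function names, the set-scoring rule, and the assumption that a graph attention encoder has already produced per-variable embeddings are all illustrative, not the authors' released code.

```python
# Hedged sketch of contrastive learning for an LNS destroy heuristic.
# A graph attention network over the variable-constraint graph would
# normally produce `var_embeddings`; here they are assumed given.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeighborhoodScorer(nn.Module):
    """Scores each ILP variable for inclusion in the LNS destroy set."""
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(embed_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, 1),
        )

    def forward(self, var_embeddings: torch.Tensor) -> torch.Tensor:
        # var_embeddings: (num_vars, embed_dim) -> per-variable logits
        return self.head(var_embeddings).squeeze(-1)

def contrastive_loss(logits: torch.Tensor,
                     positive_sets: torch.Tensor,
                     negative_sets: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style loss: neighborhoods suggested by the expert heuristic
    (positives) should score higher than sampled poor neighborhoods
    (negatives). Each *_sets tensor is a (num_samples, num_vars) 0/1 mask."""
    def set_score(masks: torch.Tensor) -> torch.Tensor:
        # Average the per-variable logits inside each candidate set.
        return (masks * logits).sum(dim=1) / masks.sum(dim=1).clamp(min=1)

    pos = set_score(positive_sets) / temperature   # (P,)
    neg = set_score(negative_sets) / temperature   # (N,)
    # For each positive, contrast it against all negatives; the positive
    # occupies index 0 of each row, so the target class is always 0.
    scores = torch.cat(
        [pos.unsqueeze(1), neg.unsqueeze(0).expand(len(pos), -1)], dim=1)
    targets = torch.zeros(len(pos), dtype=torch.long)
    return F.cross_entropy(scores, targets)
```

At inference time, such a scorer would rank variables and the top-scoring subset would be destroyed and re-optimized by the ILP solver; the exact selection rule is an assumption here, not taken from the abstract.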
Related papers
- Enhancing GNNs Performance on Combinatorial Optimization by Recurrent Feature Update [0.09986418756990156]
We introduce a novel algorithm, denoted hereafter as QRF-GNN, leveraging the power of GNNs to efficiently solve combinatorial optimization (CO) problems.
It relies on unsupervised learning by minimizing a loss function derived from the QUBO relaxation (see the sketch after this list).
Results of experiments show that QRF-GNN drastically surpasses existing learning-based approaches and is comparable to state-of-the-art conventional heuristics.
arXiv Detail & Related papers (2024-07-23T13:34:35Z) - Discovering Preference Optimization Algorithms with and for Large Language Models [50.843710797024805]
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs.
We perform objective discovery to automatically discover new state-of-the-art preference optimization algorithms without (expert) human intervention.
Experiments demonstrate the state-of-the-art performance of DiscoPOP, a novel algorithm that adaptively blends logistic and exponential losses.
arXiv Detail & Related papers (2024-06-12T16:58:41Z) - Learning To Dive In Branch And Bound [95.13209326119153]
We propose L2Dive to learn specific diving heuristics with graph neural networks.
We train generative models to predict variable assignments and leverage the duality of linear programs to make diving decisions.
arXiv Detail & Related papers (2023-01-24T12:01:45Z) - Local Branching Relaxation Heuristics for Integer Linear Programs [39.40838358438744]
Large Neighborhood Search (LNS) is a popular algorithm for solving combinatorial optimization problems (COPs).
In this paper, we focus on designing effective and efficient heuristics in LNS for integer linear programs (ILPs).
Our proposed heuristics, LB-RELAX and its variants, use the linear programming relaxation of LB to select neighborhoods.
arXiv Detail & Related papers (2022-12-15T22:53:09Z) - Learning Branching Heuristics from Graph Neural Networks [1.4660170768702356]
We first propose a new graph neural network (GNN) model designed using a probabilistic method.
Our approach introduces a new way of applying GNNs towards enhancing the classical backtracking algorithm used in AI.
arXiv Detail & Related papers (2022-11-26T00:01:01Z) - Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics based on minimizing the population loss that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z) - Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z) - A General Large Neighborhood Search Framework for Solving Integer Linear
Programs [46.62993477453986]
We focus on solving integer programs, and ground our approach in the large neighborhood search (LNS) paradigm.
We show that our LNS framework can significantly outperform state-of-the-art commercial solvers such as Gurobi.
arXiv Detail & Related papers (2020-03-29T23:08:14Z) - Learning fine-grained search space pruning and heuristics for
combinatorial optimization [5.72274610208488]
We propose a framework for leveraging machine learning techniques to scale-up exact optimization algorithms.
Our framework learns the relatively simpler task of pruning the elements in order to reduce the size of the problem instances.
We show that our framework can prune a large fraction of the input graph and still detect almost all of the maximum cliques.
arXiv Detail & Related papers (2020-01-05T13:10:39Z)
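As referenced in the QRF-GNN entry above, unsupervised GNN approaches of that kind typically train by minimizing a differentiable loss derived from the QUBO relaxation. The sketch below describes that standard formulation under stated assumptions; it is not the QRF-GNN authors' code, and the function names are illustrative.

```python
# Hedged sketch of the differentiable QUBO-relaxation loss: binary
# variables x_i in {0, 1} are relaxed to probabilities p_i in [0, 1]
# produced by a GNN (e.g. via a sigmoid output), and p^T Q p is minimized.
import torch

def qubo_relaxation_loss(p: torch.Tensor, Q: torch.Tensor) -> torch.Tensor:
    """p: (n,) node probabilities from a GNN.
    Q: (n, n) QUBO matrix encoding the CO problem (e.g. MaxCut, MIS)."""
    return p @ Q @ p

# After training, a binary assignment can be recovered by rounding:
#   x = (p > 0.5).float()
```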