Local Branching Relaxation Heuristics for Integer Linear Programs
- URL: http://arxiv.org/abs/2212.08183v2
- Date: Wed, 31 May 2023 18:16:01 GMT
- Title: Local Branching Relaxation Heuristics for Integer Linear Programs
- Authors: Taoan Huang, Aaron Ferber, Yuandong Tian, Bistra Dilkina, Benoit
Steiner
- Abstract summary: Large Neighborhood Search (LNS) is a popular algorithm for solving combinatorial optimization problems (COP).
In this paper, we focus on designing effective and efficient heuristics in LNS for integer linear programs (ILP).
Our proposed heuristics, LB-RELAX and its variants, use the linear programming relaxation of LB to select neighborhoods.
- Score: 39.40838358438744
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Large Neighborhood Search (LNS) is a popular heuristic algorithm for solving
combinatorial optimization problems (COP). It starts with an initial solution
to the problem and iteratively improves it by searching a large neighborhood
around the current best solution. LNS relies on heuristics to select
neighborhoods to search in. In this paper, we focus on designing effective and
efficient heuristics in LNS for integer linear programs (ILP) since a wide
range of COPs can be represented as ILPs. Local Branching (LB) is a heuristic
that selects the neighborhood that leads to the largest improvement over the
current solution in each iteration of LNS. LB is often slow since it needs to
solve an ILP of the same size as input. Our proposed heuristics, LB-RELAX and
its variants, use the linear programming relaxation of LB to select
neighborhoods. Empirically, LB-RELAX and its variants compute as effective
neighborhoods as LB but run faster. They achieve state-of-the-art anytime
performance on several ILP benchmarks.
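The abstract describes the core idea: vanilla LB picks the best neighborhood by solving a full-size ILP with a Hamming-distance ("local branching") constraint around the incumbent, while LB-RELAX solves only the LP relaxation of that subproblem and uses the fractional solution to decide which variables to destroy. The toy sketch below illustrates this on a small hypothetical knapsack-style ILP using SciPy; the instance data, incumbent, and neighborhood size `k` are made up for illustration, and the paper's actual variants may differ in how they score variables.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy ILP: minimize c @ x  s.t.  A @ x <= b,  x binary.
c = np.array([-5.0, -4.0, -3.0, -6.0, -2.0])   # maximize value -> minimize -value
A = np.array([[2.0, 3.0, 1.0, 4.0, 2.0]])      # single capacity constraint
b = np.array([6.0])

x_inc = np.array([1.0, 0.0, 1.0, 0.0, 1.0])    # assumed feasible incumbent solution
k = 2                                          # local-branching neighborhood size

# Local branching constraint: Hamming distance from the incumbent <= k, i.e.
#   sum_{j: x_inc_j=1} (1 - x_j) + sum_{j: x_inc_j=0} x_j <= k,
# rewritten as a single linear row:  (1 - 2*x_inc) @ x <= k - sum(x_inc).
lb_row = 1.0 - 2.0 * x_inc
lb_rhs = k - x_inc.sum()

# LB-RELAX idea: solve only the LP relaxation of the LB subproblem (fast),
# instead of the full ILP that vanilla Local Branching would solve.
res = linprog(c,
              A_ub=np.vstack([A, lb_row]),
              b_ub=np.append(b, lb_rhs),
              bounds=[(0.0, 1.0)] * len(c),
              method="highs")
x_lp = res.x

# Select the neighborhood: destroy (unfix) the k variables whose LP value
# moved furthest from the incumbent; the rest stay fixed for the repair step,
# which re-optimizes only the destroyed variables with an ILP solver.
destroy = np.argsort(-np.abs(x_lp - x_inc))[:k]
print("LP solution:", np.round(x_lp, 3))
print("variables to destroy:", sorted(destroy.tolist()))
```

Because the LP relaxation shares the structure of the LB subproblem but drops integrality, the selected neighborhood tends to contain the variables LB itself would flip, at a fraction of the cost of solving the full-size ILP.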
Related papers
- Learning-Enhanced Neighborhood Selection for the Vehicle Routing Problem with Time Windows [0.0]
We propose to integrate machine learning (ML) into Large Neighborhood Search (LNS) to decide which parts of the solution should be destroyed and repaired in each iteration of LNS.
Our approach is universally applicable, i.e., it can be applied to any LNS algorithm to amplify the workings of the destroy algorithm.
arXiv Detail & Related papers (2024-03-13T12:08:27Z) - Adaptive Anytime Multi-Agent Path Finding Using Bandit-Based Large
Neighborhood Search [30.364955687049292]
State-of-the-art anytime MAPF is based on Large Neighborhood Search (LNS)
We propose Bandit-based Adaptive LArge Neighborhood search Combined with Exploration (BALANCE)
We empirically demonstrate cost improvements of at least 50% compared to state-of-the-art anytime MAPF in large-scale scenarios.
arXiv Detail & Related papers (2023-12-28T01:24:42Z) - Searching Large Neighborhoods for Integer Linear Programs with
Contrastive Learning [39.40838358438744]
Integer Linear Programs (ILPs) are powerful tools for modeling and solving a large number of combinatorial optimization problems.
Large Neighborhood Search (LNS), as a heuristic algorithm, can find high-quality solutions to ILPs faster than Branch and Bound.
We propose a novel approach, CL-LNS, that delivers state-of-the-art anytime performance on several ILP benchmarks across multiple metrics.
arXiv Detail & Related papers (2023-02-03T07:15:37Z) - Learning To Dive In Branch And Bound [95.13209326119153]
We propose L2Dive to learn specific diving heuristics with graph neural networks.
We train generative models to predict variable assignments and leverage the duality of linear programs to make diving decisions.
arXiv Detail & Related papers (2023-01-24T12:01:45Z) - Reinforcement Learning for Branch-and-Bound Optimisation using
Retrospective Trajectories [72.15369769265398]
Machine learning has emerged as a promising paradigm for branching.
We propose retro branching; a simple yet effective approach to RL for branching.
We outperform the current state-of-the-art RL branching algorithm by 3-5x and come within 20% of the best IL method's performance on MILPs with 500 constraints and 1000 variables.
arXiv Detail & Related papers (2022-05-28T06:08:07Z) - Learning to Search in Local Branching [3.3946853660795884]
Local branching (LB) has been proposed to produce improving solutions to mixed-integer linear programming problems.
For an LB algorithm, the choice of the neighborhood size is critical to performance.
In this work, we investigate the relation between the size of the search neighborhood and the behavior of the underlying LB algorithm.
arXiv Detail & Related papers (2021-12-03T23:54:29Z) - Improved Branch and Bound for Neural Network Verification via Lagrangian
Decomposition [161.09660864941603]
We improve the scalability of Branch and Bound (BaB) algorithms for formally proving input-output properties of neural networks.
We present a novel activation-based branching strategy and a BaB framework, named Branch and Dual Network Bound (BaDNB)
BaDNB outperforms previous complete verification systems by a large margin, cutting average verification times by factors up to 50 on adversarial properties.
arXiv Detail & Related papers (2021-04-14T09:22:42Z) - Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z) - A General Large Neighborhood Search Framework for Solving Integer Linear
Programs [46.62993477453986]
We focus on solving integer programs and ground our approach in the large neighborhood search (LNS) paradigm.
We show that our LNS framework can significantly outperform state-of-the-art commercial solvers such as Gurobi.
arXiv Detail & Related papers (2020-03-29T23:08:14Z)