SPL-LNS: Sampling-Enhanced Large Neighborhood Search for Solving Integer Linear Programs
- URL: http://arxiv.org/abs/2508.16171v1
- Date: Fri, 22 Aug 2025 07:49:33 GMT
- Title: SPL-LNS: Sampling-Enhanced Large Neighborhood Search for Solving Integer Linear Programs
- Authors: Shengyu Feng, Zhiqing Sun, Yiming Yang
- Abstract summary: Large Neighborhood Search (LNS) is a common heuristic in combinatorial optimization that iteratively searches over a large neighborhood of the current solution for a better one. This paper introduces SPL-LNS, a sampling-enhanced neural LNS solver that leverages locally-informed proposals to escape local optima.
- Score: 59.91461374628034
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Neighborhood Search (LNS) is a common heuristic in combinatorial optimization that iteratively searches over a large neighborhood of the current solution for a better one. Recently, neural network-based LNS solvers have achieved great success in solving Integer Linear Programs (ILPs) by learning to greedily predict the locally optimal solution for the next neighborhood proposal. However, this greedy approach raises two key concerns: (1) to what extent this greedy proposal suffers from local optima, and (2) how can we effectively improve its sample efficiency in the long run. To address these questions, this paper first formulates LNS as a stochastic process, and then introduces SPL-LNS, a sampling-enhanced neural LNS solver that leverages locally-informed proposals to escape local optima. We also develop a novel hindsight relabeling method to efficiently train SPL-LNS on self-generated data. Experimental results demonstrate that SPL-LNS substantially surpasses prior neural LNS solvers for various ILP problems of different sizes.
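To make the LNS loop in the abstract concrete, here is a minimal toy sketch on a 0-1 knapsack ILP: each iteration "destroys" a few variables (unfixes them) and "repairs" by exhaustively re-optimizing the small sub-problem, a stand-in for the sub-ILP solve that a real LNS solver would delegate to an ILP solver. This is an illustrative sketch of generic greedy LNS, not the SPL-LNS method itself; the function name and parameters are assumptions for the example.

```python
import itertools
import random

def lns_knapsack(values, weights, capacity, destroy_size=2, iters=50, seed=0):
    """Toy LNS loop for a 0-1 knapsack ILP (maximization)."""
    rng = random.Random(seed)
    n = len(values)
    x = [0] * n  # start from the trivial feasible solution

    def objective(sol):
        return sum(v * s for v, s in zip(values, sol))

    def feasible(sol):
        return sum(w * s for w, s in zip(weights, sol)) <= capacity

    for _ in range(iters):
        # Destroy: unfix a random subset of variables.
        free = rng.sample(range(n), destroy_size)
        best, best_val = x, objective(x)
        # Repair: brute-force the sub-problem over the unfixed variables
        # (stands in for the sub-ILP solver call in a real LNS solver).
        for bits in itertools.product((0, 1), repeat=destroy_size):
            cand = x[:]
            for i, b in zip(free, bits):
                cand[i] = b
            if feasible(cand) and objective(cand) > best_val:
                best, best_val = cand, objective(cand)
        x = best  # greedy acceptance: keep the best repaired solution
    return x, objective(x)
```

The greedy acceptance rule here is exactly the behavior the paper questions: once no destroy set of the given size yields an improvement, the loop is stuck at a local optimum, which is what sampling-based proposals aim to escape.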
Related papers
- HyP-ASO: A Hybrid Policy-based Adaptive Search Optimization Framework for Large-Scale Integer Linear Programs [29.446016007712114]
HyP-ASO is a hybrid policy-based adaptive search optimization framework that combines a customized formula with deep Reinforcement Learning (RL). Experiments demonstrate that HyP-ASO significantly outperforms existing LNS-based approaches for large-scale ILPs. Additional experiments show it is lightweight and highly scalable, making it well-suited for solving large-scale ILPs.
arXiv Detail & Related papers (2025-09-19T09:59:52Z) - GDSG: Graph Diffusion-based Solution Generator for Optimization Problems in MEC Networks [109.17835015018532]
We present a Graph Diffusion-based Solution Generation (GDSG) method. This approach is designed to work with suboptimal datasets while converging to the optimal solution with high probability. We build GDSG as a multi-task diffusion model utilizing a Graph Neural Network (GNN) to acquire the distribution of high-quality solutions.
arXiv Detail & Related papers (2024-12-11T11:13:43Z) - PDHG-Unrolled Learning-to-Optimize Method for Large-Scale Linear Programming [36.13745722329505]
We propose an FOM-unrolled neural network (NN) called PDHG-Net to solve large-scale linear programming problems.
Experiments show that our approach can accelerate solving, achieving up to a 3× speedup compared to FOMs for large-scale LP problems.
arXiv Detail & Related papers (2024-06-04T02:39:42Z) - Learning-Enhanced Neighborhood Selection for the Vehicle Routing Problem with Time Windows [0.0]
We propose to integrate machine learning (ML) into Large Neighborhood Search (LNS) to decide which parts of the solution should be destroyed and repaired in each iteration of LNS.
Our approach is universally applicable, i.e., it can be applied to any LNS algorithm to strengthen its destroy step.
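Several of the learning-augmented LNS papers listed here learn which variables to destroy. A minimal sketch of one plausible selection rule, score-proportional sampling without replacement, is shown below; the per-variable scores stand in for the outputs of some learned model (e.g. a GNN over the ILP), and the function name and temperature parameter are illustrative assumptions, not the policy of any specific cited paper.

```python
import math
import random

def select_destroy_set(scores, k, temperature=1.0, seed=0):
    """Pick k distinct variable indices to destroy, sampled in
    proportion to softmax-weighted learned scores."""
    rng = random.Random(seed)
    weights = [math.exp(s / temperature) for s in scores]
    chosen = []
    idx = list(range(len(scores)))
    for _ in range(k):
        # Roulette-wheel draw over the remaining indices.
        total = sum(weights[i] for i in idx)
        r = rng.random() * total
        acc = 0.0
        for i in idx:
            acc += weights[i]
            if acc >= r:
                chosen.append(i)
                idx.remove(i)
                break
    return chosen
```

Sampling rather than always taking the top-k scores trades a little per-iteration greediness for diversity across iterations, which is the same greedy-vs-stochastic tension the SPL-LNS abstract above raises.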
arXiv Detail & Related papers (2024-03-13T12:08:27Z) - Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce Stochastic UnRolled Federated learning (SURF), a method that extends algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this extension, namely the need to feed whole datasets to the unrolled architecture and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z) - Searching Large Neighborhoods for Integer Linear Programs with Contrastive Learning [39.40838358438744]
Integer Linear Programs (ILPs) are powerful tools for modeling and solving a large number of optimization problems.
Large Neighborhood Search (LNS), as a heuristic algorithm, can find high-quality solutions to ILPs faster than Branch and Bound.
We propose a novel approach, CL-LNS, that delivers state-of-the-art anytime performance on several ILP benchmarks across multiple metrics.
arXiv Detail & Related papers (2023-02-03T07:15:37Z) - Learning To Dive In Branch And Bound [95.13209326119153]
We propose L2Dive to learn specific diving heuristics with graph neural networks.
We train generative models to predict variable assignments and leverage the duality of linear programs to make diving decisions.
arXiv Detail & Related papers (2023-01-24T12:01:45Z) - Local Branching Relaxation Heuristics for Integer Linear Programs [39.40838358438744]
Large Neighborhood Search (LNS) is a popular heuristic algorithm for solving combinatorial optimization problems (COPs).
In this paper, we focus on designing effective and efficient heuristics in LNS for Integer Linear Programs (ILPs).
Our proposed relaxation heuristics, LB-RELAX and its variants, use the linear programming relaxation of Local Branching (LB) to select neighborhoods.
arXiv Detail & Related papers (2022-12-15T22:53:09Z) - Learning from Images: Proactive Caching with Parallel Convolutional Neural Networks [94.85780721466816]
A novel framework for proactive caching is proposed in this paper.
It combines model-based optimization with data-driven techniques by transforming an optimization problem into a grayscale image.
Numerical results show that the proposed scheme can reduce computation time by 71.6% with only 0.8% additional performance cost.
arXiv Detail & Related papers (2021-08-15T21:32:47Z) - Learning Sampling Policy for Faster Derivative Free Optimization [100.27518340593284]
We propose a new reinforcement-learning-based zeroth-order (ZO) algorithm, ZO-RL, that learns the sampling policy for generating the perturbations in ZO optimization instead of using random sampling.
Our results show that our ZO-RL algorithm can effectively reduce the variance of ZO gradient estimates by learning a sampling policy, and converges faster than existing ZO algorithms in different scenarios.
arXiv Detail & Related papers (2021-04-09T14:50:59Z) - Resource Allocation via Model-Free Deep Learning in Free Space Optical Communications [119.81868223344173]
The paper investigates the general problem of resource allocation for mitigating channel fading effects in Free Space Optical (FSO) communications.
Under this framework, we propose two algorithms that solve FSO resource allocation problems.
arXiv Detail & Related papers (2020-07-27T17:38:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.