Learning to Control Local Search for Combinatorial Optimization
- URL: http://arxiv.org/abs/2206.13181v1
- Date: Mon, 27 Jun 2022 10:48:56 GMT
- Title: Learning to Control Local Search for Combinatorial Optimization
- Authors: Jonas K. Falkner, Daniela Thyssens, Ahmad Bdeir, and Lars
Schmidt-Thieme
- Abstract summary: A zoo of generic as well as problem-specific variants of local search is commonly used to compute approximate solutions.
In this paper we identify three independent algorithmic aspects of such local search algorithms and formalize their sequential selection over an optimization process.
We show that NeuroLS outperforms both well-known general-purpose local search controllers from Operations Research and recent machine learning-based approaches.
- Score: 4.243592852049962
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Combinatorial optimization problems are encountered in many practical
contexts such as logistics and production, but exact solutions are particularly
difficult to find, since these problems are usually NP-hard at sizes of practical
interest. To compute approximate solutions, a zoo of generic as well as
problem-specific variants of local search is commonly used. However, which
variant to apply to which particular problem is difficult to decide even for
experts.
In this paper we identify three independent algorithmic aspects of such local
search algorithms and formalize their sequential selection over an optimization
process as a Markov Decision Process (MDP). We design a deep graph neural network
as the policy model for this MDP, yielding a learned controller for local search
called NeuroLS. Ample experimental evidence shows that NeuroLS outperforms both
well-known general-purpose local search controllers from Operations Research and
recent machine learning-based approaches.
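To make the formulation concrete, here is a minimal sketch of how a learned controller could pick a local-search configuration at every step of an MDP-style loop. The operator set, the acceptance rules, the random `policy` stub, and the TSP-style objective are illustrative assumptions; NeuroLS itself uses a deep graph neural network policy trained by reinforcement learning.

```python
import random

# Minimal sketch of MDP-style control of local search on a TSP-like objective.
# The action has three components (operator, acceptance rule, perturbation flag),
# mirroring the idea of selecting several algorithmic aspects per step; the
# random policy stands in for the learned GNN controller.

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour):
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def relocate(tour):
    i, j = random.sample(range(len(tour)), 2)
    t = tour[:]
    t.insert(j, t.pop(i))
    return t

OPERATORS = {"two_opt": two_opt, "relocate": relocate}

def policy(state):
    # Placeholder for the learned controller: choose operator, acceptance rule,
    # and whether to perturb, given features of the current search state.
    return (random.choice(list(OPERATORS)),
            random.choice(["improve_only", "accept_worse"]),
            random.random() < 0.1)

def controlled_local_search(dist, steps=500):
    cur = list(range(len(dist)))
    random.shuffle(cur)
    best = cur[:]
    for _ in range(steps):
        op_name, acceptance, perturb = policy({"cost": tour_length(cur, dist)})
        if perturb:  # diversification step
            cur = relocate(two_opt(cur))
        cand = OPERATORS[op_name](cur)
        if acceptance == "accept_worse" or tour_length(cand, dist) <= tour_length(cur, dist):
            cur = cand
        if tour_length(cur, dist) < tour_length(best, dist):
            best = cur[:]
    return best, tour_length(best, dist)

if __name__ == "__main__":
    pts = [(random.random(), random.random()) for _ in range(20)]
    dist = [[((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 for b in pts] for a in pts]
    tour, cost = controlled_local_search(dist)
    print("tour length:", round(cost, 3))
```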
Related papers
- Modeling Local Search Metaheuristics Using Markov Decision Processes [0.0]
We introduce a theoretical framework based on Markov Decision Processes (MDP) for analyzing local search metaheuristics.
This framework not only helps in providing convergence results for individual algorithms, but also provides an explicit characterization of the exploration-exploitation tradeoff.
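As a rough illustration of such a formalization, the sketch below writes out MDP components for a generic local-search metaheuristic; the choice of states (incumbent solutions), actions (move operators), and reward (objective improvement) is an assumption made for illustration, not the paper's exact construction.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Illustrative MDP view of a local search metaheuristic: states are incumbent
# solutions, actions index move operators, the transition applies the chosen
# move, and the reward is the resulting objective improvement.

@dataclass
class LocalSearchMDP:
    objective: Callable[[List[int]], float]
    operators: List[Callable[[List[int]], List[int]]]  # action space

    def step(self, state: List[int], action: int) -> Tuple[List[int], float]:
        nxt = self.operators[action](state)
        reward = self.objective(state) - self.objective(nxt)  # > 0 if improved
        return nxt, reward

if __name__ == "__main__":
    import random
    obj = lambda p: sum(abs(p[i] - p[i + 1]) for i in range(len(p) - 1))
    def swap(p):
        i, j = random.sample(range(len(p)), 2)
        q = p[:]
        q[i], q[j] = q[j], q[i]
        return q
    mdp = LocalSearchMDP(objective=obj, operators=[swap])
    s = random.sample(range(10), 10)
    s, r = mdp.step(s, 0)
    print("reward:", r)
```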
arXiv Detail & Related papers (2024-07-29T11:28:30Z)
- Moco: A Learnable Meta Optimizer for Combinatorial Optimization [5.359176539960004]
Moco learns a graph neural network that updates the solution construction procedure based on features extracted from the current search state.
This meta training procedure targets the overall best solution found during the search procedure given information such as the search budget.
Moco is a fully learnable meta-optimizer that does not utilize any problem-specific local search or decomposition.
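A loose sketch of the outer loop this implies: read features of the current search state, let a learned model update the parameters that drive solution construction, construct new solutions within the budget, and keep the best. The toy knapsack instance, the per-item "heatmap", and the `update_heatmap` stub standing in for Moco's GNN are all assumptions.

```python
import random

# Illustrative meta-optimizer loop for constructive search on a toy knapsack
# instance. The "heatmap" (per-item scores guiding construction) plays the role
# of the construction parameters; update_heatmap is a stand-in for a learned GNN.

def construct(values, weights, capacity, heatmap):
    order = sorted(range(len(values)), key=lambda i: -heatmap[i])
    sol, load = [], 0
    for i in order:
        if load + weights[i] <= capacity:
            sol.append(i)
            load += weights[i]
    return sol

def update_heatmap(heatmap, best_sol, budget_left):
    # Stand-in for the learned update: nudge scores of items in the best
    # solution upward, with a bit of exploration noise.
    return [h + (0.5 if i in best_sol else 0.0) + random.gauss(0, 0.1)
            for i, h in enumerate(heatmap)]

def meta_search(values, weights, capacity, budget=50):
    heatmap = [random.random() for _ in values]
    best_sol, best_val = [], 0
    for t in range(budget):
        sol = construct(values, weights, capacity, heatmap)
        val = sum(values[i] for i in sol)
        if val > best_val:
            best_sol, best_val = sol, val
        heatmap = update_heatmap(heatmap, set(best_sol), budget - t)
    return best_sol, best_val

if __name__ == "__main__":
    random.seed(0)
    v = [random.randint(1, 20) for _ in range(30)]
    w = [random.randint(1, 10) for _ in range(30)]
    print(meta_search(v, w, capacity=60))
```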
arXiv Detail & Related papers (2024-02-07T14:41:17Z)
- Let the Flows Tell: Solving Graph Combinatorial Optimization Problems with GFlowNets [86.43523688236077]
Combinatorial optimization (CO) problems are often NP-hard and out of reach for exact algorithms.
GFlowNets have emerged as a powerful machinery to efficiently sample from composite unnormalized densities sequentially.
In this paper, we design Markov decision processes (MDPs) for different problems and propose to train conditional GFlowNets to sample from the solution space.
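Leaving training aside, the core sampling step, building a solution sequentially from a forward policy defined over an MDP, can be sketched as follows; the independent-set example and the fixed placeholder logits are illustrative assumptions rather than the paper's setup.

```python
import math
import random

# Illustrative sequential sampling of a combinatorial object (an independent
# set) from a forward policy, in the spirit of GFlowNet-style samplers. The
# per-node logits would normally come from a trained conditional model; here
# they are fixed placeholders.

def sample_independent_set(adj, logits, stop_logit=0.0):
    chosen, forbidden = set(), set()
    while True:
        candidates = [v for v in range(len(adj)) if v not in chosen and v not in forbidden]
        # Actions: add one feasible node, or stop.
        scores = [logits[v] for v in candidates] + [stop_logit]
        weights = [math.exp(s) for s in scores]
        r, acc, pick = random.random() * sum(weights), 0.0, len(scores) - 1
        for idx, w in enumerate(weights):
            acc += w
            if r <= acc:
                pick = idx
                break
        if pick == len(scores) - 1 or not candidates:
            return chosen
        v = candidates[pick]
        chosen.add(v)
        forbidden.update(adj[v])  # neighbors become infeasible

if __name__ == "__main__":
    adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
    logits = [0.5, 0.1, 0.2, 0.7]
    print(sample_independent_set(adj, logits))
```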
arXiv Detail & Related papers (2023-05-26T15:13:09Z)
- New Characterizations and Efficient Local Search for General Integer Linear Programming [17.80124240223163]
This work proposes new characterizations of integer linear programming (ILP) based on the concept of boundary solutions.
We develop a new local search algorithm, Local-ILP, which is efficient for solving general ILP and is validated on a large heterogeneous problem dataset.
Experiments conducted on the MIPLIB dataset show the effectiveness of our algorithm in solving large-scale hard ILP problems.
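As a toy illustration of local search on an ILP (not Local-ILP itself), the sketch below perturbs one integer variable at a time and scores candidates by objective value plus a penalty on constraint violation; the move rule and the penalty weight are assumptions.

```python
import random

# Toy local search for an ILP of the form: minimize c^T x subject to A x <= b,
# x integer and bounded. A move shifts one variable by +-1; moves are accepted
# if they do not worsen objective-plus-violation. This is a didactic
# simplification, not the Local-ILP algorithm.

def violation(A, b, x):
    return sum(max(0, sum(a_ij * x_j for a_ij, x_j in zip(row, x)) - b_i)
               for row, b_i in zip(A, b))

def score(c, A, b, x, penalty=1000.0):
    return sum(ci * xi for ci, xi in zip(c, x)) + penalty * violation(A, b, x)

def local_search_ilp(c, A, b, lo, hi, iters=2000):
    x = [random.randint(l, h) for l, h in zip(lo, hi)]
    best = x[:]
    for _ in range(iters):
        j = random.randrange(len(x))
        cand = x[:]
        cand[j] = min(hi[j], max(lo[j], cand[j] + random.choice([-1, 1])))
        if score(c, A, b, cand) <= score(c, A, b, x):
            x = cand
        if score(c, A, b, x) < score(c, A, b, best):
            best = x[:]
    return best, score(c, A, b, best)

if __name__ == "__main__":
    random.seed(1)
    # minimize -x0 - 2*x1  s.t.  x0 + x1 <= 6,  2*x0 + x1 <= 9,  0 <= x <= 5
    print(local_search_ilp(c=[-1, -2], A=[[1, 1], [2, 1]], b=[6, 9],
                           lo=[0, 0], hi=[5, 5]))
```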
arXiv Detail & Related papers (2023-04-29T07:22:07Z)
- RELS-DQN: A Robust and Efficient Local Search Framework for Combinatorial Optimization [11.269582666887324]
We introduce RELS-DQN, a lightweight DQN framework that exhibits the local search behavior while providing practical scalability.
Trained on a single application, the RELS-DQN model generalizes to various applications, providing solution values higher than or equal to those of both the local search algorithms and the existing DQN models.
arXiv Detail & Related papers (2023-04-11T18:01:49Z)
- Frequent Itemset-driven Search for Finding Minimum Node Separators in Complex Networks [61.2383572324176]
We propose a frequent itemset-driven search approach, which integrates the concept of frequent itemset mining in data mining into the well-known memetic search framework.
It iteratively employs the frequent itemset recombination operator to generate promising offspring solutions based on itemsets that frequently occur in high-quality solutions.
In particular, it discovers 29 new upper bounds and matches 18 previous best-known bounds.
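A heavily simplified version of the recombination idea, shown below: count how often individual elements (rather than general itemsets) occur across a pool of elite solutions, seed an offspring with those above a support threshold, and fill the remainder randomly. The set-based encoding and the thresholds are illustrative assumptions.

```python
import random
from collections import Counter

# Simplified frequent-item recombination: elements that appear often in elite
# solutions are treated as promising building blocks and seeded into the
# offspring; the remainder is filled randomly up to the target size.
# Real frequent itemset mining would consider co-occurring groups of elements,
# not just single elements.

def recombine(elite_solutions, universe, target_size, min_support=0.6):
    counts = Counter(e for sol in elite_solutions for e in sol)
    support = {e: counts[e] / len(elite_solutions) for e in counts}
    offspring = {e for e, s in support.items() if s >= min_support}
    remaining = [e for e in universe if e not in offspring]
    random.shuffle(remaining)
    while len(offspring) < target_size and remaining:
        offspring.add(remaining.pop())
    return offspring

if __name__ == "__main__":
    random.seed(0)
    elites = [{1, 2, 3, 7}, {1, 2, 4, 7}, {1, 2, 3, 9}, {1, 2, 5, 7}]
    print(recombine(elites, universe=range(10), target_size=4))
```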
arXiv Detail & Related papers (2022-01-18T11:16:40Z)
- Learning to Detect Critical Nodes in Sparse Graphs via Feature Importance Awareness [53.351863569314794]
The critical node problem (CNP) aims to find a set of critical nodes from a network whose deletion maximally degrades the pairwise connectivity of the residual network.
This work proposes a feature importance-aware graph attention network for node representation.
This representation is combined with a dueling double deep Q-network to create an end-to-end algorithm that solves the CNP for the first time.
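The objective being degraded, pairwise connectivity, is easy to state concretely: delete the candidate critical nodes, find the connected components of the residual graph, and sum |C|*(|C|-1)/2 over components. The BFS-based sketch below only makes this objective explicit; it is not the paper's learning pipeline.

```python
from collections import deque

# Pairwise connectivity of the residual graph after deleting a node set:
# the sum over connected components C of |C| * (|C| - 1) / 2. This is the
# quantity the critical node problem tries to minimize by choosing deletions.

def pairwise_connectivity(adj, removed):
    seen = set(removed)
    total = 0
    for start in adj:
        if start in seen:
            continue
        # BFS over the residual graph to collect one component.
        size, queue = 0, deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        total += size * (size - 1) // 2
    return total

if __name__ == "__main__":
    # A path 0-1-2-3-4: deleting node 2 splits it into {0, 1} and {3, 4}.
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
    print(pairwise_connectivity(adj, removed=[]))   # 10 = 5*4/2
    print(pairwise_connectivity(adj, removed=[2]))  # 2  = 1 + 1
```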
arXiv Detail & Related papers (2021-12-03T14:23:05Z)
- Adaptive Discretization in Online Reinforcement Learning [9.560980936110234]
Two major questions in designing discretization-based algorithms are how to create the discretization and when to refine it.
We provide a unified theoretical analysis of tree-based hierarchical partitioning methods for online reinforcement learning.
Our algorithms are easily adapted to operating constraints, and our theory provides explicit bounds across each of the three facets.
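A minimal illustration of "create the discretization and refine it" on a one-dimensional state space: keep a binary tree of intervals, route each observed state to its leaf, and split a leaf once it has been visited often enough. The visit threshold and the halving rule are assumptions, not the paper's algorithm.

```python
# Minimal adaptive discretization of [0, 1): leaves of a binary interval tree
# are the current cells; a cell is split in half once it has been visited
# `split_after` times. In an RL setting each leaf would also carry value and
# count statistics used by the learner.

class Cell:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self.visits = 0
        self.children = None  # None while this cell is a leaf

    def leaf_for(self, x):
        node = self
        while node.children is not None:
            node = node.children[0] if x < node.children[0].hi else node.children[1]
        return node

    def observe(self, x, split_after=10):
        leaf = self.leaf_for(x)
        leaf.visits += 1
        if leaf.visits >= split_after and leaf.hi - leaf.lo > 1e-3:
            mid = (leaf.lo + leaf.hi) / 2
            leaf.children = (Cell(leaf.lo, mid), Cell(mid, leaf.hi))
        return leaf

if __name__ == "__main__":
    import random
    random.seed(0)
    root = Cell(0.0, 1.0)
    # States concentrated near 0.8 lead to finer cells in that region.
    for _ in range(500):
        root.observe(min(0.999, max(0.0, random.gauss(0.8, 0.05))))
    def leaves(node):
        return [node] if node.children is None else leaves(node.children[0]) + leaves(node.children[1])
    print(sorted((round(c.lo, 3), round(c.hi, 3)) for c in leaves(root)))
```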
arXiv Detail & Related papers (2021-10-29T15:06:15Z)
- A Two-stage Framework and Reinforcement Learning-based Optimization Algorithms for Complex Scheduling Problems [54.61091936472494]
We develop a two-stage framework, in which reinforcement learning (RL) and traditional operations research (OR) algorithms are combined together.
The scheduling problem is solved in two stages: a finite Markov decision process (MDP) stage followed by a mixed-integer programming stage.
Results show that the proposed algorithms could stably and efficiently obtain satisfactory scheduling schemes for agile Earth observation satellite scheduling problems.
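A toy rendering of the two-stage idea on a single-machine problem: stage one (standing in for the learned MDP policy) picks a job order, stage two (standing in for the exact mixed-integer step) turns the order into a schedule and evaluates it. Both stages here are deliberately trivial placeholders.

```python
import random

# Two-stage sketch for a toy single-machine scheduling problem with release
# times and weights: stage 1 chooses a job order (placeholder for the learned
# policy), stage 2 converts the order into start times and evaluates the total
# weighted completion time (placeholder for the exact optimization step).

def stage1_order(jobs):
    # Placeholder "policy": weighted-shortest-processing-time ordering.
    return sorted(range(len(jobs)), key=lambda i: jobs[i]["p"] / jobs[i]["w"])

def stage2_schedule(jobs, order):
    t, total, starts = 0, 0, {}
    for i in order:
        t = max(t, jobs[i]["r"])  # wait for the job's release time
        starts[i] = t
        t += jobs[i]["p"]          # completion time of job i
        total += jobs[i]["w"] * t
    return starts, total

if __name__ == "__main__":
    random.seed(0)
    jobs = [{"p": random.randint(1, 5), "r": random.randint(0, 5), "w": random.randint(1, 3)}
            for _ in range(6)]
    order = stage1_order(jobs)
    starts, obj = stage2_schedule(jobs, order)
    print("order:", order, "weighted completion time:", obj)
```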
arXiv Detail & Related papers (2021-03-10T03:16:12Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
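To make the setting concrete, the sketch below runs a best-first (A*-style) search over integer candidates for a small least-squares system y ≈ Hx. The `heuristic` function, which HATS would estimate with a deep network, is replaced here by a trivial zero estimate, and the memory bounding is omitted.

```python
import heapq
import numpy as np

# Best-first (A*-style) tree search for a small integer least-squares problem
#   min_x ||y - H x||^2, with each x_i drawn from a finite integer alphabet.
# After a QR decomposition the cost splits row by row, so a partial assignment
# of the last k variables has an exact partial cost g. The heuristic for the
# remaining rows is the piece a learned estimator would supply; here it is a
# trivial zero (admissible) placeholder.

def heuristic(partial_x):
    return 0.0  # stand-in for the learned estimator

def integer_least_squares(H, y, alphabet=(-2, -1, 0, 1, 2)):
    n = H.shape[1]
    Q, R = np.linalg.qr(H)
    z = Q.T @ y
    heap = [(heuristic(()), 0.0, ())]  # (priority, partial cost, suffix x[n-k:])
    while heap:
        _, g, suffix = heapq.heappop(heap)
        k = len(suffix)
        if k == n:
            return np.array(suffix), g  # first completed node is optimal
        i = n - k - 1  # next variable / row of R, filled bottom-up
        for s in alphabet:
            new_suffix = (s,) + suffix
            resid = z[i] - R[i, i:] @ np.array(new_suffix)
            new_g = g + float(resid) ** 2
            heapq.heappush(heap, (new_g + heuristic(new_suffix), new_g, new_suffix))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    H = rng.normal(size=(4, 4))
    x_true = np.array([1, -2, 0, 2])
    y = H @ x_true + 0.01 * rng.normal(size=4)
    x_hat, cost = integer_least_squares(H, y)
    print("estimate:", x_hat, "residual cost:", round(cost, 4))
```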
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.