RELS-DQN: A Robust and Efficient Local Search Framework for
Combinatorial Optimization
- URL: http://arxiv.org/abs/2304.06048v1
- Date: Tue, 11 Apr 2023 18:01:49 GMT
- Title: RELS-DQN: A Robust and Efficient Local Search Framework for
Combinatorial Optimization
- Authors: Yuanhang Shao, Tonmoy Dey, Nikola Vuckovic, Luke Van Popering, Alan
Kuhnle
- Abstract summary: We introduce RELS-DQN, a lightweight DQN framework that exhibits the local search behavior while providing practical scalability.
Trained on a single application, the RELS-DQN model generalizes to various applications, providing solution values higher than or equal to both the local search algorithms and the existing DQN models.
- Score: 11.269582666887324
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Combinatorial optimization (CO) aims to efficiently find the best solution to
NP-hard problems ranging from statistical physics to social media marketing. A
wide range of CO applications can benefit from local search methods because
they allow reversible actions, an advantage over greedy policies. Deep Q-learning (DQN) using
message-passing neural networks (MPNN) has shown promise in replicating the
local search behavior and obtaining comparable results to the local search
algorithms. However, the over-smoothing and the information loss during the
iterations of message passing limit its robustness across applications, and the
large message vectors result in memory inefficiency. Our paper introduces
RELS-DQN, a lightweight DQN framework that exhibits the local search behavior
while providing practical scalability. Trained on a single application, the
RELS-DQN model generalizes to various applications, providing solution values
higher than or equal to both the local search algorithms and the existing DQN
models while remaining efficient in runtime and memory.
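The reversible-action behavior described above can be sketched as a toy loop. Everything below (the 5-vertex max-cut instance, the one-step objective gain standing in for a learned Q-network, the epsilon-greedy rule) is our own illustrative assumption, not the RELS-DQN implementation:

```python
import random

# Toy max-cut instance on 5 vertices (a hypothetical example; RELS-DQN
# learns Q-values with a neural network, which we replace here by the
# one-step objective gain).
EDGES = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4)]

def cut_value(assign):
    return sum(1 for u, v in EDGES if assign[u] != assign[v])

def toggle_gain(assign, v):
    flipped = assign.copy()
    flipped[v] = 1 - flipped[v]
    return cut_value(flipped) - cut_value(assign)

def local_search_dqn_style(n=5, steps=30, eps=0.1, seed=0):
    rng = random.Random(seed)
    assign = [rng.randint(0, 1) for _ in range(n)]
    best = assign.copy()
    for _ in range(steps):
        # epsilon-greedy over reversible "toggle vertex" actions
        if rng.random() < eps:
            v = rng.randrange(n)
        else:
            v = max(range(n), key=lambda u: toggle_gain(assign, u))
        assign[v] = 1 - assign[v]  # reversible: the move may lower the cut
        best = max(best, assign.copy(), key=cut_value)
    return best, cut_value(best)

# eps=0.0 makes this small demo reproducible
best, value = local_search_dqn_style(eps=0.0)
```

The key property is that a toggle may temporarily lower the objective, which is what distinguishes this style of search from a purely greedy policy.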
Related papers
- MARCO: A Memory-Augmented Reinforcement Framework for Combinatorial Optimization [44.24494442399324]
This paper introduces a versatile framework, referred to as Memory-Augmented Reinforcement for Combinatorial Optimization (MARCO).
MARCO stores data collected throughout the optimization trajectory and retrieves contextually relevant information at each state.
Thanks to the parallel nature of NCO models, several search threads can run simultaneously, all sharing the same memory module.
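A minimal sketch of the store-and-retrieve idea, assuming bit-vector states and Hamming distance as the similarity measure (the class and method names are ours, not MARCO's API):

```python
from collections import deque

# Hypothetical memory module: store visited states with their objective
# values and retrieve the k stored states most similar to a query state.
class EpisodeMemory:
    def __init__(self, maxlen=1000):
        self.entries = deque(maxlen=maxlen)  # (state_tuple, value) pairs

    def store(self, state, value):
        self.entries.append((tuple(state), value))

    def retrieve(self, state, k=3):
        # Hamming distance as a simple similarity measure over bit-vectors
        dist = lambda s: sum(a != b for a, b in zip(s, state))
        return sorted(self.entries, key=lambda e: dist(e[0]))[:k]

mem = EpisodeMemory()
mem.store([0, 1, 1, 0], 2.0)
mem.store([1, 1, 1, 0], 3.5)
mem.store([0, 0, 0, 0], 1.0)
nearest = mem.retrieve([0, 1, 1, 1], k=2)
```

Several search threads could share one such module; here a bounded deque caps memory, and retrieval returns the k stored states closest to the query.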
arXiv Detail & Related papers (2024-08-05T03:15:21Z)
- Switchable Decision: Dynamic Neural Generation Networks [98.61113699324429]
We propose a switchable decision to accelerate inference by dynamically assigning resources for each data instance.
Our method reduces inference cost while maintaining the same accuracy.
arXiv Detail & Related papers (2024-05-07T17:44:54Z)
- Quality-Diversity Algorithms Can Provably Be Helpful for Optimization [24.694984679399315]
Quality-Diversity (QD) algorithms aim to find a set of high-performing, yet diverse solutions.
This paper tries to shed some light on the optimization ability of QD algorithms via rigorous running time analysis.
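As a concrete instance of the QD idea, here is a minimal MAP-Elites-style loop on a toy problem of our own choosing (bitstrings, descriptor = number of ones, fitness = the string read as a binary number); it is not the setting analyzed in the paper:

```python
import random

# Minimal MAP-Elites sketch: the archive keeps the best solution found
# per behavior-descriptor value, so the result is a set of high-quality
# yet behaviorally diverse solutions.
def map_elites(n=6, iterations=500, seed=1):
    rng = random.Random(seed)
    archive = {}  # descriptor (count of ones) -> (fitness, bits)

    def fitness(bits):
        return int("".join(map(str, bits)), 2)

    def add(bits):
        d = sum(bits)
        f = fitness(bits)
        if d not in archive or f > archive[d][0]:
            archive[d] = (f, bits)

    add([rng.randint(0, 1) for _ in range(n)])
    for _ in range(iterations):
        _, parent = archive[rng.choice(list(archive))]
        child = parent.copy()
        i = rng.randrange(n)
        child[i] = 1 - child[i]  # single-bit mutation
        add(child)
    return archive

archive = map_elites()
```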
arXiv Detail & Related papers (2024-01-19T07:40:24Z)
- Learning to Control Local Search for Combinatorial Optimization [4.243592852049962]
A zoo of generic as well as problem-specific variants of local search is commonly used to compute approximate solutions.
In this paper we identify three independent algorithmic aspects of such local search algorithms and formalize their sequential selection over an optimization process.
We show that NeuroLS outperforms both well-known general-purpose local search controllers from Operations Research and the latest machine learning-based approaches.
arXiv Detail & Related papers (2022-06-27T10:48:56Z)
- Deterministic Iteratively Built KD-Tree with KNN Search for Exact Applications [2.7325238096808318]
K-Nearest Neighbors (KNN) search is a fundamental algorithm in artificial intelligence software with applications in robotics and autonomous vehicles.
Similar to binary trees, kd-trees become unbalanced as new data is added in online applications which can lead to rapid degradation in search performance unless the tree is rebuilt.
We will present a "forest of interval kd-trees" which reduces the number of tree rebuilds, without compromising the exactness of query results.
arXiv Detail & Related papers (2021-06-07T17:09:22Z)
- Combined Depth Space based Architecture Search For Person Re-identification [70.86236888223569]
We aim to design a lightweight and suitable network for person re-identification (ReID).
We propose a novel search space called Combined Depth Space (CDS), based on which we search for an efficient network architecture, which we call CDNet.
We then propose a low-cost search strategy named the Top-k Sample Search strategy to make full use of the search space and avoid becoming trapped in local optima.
arXiv Detail & Related papers (2021-04-09T02:40:01Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
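To make the setting concrete, the sketch below runs a best-first tree search for a binary integer least-squares problem with a lower-triangular A (as after a QR step in sphere decoding); the zero heuristic is a stub where a learned network's estimate would be plugged in, and the instance is our own toy example:

```python
import heapq

# Toy integer least-squares search over x in {0, 1}^n, assuming A is
# lower triangular so the cost of the first d rows is fixed once the
# first d entries of x are chosen (an admissible partial cost).
def tree_search_ils(A, y):
    n = len(y)

    def partial_cost(x):  # exact cost of the first len(x) rows
        return sum((y[i] - sum(A[i][j] * x[j] for j in range(i + 1))) ** 2
                   for i in range(len(x)))

    def heuristic(x):
        return 0.0  # stub: a learned model would predict remaining cost

    heap = [(0.0, ())]
    while heap:
        _, x = heapq.heappop(heap)
        if len(x) == n:
            return list(x), partial_cost(x)
        for bit in (0, 1):
            child = x + (bit,)
            g = partial_cost(child)
            heapq.heappush(heap, (g + heuristic(child), child))

A = [[1, 0, 0],
     [1, 2, 0],
     [0, 1, 3]]
y = [1, 3, 4]
x_best, cost = tree_search_ils(A, y)
```

With an admissible heuristic the first full assignment popped is optimal; a tighter learned estimate would only prune the tree harder, which is the point of HATS.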
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- ISTA-NAS: Efficient and Consistent Neural Architecture Search by Sparse Coding [86.40042104698792]
We formulate neural architecture search as a sparse coding problem.
In experiments, our two-stage method on CIFAR-10 requires only 0.05 GPU-day for search.
Our one-stage method produces state-of-the-art performances on both CIFAR-10 and ImageNet at the cost of only evaluation time.
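For readers unfamiliar with the sparse-coding machinery, here is a generic ISTA iteration for the lasso problem min_x 0.5*||Ax - b||^2 + lam*||x||_1 in plain Python; the tiny dense instance is our own, and the paper's actual formulation over architecture parameters is more involved:

```python
def soft_threshold(v, t):
    # Proximal operator of the l1 norm: shrink each entry toward zero by t
    return [max(abs(vi) - t, 0.0) * (1 if vi > 0 else -1) for vi in v]

def ista(A, b, lam=0.1, step=None, iters=200):
    m, n = len(A), len(A[0])
    if step is None:
        # conservative step size: 1 / ||A||_F^2 <= 1 / ||A^T A||_2
        step = 1.0 / sum(sum(a * a for a in row) for row in A)
    x = [0.0] * n
    for _ in range(iters):
        # gradient of the smooth part: A^T (A x - b)
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        grad = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = soft_threshold([x[j] - step * grad[j] for j in range(n)],
                           step * lam)
    return x

# With A = I, the lasso solution is just soft-thresholding of b
x_hat = ista([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.05], lam=0.1)
```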
arXiv Detail & Related papers (2020-10-13T04:34:24Z)
- CATCH: Context-based Meta Reinforcement Learning for Transferrable Architecture Search [102.67142711824748]
CATCH is a novel Context-bAsed meTa reinforcement learning algorithm for transferrable arChitecture searcH.
The combination of meta-learning and RL allows CATCH to efficiently adapt to new tasks while being agnostic to search spaces.
It also handles cross-domain architecture search, identifying competitive networks on ImageNet, COCO, and Cityscapes.
arXiv Detail & Related papers (2020-07-18T09:35:53Z)
- Off-Policy Reinforcement Learning for Efficient and Effective GAN Architecture Search [50.40004966087121]
We introduce a new reinforcement learning based neural architecture search (NAS) methodology for generative adversarial network (GAN) architecture search.
The key idea is to formulate the GAN architecture search problem as a Markov decision process (MDP) for smoother architecture sampling.
We exploit an off-policy GAN architecture search algorithm that makes efficient use of the samples generated by previous policies.
arXiv Detail & Related papers (2020-07-17T18:29:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.