UDC: A Unified Neural Divide-and-Conquer Framework for Large-Scale Combinatorial Optimization Problems
- URL: http://arxiv.org/abs/2407.00312v3
- Date: Tue, 29 Oct 2024 15:56:09 GMT
- Title: UDC: A Unified Neural Divide-and-Conquer Framework for Large-Scale Combinatorial Optimization Problems
- Authors: Zhi Zheng, Changliang Zhou, Tong Xialiang, Mingxuan Yuan, Zhenkun Wang,
- Abstract summary: Two-stage neural methods motivated by divide-and-conquer strategies have shown efficiency in addressing large-scale CO problems.
This article develops a unified neural divide-and-conquer framework (i.e., UDC) for solving general large-scale CO problems.
- Score: 8.871356150316224
- Abstract: Single-stage neural combinatorial optimization solvers have achieved near-optimal results on various small-scale combinatorial optimization (CO) problems without requiring expert knowledge. However, these solvers exhibit significant performance degradation when applied to large-scale CO problems. Recently, two-stage neural methods motivated by divide-and-conquer strategies have shown efficiency in addressing large-scale CO problems. Nevertheless, the performance of these methods highly relies on problem-specific heuristics in either the dividing or the conquering procedure, which limits their applicability to general CO problems. Moreover, these methods employ separate training schemes and ignore the interdependencies between the dividing and conquering strategies, often leading to sub-optimal solutions. To tackle these drawbacks, this article develops a unified neural divide-and-conquer framework (i.e., UDC) for solving general large-scale CO problems. UDC offers a Divide-Conquer-Reunion (DCR) training method to eliminate the negative impact of a sub-optimal dividing policy. Employing a high-efficiency Graph Neural Network (GNN) for global instance dividing and a fixed-length sub-path solver for conquering divided sub-problems, the proposed UDC framework demonstrates extensive applicability, achieving superior performance in 10 representative large-scale CO problems. The code is available at https://github.com/CIAM-Group/NCO_code/tree/main/single_objective/UDC-Large-scale-CO-master.
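The abstract outlines a divide-and-conquer pipeline: a GNN-based dividing policy splits a large instance into fixed-length sub-problems, a sub-path solver conquers each one, and the partial solutions are reunited into a global solution. Below is a minimal, hypothetical Python sketch of that workflow for a toy Euclidean TSP instance; all names are illustrative stand-ins, and simple heuristics replace the learned GNN dividing policy and the neural conquering solver, so this is not the authors' implementation.

```python
# Minimal sketch of the divide / conquer / reunite workflow described above.
# All names are hypothetical; simple heuristics stand in for the learned
# GNN dividing policy and the neural fixed-length sub-path solver.
import random


def dist(instance, i, j):
    """Euclidean distance between nodes i and j of a toy TSP instance."""
    (xi, yi), (xj, yj) = instance[i], instance[j]
    return ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5


def divide(instance, sub_len):
    """Stand-in for the dividing policy: cut the node set into
    fixed-length sub-problems (UDC uses a GNN-generated global dividing policy)."""
    nodes = list(range(len(instance)))
    random.shuffle(nodes)  # placeholder for a learned global ordering
    return [nodes[i:i + sub_len] for i in range(0, len(nodes), sub_len)]


def conquer(instance, sub_nodes):
    """Stand-in for the fixed-length sub-path solver: order the sub-problem's
    nodes with a trivial nearest-neighbour pass."""
    remaining = set(sub_nodes)
    path = [remaining.pop()]
    while remaining:
        nxt = min(remaining, key=lambda j: dist(instance, path[-1], j))
        remaining.remove(nxt)
        path.append(nxt)
    return path


def solve(instance, sub_len=50):
    """Divide the instance, conquer each sub-problem, and reunite the
    sub-paths into one global tour (a greatly simplified 'reunion' step)."""
    tour = []
    for sub in divide(instance, sub_len):
        tour.extend(conquer(instance, sub))
    return tour


if __name__ == "__main__":
    coords = [(random.random(), random.random()) for _ in range(200)]
    tour = solve(coords)
    print(f"visited {len(tour)} of {len(coords)} nodes")
```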
Related papers
- IC/DC: Surpassing Heuristic Solvers in Combinatorial Optimization with Diffusion Models [6.260482448679642]
We present IC/DC, a learning-based optimization framework that operates without any supervision.
IC/DC is specialized in addressing problems involving two distinct sets of items, and it does not need problem-specific search processes to generate valid solutions.
We train our model in a self-supervised way to minimize the cost of the solution while adhering to the problem-specific constraints.
arXiv Detail & Related papers (2024-10-15T06:53:30Z)
- Learning to Solve Combinatorial Optimization under Positive Linear Constraints via Non-Autoregressive Neural Networks [103.78912399195005]
Combinatorial optimization (CO) is a fundamental problem at the intersection of computer science and applied mathematics.
In this paper, we design a family of non-autoregressive neural networks to solve CO problems under positive linear constraints.
We validate the effectiveness of this framework in solving representative CO problems including facility location, max-set covering, and traveling salesman problem.
arXiv Detail & Related papers (2024-09-06T14:58:31Z)
- CHARME: A chain-based reinforcement learning approach for the minor embedding problem [16.24890195949869]
We propose a novel approach utilizing Reinforcement Learning (RL) techniques to address the minor embedding problem, named CHARME.
CHARME includes three key components: a Graph Neural Network (GNN) architecture for policy modeling, a state transition algorithm ensuring solution validity, and an order exploration strategy for effective training.
In particular, CHARME yields superior solutions compared to fast embedding methods such as Minorminer and ATOM.
arXiv Detail & Related papers (2024-06-11T10:12:10Z)
- Decision-focused Graph Neural Networks for Combinatorial Optimization [62.34623670845006]
An emerging strategy to tackle optimization problems involves the adoption of graph neural networks (GNNs) as an alternative to traditional algorithms.
Despite the growing popularity of GNNs and traditional algorithm solvers in the realm of CO, there is limited research on their integrated use and the correlation between them within an end-to-end framework.
We introduce a decision-focused framework that utilizes GNNs to address CO problems with auxiliary support.
arXiv Detail & Related papers (2024-06-05T22:52:27Z)
- Instance-Conditioned Adaptation for Large-scale Generalization of Neural Combinatorial Optimization [15.842155380912002]
This work proposes a novel Instance-Conditioned Adaptation Model (ICAM) for better large-scale generalization of neural optimization.
In particular, we design a powerful yet lightweight instance-conditioned Routing adaptation module for the NCO model.
We develop an efficient three-stage reinforcement learning-based training scheme that enables the model to learn cross-scale features without any labeled optimal solution.
arXiv Detail & Related papers (2024-05-03T08:00:19Z)
- Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z)
- Let the Flows Tell: Solving Graph Combinatorial Optimization Problems with GFlowNets [86.43523688236077]
Combinatorial optimization (CO) problems are often NP-hard and out of reach for exact algorithms.
GFlowNets have emerged as a powerful machinery to efficiently sample from composite unnormalized densities sequentially.
In this paper, we design Markov decision processes (MDPs) for different problems and propose to train conditional GFlowNets to sample from the solution space.
arXiv Detail & Related papers (2023-05-26T15:13:09Z)
- A Bi-Level Framework for Learning to Solve Combinatorial Optimization on Graphs [91.07247251502564]
We propose a hybrid approach to combine the best of the two worlds, in which a bi-level framework is developed with an upper-level learning method to optimize the graph.
Such a bi-level approach simplifies the learning on the original hard CO and can effectively mitigate the demand for model capacity.
arXiv Detail & Related papers (2021-06-09T09:18:18Z)
- Reversible Action Design for Combinatorial Optimization with Reinforcement Learning [35.50454156611722]
Reinforcement learning (RL) has recently emerged as a new framework to tackle these problems.
We propose a general RL framework that not only exhibits state-of-the-art empirical performance but also generalizes to a wide variety of COPs.
arXiv Detail & Related papers (2021-02-14T18:05:42Z)
- MineReduce: an approach based on data mining for problem size reduction [58.720142291102135]
This paper presents an approach named MineReduce, which uses mined patterns to perform problem size reduction.
We present an application of MineReduce to improve a heuristic for the heterogeneous fleet vehicle routing problem.
arXiv Detail & Related papers (2020-05-15T08:49:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.