Learning to Solve Combinatorial Optimization under Positive Linear Constraints via Non-Autoregressive Neural Networks
- URL: http://arxiv.org/abs/2409.04495v1
- Date: Fri, 6 Sep 2024 14:58:31 GMT
- Title: Learning to Solve Combinatorial Optimization under Positive Linear Constraints via Non-Autoregressive Neural Networks
- Authors: Runzhong Wang, Yang Li, Junchi Yan, Xiaokang Yang
- Abstract summary: Combinatorial optimization (CO) is a fundamental problem at the intersection of computer science and applied mathematics.
In this paper, we design a family of non-autoregressive neural networks to solve CO problems under positive linear constraints.
We validate the effectiveness of this framework in solving representative CO problems including facility location, max-set covering, and traveling salesman problem.
- Score: 103.78912399195005
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Combinatorial optimization (CO) is a fundamental problem at the intersection of computer science and applied mathematics. The inherent hardness of CO problems makes solving them exactly challenging, which has made deep-neural-network-based solvers a research frontier. In this paper, we design a family of non-autoregressive neural networks to solve CO problems under positive linear constraints, with the following merits. First, the positive linear constraint covers a wide range of CO problems, meaning that our approach breaks the generality bottleneck of existing non-autoregressive networks. Second, compared to existing autoregressive neural network solvers, our non-autoregressive networks have the advantages of higher efficiency and preserved permutation invariance. Third, our offline unsupervised learning reduces the reliance on high-quality labels, removing the need for the optimal solutions that supervised learning demands as labels. Fourth, our online differentiable search method significantly improves the generalizability of our neural network solver to unseen problems. We validate the effectiveness of this framework on representative CO problems including facility location, max-set covering, and the traveling salesman problem. Our non-autoregressive neural solvers are competitive with, and can even be superior to, state-of-the-art solvers such as SCIP and Gurobi, especially when both efficiency and efficacy are considered. Code is available at https://github.com/Thinklab-SJTU/NAR-CO-Solver
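To make the recipe in the abstract concrete, here is a minimal sketch under our own assumptions: `Encoder`, `feasibility_rescale`, and `unsupervised_step` are hypothetical placeholders, not the authors' API, and the toy rescaling stands in for the paper's actual constraint layer. It exploits the fact that for positive linear constraints (A, x >= 0), uniformly shrinking x by the worst overload factor enforces A @ x <= b while staying differentiable.

```python
import torch

class Encoder(torch.nn.Module):
    """Hypothetical non-autoregressive encoder: one logit per decision
    variable, decoded in a single parallel forward pass, so the result
    does not depend on any generation order (permutation invariance)."""
    def __init__(self, in_dim, hidden=128):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(in_dim, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 1))

    def forward(self, feats):                  # feats: (n_vars, in_dim)
        return self.net(feats).squeeze(-1)     # (n_vars,) logits

def feasibility_rescale(x, A, b):
    """Toy differentiable feasibility step (stand-in for the paper's
    LinSAT-style layer): since A, x >= 0, dividing x by the largest
    per-constraint overload factor guarantees A @ x <= b."""
    overload = torch.clamp((A @ x) / b, min=1.0).max()
    return x / overload

def unsupervised_step(model, feats, A, b, c, opt):
    """One offline unsupervised step: the loss is the objective c @ x
    itself, evaluated on the soft feasible solution -- no optimal labels."""
    x = feasibility_rescale(torch.sigmoid(model(feats)), A, b)
    loss = c @ x                               # minimize a linear objective
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```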
Related papers
- LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce positive linear satisfiability constraints into neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions.
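For context, the classic Sinkhorn algorithm that the layer extends can be sketched in a few lines; LinSATNet's contribution is generalizing it to jointly encode multiple sets of marginal distributions, which is not reproduced in this sketch.

```python
import torch

def sinkhorn(scores, n_iters=20, tau=0.05):
    # Classic Sinkhorn iteration: exponentiate the score matrix, then
    # alternately normalize rows and columns. At convergence the matrix
    # is doubly stochastic, i.e. it satisfies the row marginals and the
    # column marginals simultaneously.
    s = torch.exp(scores / tau)
    for _ in range(n_iters):
        s = s / s.sum(dim=1, keepdim=True)   # rows sum to 1
        s = s / s.sum(dim=0, keepdim=True)   # columns sum to 1
    return s
```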
arXiv Detail & Related papers (2024-07-18T22:05:21Z)
- Decision-focused Graph Neural Networks for Combinatorial Optimization [62.34623670845006]
An emerging strategy to tackle optimization problems involves the adoption of graph neural networks (GNNs) as an alternative to traditional algorithms.
Despite the growing popularity of both GNNs and traditional algorithmic solvers for CO, there is limited research on their integrated use within an end-to-end framework.
We introduce a decision-focused framework that utilizes GNNs to address CO problems with auxiliary support.
arXiv Detail & Related papers (2024-06-05T22:52:27Z)
- Network Interdiction Goes Neural [26.308933674471895]
Network interdiction problems are optimization problems involving two players.
One aims to solve an optimization problem on a network, while the other seeks to modify the network to thwart the first player's objectives.
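A generic way to write this two-player structure is a max-min (bilevel) program; the form below is a standard illustration, not necessarily the exact model used in the paper:

```latex
\max_{y \in \{0,1\}^{|E|}} \;\; \min_{x \in X(y)} \; c^\top x
\qquad \text{s.t.} \quad \sum_{e \in E} r_e\, y_e \le B
```

where $y_e = 1$ means the interdictor removes or penalizes edge $e$ at cost $r_e$ within budget $B$, and $X(y)$ is the first player's feasible set (e.g. shortest-path or max-flow constraints) on the modified network.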
arXiv Detail & Related papers (2024-05-26T02:34:26Z)
- Neural Algorithmic Reasoning for Combinatorial Optimisation [20.36694807847833]
We propose leveraging recent advancements in neural reasoning to improve the learning of CO problems.
We suggest pre-training our neural model on relevant algorithms before training it on CO instances.
Our results demonstrate that by using this learning setup, we achieve superior performance compared to non-algorithmically informed deep learning models.
arXiv Detail & Related papers (2023-05-18T13:59:02Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
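For reference, the threshold (unit-step) activation in question and the weight-decay-regularized training objective can be written as follows; the notation is ours, not the paper's:

```latex
\sigma(t) = \mathbb{1}\{t \ge 0\}, \qquad
\min_{\theta}\; \mathcal{L}\big(f_{\theta}(X),\, y\big) + \lambda \lVert \theta \rVert_2^2
```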
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Unsupervised Optimal Power Flow Using Graph Neural Networks [172.33624307594158]
We use a graph neural network to learn a nonlinear parametrization mapping the power demanded to the corresponding allocation.
We show through simulations that the use of GNNs in this unsupervised learning context leads to solutions comparable to standard solvers.
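A hedged sketch of what such an unsupervised objective could look like: the GNN maps nodal demand to a generation allocation, and the loss is the generation cost plus a penalty on supply-demand imbalance. All names (`gnn`, `opf_unsupervised_loss`) and the quadratic cost model are illustrative assumptions, not the paper's formulation.

```python
import torch

def opf_unsupervised_loss(gnn, demand, graph, cost_coef, penalty=100.0):
    # Hypothetical unsupervised objective for optimal power flow: no
    # solver-produced labels are needed, because the loss is the
    # generation cost itself plus a feasibility penalty.
    alloc = gnn(demand, graph)                       # per-node generation
    cost = (cost_coef * alloc.pow(2)).sum()          # quadratic generation cost
    imbalance = (alloc.sum() - demand.sum()).pow(2)  # total supply == total demand
    return cost + penalty * imbalance
```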
arXiv Detail & Related papers (2022-10-17T17:30:09Z)
- Optimization-Informed Neural Networks [0.6853165736531939]
We propose optimization-informed neural networks (OINN) to solve constrained nonlinear programs (CNLPs).
In a nutshell, OINN transforms a CNLP into a neural network training problem.
The effectiveness of the proposed approach is demonstrated through a collection of classical problems.
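One plausible rendering of this "optimization as training" idea is a penalty-method loss in which the network's output is the candidate solution; the function below is a hypothetical sketch under that assumption, not the OINN formulation itself.

```python
import torch

def oinn_style_loss(model, z, objective, ineq_constraints, rho=10.0):
    # Hypothetical penalty-based training loss: the CNLP objective plus
    # quadratic penalties on violated inequality constraints g_i(x) <= 0.
    # Names and the penalty scheme are illustrative.
    x = model(z)                                   # candidate decision vector
    loss = objective(x)
    for g in ineq_constraints:
        loss = loss + rho * torch.relu(g(x)).pow(2).sum()
    return loss
```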
arXiv Detail & Related papers (2022-10-05T09:28:55Z)
- Edge Rewiring Goes Neural: Boosting Network Resilience via Policy Gradient [62.660451283548724]
ResiNet is a reinforcement learning framework to discover resilient network topologies against various disasters and attacks.
We show that ResiNet achieves a near-optimal resilience gain on multiple graphs while balancing the utility, with a large margin compared to existing approaches.
arXiv Detail & Related papers (2021-10-18T06:14:28Z)