Combinatorial Optimization with Physics-Inspired Graph Neural Networks
- URL: http://arxiv.org/abs/2107.01188v1
- Date: Fri, 2 Jul 2021 16:54:35 GMT
- Title: Combinatorial Optimization with Physics-Inspired Graph Neural Networks
- Authors: Martin J. A. Schuetz, J. Kyle Brubaker, Helmut G. Katzgraber
- Abstract summary: We show how graph neural networks can be used to solve combinatorial optimization problems.
We find that the neural network performs on par with or outperforms existing solvers.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We demonstrate how graph neural networks can be used to solve combinatorial
optimization problems. Our approach is broadly applicable to canonical NP-hard
problems in the form of quadratic unconstrained binary optimization problems,
such as maximum cut, minimum vertex cover, maximum independent set, as well as
Ising spin glasses and higher-order generalizations thereof in the form of
polynomial unconstrained binary optimization problems. We apply a relaxation
strategy to the problem Hamiltonian to generate a differentiable loss function
with which we train the graph neural network and apply a simple projection to
integer variables once the unsupervised training process has completed. We
showcase our approach with numerical results for the canonical maximum cut and
maximum independent set problems. We find that the graph neural network
optimizer performs on par with or outperforms existing solvers, with the ability to
scale beyond the state of the art to problems with millions of variables.
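
The pipeline described in the abstract (relax the QUBO Hamiltonian into a differentiable loss, train a GNN without labels, then project the soft outputs to integers) can be condensed into a short training loop. What follows is a minimal sketch, assuming PyTorch, a random toy graph, and a hand-rolled two-layer graph convolution; it illustrates the relax-train-project recipe rather than the authors' implementation, and the architecture, hyperparameters, and helper names are all assumptions.

```python
import torch
import torch.nn as nn

def maxcut_qubo(adj: torch.Tensor) -> torch.Tensor:
    # MaxCut as a QUBO: for binary x (so x_i^2 = x_i), maximizing the cut
    # is equivalent to minimizing x^T Q x with Q = A - diag(deg).
    return adj - torch.diag(adj.sum(dim=1))

class TinyGCN(nn.Module):
    # Two graph-convolution layers over learnable node embeddings,
    # emitting one soft assignment p_i in (0, 1) per node.
    def __init__(self, n_nodes: int, hidden: int = 16):
        super().__init__()
        self.emb = nn.Embedding(n_nodes, hidden)
        self.lin1 = nn.Linear(hidden, hidden)
        self.lin2 = nn.Linear(hidden, 1)

    def forward(self, a_hat: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.lin1(a_hat @ self.emb.weight))
        return torch.sigmoid(self.lin2(a_hat @ h)).squeeze(-1)

# Toy random graph with a symmetric 0/1 adjacency matrix.
n = 100
adj = (torch.rand(n, n) < 0.05).float()
adj = torch.triu(adj, diagonal=1)
adj = adj + adj.T

# Symmetrically normalized adjacency with self-loops, GCN-style.
a_loop = adj + torch.eye(n)
deg = a_loop.sum(dim=1)
a_hat = a_loop / torch.sqrt(deg[:, None] * deg[None, :])

Q = maxcut_qubo(adj)
model = TinyGCN(n)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for _ in range(300):
    p = model(a_hat)   # relaxed variables p in [0, 1]^n
    loss = p @ Q @ p   # differentiable relaxation of the Hamiltonian
    opt.zero_grad()
    loss.backward()
    opt.step()

# Project the soft assignments back to binary variables and score the cut.
x = (model(a_hat) > 0.5).float()
print("cut size:", int((x @ adj @ (1.0 - x)).item()))
```

Swapping in a different QUBO (or higher-order PUBO) matrix, e.g. a penalty-weighted one for maximum independent set, reuses the same unsupervised loop unchanged.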
Related papers
- Enhancing GNNs Performance on Combinatorial Optimization by Recurrent Feature Update [0.09986418756990156]
We introduce a novel algorithm, denoted hereafter as QRF-GNN, leveraging the power of GNNs to efficiently solve combinatorial optimization (CO) problems.
It relies on unsupervised learning by minimizing a loss function derived from a QUBO relaxation.
Experiments show that QRF-GNN drastically surpasses existing learning-based approaches and is comparable to state-of-the-art conventional solvers.
arXiv Detail & Related papers (2024-07-23T13:34:35Z)
- Decision-focused Graph Neural Networks for Combinatorial Optimization [62.34623670845006]
An emerging strategy to tackle optimization problems involves the adoption of graph neural networks (GNNs) as an alternative to traditional algorithms.
Despite the growing popularity of GNNs and traditional algorithmic solvers in the realm of CO, there is limited research on their integrated use and the correlation between them within an end-to-end framework.
We introduce a decision-focused framework that utilizes GNNs to address CO problems with auxiliary support.
arXiv Detail & Related papers (2024-06-05T22:52:27Z)
- Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z)
- Are Graph Neural Networks Optimal Approximation Algorithms? [26.5364112420121]
We design graph neural network architectures, termed OptGNN, that capture optimal approximation algorithms for a class of optimization problems.
We take advantage of OptGNN's ability to capture convex relaxations to design an algorithm for producing bounds on the optimal solution from the learned embeddings of OptGNN.
arXiv Detail & Related papers (2023-10-01T00:12:31Z)
- Learning to solve Minimum Cost Multicuts efficiently using Edge-Weighted Graph Convolutional Neural Networks [13.985534521589257]
Graph convolutional neural networks (GNNs) have proven to be promising in the context of optimization.
We adapt various GNN architectures, including Graph Convolutional Networks, Signed Graph Convolutional Networks, and Graph Isomorphism Networks.
We provide the first approach towards end-to-end trainable multicuts.
arXiv Detail & Related papers (2022-04-04T10:21:02Z)
- Meta-Solver for Neural Ordinary Differential Equations [77.8918415523446]
We investigate how the variability in the space of solvers can improve the performance of neural ODEs.
We show that the right choice of solver parameterization can significantly affect the robustness of neural ODE models to adversarial attacks.
arXiv Detail & Related papers (2021-03-15T17:26:34Z)
- A Flexible Framework for Designing Trainable Priors with Adaptive Smoothing and Game Encoding [57.1077544780653]
We introduce a general framework for designing and training neural network layers whose forward passes can be interpreted as solving non-smooth convex optimization problems.
We focus on convex games, solved by local agents represented by the nodes of a graph and interacting through regularization functions.
This approach is appealing for solving imaging problems, as it allows the use of classical image priors within deep models that are trainable end to end.
arXiv Detail & Related papers (2020-06-26T08:34:54Z)
- Erdos Goes Neural: an Unsupervised Learning Framework for Combinatorial Optimization on Graphs [35.14404918074861]
This work proposes an unsupervised learning framework for combinatorial optimization problems on graphs.
Inspired by Erdos' probabilistic method, we use a neural network to parametrize a probability distribution over sets.
We show that when the network is optimized w.r.t. a suitably chosen loss, the learned distribution contains, with controlled probability, a low-cost integral solution (a toy sketch of this recipe follows the entry).
arXiv Detail & Related papers (2020-06-18T16:13:36Z)
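
The probabilistic recipe in the entry above lends itself to a compact demonstration. The sketch below is a toy under stated assumptions, not the paper's code: node probabilities are optimized directly instead of being produced by a GNN, the loss is the closed-form expectation of an independent-set penalty, and the final rounding uses the method of conditional expectation on which the probabilistic guarantee rests.

```python
import torch

# Toy graph: symmetric 0/1 adjacency matrix.
n = 30
adj = (torch.rand(n, n) < 0.15).float()
adj = torch.triu(adj, diagonal=1)
adj = adj + adj.T

beta = 2.0  # penalty for keeping both endpoints of an edge

def expected_loss(p: torch.Tensor) -> torch.Tensor:
    # E[-|S| + beta * #(edges inside S)] when node i enters S
    # independently with probability p_i; the factor 0.5 corrects
    # the double count from the symmetric adjacency.
    return -p.sum() + 0.5 * beta * (p @ adj @ p)

# Bernoulli inclusion probabilities (a GNN would output these instead).
logits = torch.zeros(n, requires_grad=True)
opt = torch.optim.Adam([logits], lr=5e-2)
for _ in range(500):
    loss = expected_loss(torch.sigmoid(logits))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Method of conditional expectation: fix nodes one at a time to the
# value that does not increase the expected loss, yielding an integral
# solution whose loss is no worse than the trained expectation.
p = torch.sigmoid(logits).detach()
for i in range(n):
    p_in, p_out = p.clone(), p.clone()
    p_in[i], p_out[i] = 1.0, 0.0
    p = p_in if expected_loss(p_in) <= expected_loss(p_out) else p_out
print("selected nodes:", int(p.sum().item()))
```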
- The Hidden Convex Optimization Landscape of Two-Layer ReLU Neural Networks: an Exact Characterization of the Optimal Solutions [51.60996023961886]
We prove that finding all globally optimal two-layer ReLU neural networks can be performed by solving a convex optimization program with cone constraints.
Our analysis is novel, characterizes all optimal solutions, and does not leverage duality-based analysis which was recently used to lift neural network training into convex spaces.
arXiv Detail & Related papers (2020-06-10T15:38:30Z)
- Gumbel-softmax-based Optimization: A Simple General Framework for Optimization Problems on Graphs [5.486093983007419]
We propose a simple, fast, and general algorithm framework based on advanced automatic differentiation techniques empowered by deep learning frameworks.
High-quality solutions can be obtained in much less time than with traditional approaches (a minimal sketch follows this entry).
arXiv Detail & Related papers (2020-04-14T14:11:00Z)
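
As a minimal sketch of the Gumbel-softmax idea in the entry above (not this paper's implementation; the MaxCut objective, toy graph, and hyperparameters are assumptions), discrete per-node decisions can be relaxed into differentiable samples via `torch.nn.functional.gumbel_softmax` and optimized by plain gradient descent.

```python
import torch
import torch.nn.functional as F

# Toy graph: symmetric 0/1 adjacency matrix.
n = 80
adj = (torch.rand(n, n) < 0.08).float()
adj = torch.triu(adj, diagonal=1)
adj = adj + adj.T

# One pair of logits per node for a binary side assignment.
logits = torch.zeros(n, 2, requires_grad=True)
opt = torch.optim.Adam([logits], lr=5e-2)

for _ in range(500):
    # Soft one-hot samples; gradients flow through the relaxation.
    y = F.gumbel_softmax(logits, tau=1.0, hard=False)  # shape (n, 2)
    x = y[:, 1]                # soft probability of side "1"
    cut = x @ adj @ (1.0 - x)  # relaxed cut size
    loss = -cut                # maximize the cut
    opt.zero_grad()
    loss.backward()
    opt.step()

# Discretize: take the argmax side per node and report the hard cut.
x = logits.argmax(dim=1).float()
print("cut size:", int((x @ adj @ (1.0 - x)).item()))
```

Passing hard=True instead would yield straight-through one-hot samples that stay discrete in the forward pass while still propagating gradients through the soft relaxation.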
- Graph Ordering: Towards the Optimal by Learning [69.72656588714155]
Graph representation learning has achieved remarkable success in many graph-based applications, such as node classification, link prediction, and community detection.
However, for some kinds of graph applications, such as graph compression and edge partition, it is very hard to reduce them to graph representation learning tasks.
In this paper, we propose to attack the graph ordering problem behind such applications with a novel learning approach.
arXiv Detail & Related papers (2020-01-18T09:14:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.