On Representing Linear Programs by Graph Neural Networks
- URL: http://arxiv.org/abs/2209.12288v2
- Date: Thu, 25 May 2023 20:00:12 GMT
- Title: On Representing Linear Programs by Graph Neural Networks
- Authors: Ziang Chen, Jialin Liu, Xinshang Wang, Jianfeng Lu, Wotao Yin
- Abstract summary: The graph neural network (GNN) is considered a suitable machine learning model for optimization problems.
This paper establishes the theoretical foundation of applying GNNs to solving linear program (LP) problems.
We show that properly built GNNs can reliably predict feasibility, boundedness, and an optimal solution for each LP in a broad class.
- Score: 30.998508318941017
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning to optimize is a rapidly growing area that aims to solve
optimization problems or improve existing optimization algorithms using machine
learning (ML). In particular, the graph neural network (GNN) is considered a
suitable ML model for optimization problems whose variables and constraints are
permutation-invariant, for example, the linear program (LP). While the
literature has reported encouraging numerical results, this paper establishes
the theoretical foundation of applying GNNs to solving LPs. Given any size
limit of LPs, we construct a GNN that maps different LPs to different outputs.
We show that properly built GNNs can reliably predict feasibility, boundedness,
and an optimal solution for each LP in a broad class. Our proofs are based upon
the recently discovered connections between the Weisfeiler-Lehman isomorphism
test and the GNN. To validate our results, we train a simple GNN and present
its accuracy in mapping LPs to their feasibilities and solutions.
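As a concrete illustration of the encoding the paper studies, below is a minimal, hedged sketch of how an LP of the form min c^T x s.t. Ax <= b, x >= 0 can be represented as a bipartite graph (constraint nodes on one side, variable nodes on the other, with the entries A_ij as edge weights) and processed by a small message-passing network. The layer sizes, node features, and readout are illustrative assumptions, not the authors' exact architecture.

```python
# Hedged sketch: a bipartite-graph GNN over an LP
#   min c^T x  s.t.  A x <= b,  x >= 0
# Constraint node i carries b_i, variable node j carries c_j,
# and the edge (i, j) carries the coefficient A_ij.
import torch
import torch.nn as nn

class LPBipartiteGNN(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.embed_var = nn.Linear(1, hidden)   # variable feature: c_j
        self.embed_con = nn.Linear(1, hidden)   # constraint feature: b_i
        self.msg_v2c = nn.Linear(hidden, hidden)
        self.msg_c2v = nn.Linear(hidden, hidden)
        self.readout = nn.Linear(hidden, 1)     # per-variable output

    def forward(self, A, b, c):
        h_v = torch.relu(self.embed_var(c.unsqueeze(-1)))    # (n, hidden)
        h_c = torch.relu(self.embed_con(b.unsqueeze(-1)))    # (m, hidden)
        for _ in range(2):                                   # two message-passing rounds
            h_c = torch.relu(h_c + A @ self.msg_v2c(h_v))    # variables -> constraints
            h_v = torch.relu(h_v + A.T @ self.msg_c2v(h_c))  # constraints -> variables
        return self.readout(h_v).squeeze(-1)                 # e.g. a predicted solution x

# Toy LP: min -x1 - x2  s.t.  x1 + x2 <= 1, x >= 0
A = torch.tensor([[1.0, 1.0]])
b = torch.tensor([1.0])
c = torch.tensor([-1.0, -1.0])
print(LPBipartiteGNN()(A, b, c))
```

Because permuting the variables or constraints of the LP only permutes the node sets of this graph, the network's outputs permute accordingly; this is the permutation symmetry the paper's separation and approximation results rest on.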
Related papers
- Enhancing GNNs Performance on Combinatorial Optimization by Recurrent Feature Update [0.09986418756990156]
We introduce a novel algorithm, denoted hereafter as QRF-GNN, leveraging the power of GNNs to efficiently solve combinatorial optimization (CO) problems.
It relies on unsupervised learning by minimizing a loss function derived from the QUBO relaxation.
Experimental results show that QRF-GNN drastically surpasses existing learning-based approaches and is comparable to state-of-the-art conventional solvers.
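For context, here is a minimal sketch of the generic QUBO-relaxation loss that such unsupervised approaches minimize: binary variables are relaxed to probabilities via a sigmoid over the GNN's outputs, and the QUBO objective is evaluated on the relaxation. QRF-GNN's recurrent feature updates and architecture are not reproduced here, and the toy MaxCut instance is an illustrative assumption.

```python
# Hedged sketch of a QUBO-relaxation loss (not QRF-GNN itself).
import torch

def qubo_relaxation_loss(logits, Q):
    """Relax x in {0,1}^n to p in [0,1]^n and evaluate p^T Q p."""
    p = torch.sigmoid(logits)   # soft assignment from GNN outputs
    return p @ Q @ p

# Toy MaxCut QUBO on the path graph 0-1-2: Q = adjacency - diag(degrees)
adjacency = torch.tensor([[0., 1., 0.],
                          [1., 0., 1.],
                          [0., 1., 0.]])
Q = adjacency - torch.diag(adjacency.sum(dim=1))
logits = torch.zeros(3, requires_grad=True)  # stand-in for GNN outputs
loss = qubo_relaxation_loss(logits, Q)
loss.backward()                              # gradients flow back into the GNN
```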
arXiv Detail & Related papers (2024-07-23T13:34:35Z)
- Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
Existing GNNs' generalization ability degrades when there are distribution shifts between training and testing graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
arXiv Detail & Related papers (2023-12-19T12:25:10Z)
- Mixed-Integer Optimisation of Graph Neural Networks for Computer-Aided Molecular Design [4.593587844188084]
ReLU neural networks have been modelled as constraints in mixed-integer linear programming (MILP).
We propose a formulation for ReLU Graph Convolutional Neural Networks and an MILP formulation for ReLU GraphSAGE models.
These formulations enable solving optimisation problems with trained GNNs embedded to global optimality.
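For reference, such formulations build on the standard big-M encoding of a single ReLU unit y = max(0, w^T x + b); a hedged sketch follows, assuming precomputed bounds L <= w^T x + b <= U with L < 0 < U (the paper's GCN and GraphSAGE formulations add the graph aggregation structure on top of such unit-level constraints).

```latex
% Big-M MILP encoding of y = max(0, w^T x + b),
% assuming known bounds L <= w^T x + b <= U with L < 0 < U.
\begin{align*}
  y &\ge w^\top x + b, \\
  y &\le w^\top x + b - L(1 - z), \\
  y &\le U z, \\
  y &\ge 0, \qquad z \in \{0, 1\}.
\end{align*}
% z = 1 forces y = w^T x + b (unit active); z = 0 forces y = 0.
```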
arXiv Detail & Related papers (2023-12-02T21:10:18Z)
- Exploring the Power of Graph Neural Networks in Solving Linear Optimization Problems [5.966097889241178]
Message-passing graph neural networks (MPNNs) have been shown to speed up solving mixed-integer optimization problems.
However, MPNNs' effectiveness in emulating linear optimization remains largely unclear.
We highlight how MPNNs can serve as a lightweight proxy for solving linear optimization problems.
arXiv Detail & Related papers (2023-10-16T17:31:25Z)
- GNN-Ensemble: Towards Random Decision Graph Neural Networks [3.7620848582312405]
Graph Neural Networks (GNNs) have enjoyed widespread application to graph-structured data.
GNNs are required to learn latent patterns from a limited amount of training data to perform inference on a vast amount of test data.
In this paper, we push ensemble learning of GNNs one step forward, achieving improved accuracy, robustness, and resistance to adversarial attacks.
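Below is a hedged sketch of a random-forest-style GNN ensemble in the spirit of the title: several base GNNs, each seeing a different random subset of the input features, combined by soft voting. The base architecture and the sampling scheme are illustrative assumptions, not the paper's exact method.

```python
# Hedged sketch: soft-voting ensemble of GNNs over random feature subsets.
import torch
import torch.nn as nn

class TinyGCN(nn.Module):
    def __init__(self, d_in, n_classes):
        super().__init__()
        self.lin = nn.Linear(d_in, n_classes)

    def forward(self, x, adj_norm):
        return self.lin(adj_norm @ x)   # one propagation step, then classify

class GNNEnsemble(nn.Module):
    def __init__(self, d_in, n_classes, n_models=5, keep=0.7):
        super().__init__()
        self.models = nn.ModuleList(TinyGCN(d_in, n_classes)
                                    for _ in range(n_models))
        # Each base model sees a different random subset of input features
        self.masks = [(torch.rand(d_in) < keep).float() for _ in range(n_models)]

    def forward(self, x, adj_norm):
        probs = [m(x * mask, adj_norm).softmax(-1)
                 for m, mask in zip(self.models, self.masks)]
        return torch.stack(probs).mean(dim=0)   # soft majority vote
```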
arXiv Detail & Related papers (2023-03-20T18:24:01Z)
- Graph Neural Networks are Inherently Good Generalizers: Insights by Bridging GNNs and MLPs [71.93227401463199]
This paper traces the major source of GNNs' performance gain to their intrinsic capability, introducing an intermediate model class dubbed P(ropagational)MLP.
We observe that PMLPs consistently perform on par with (or even exceed) their GNN counterparts, while being much more efficient in training.
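A minimal sketch of the PMLP idea as described: the model is trained as a plain MLP on node features, and parameter-free message passing is inserted only at inference time. The propagation operator and its placement between layers are illustrative assumptions.

```python
# Hedged sketch of a P(ropagational)MLP: an MLP at training time,
# a message-passing network at inference time, with shared weights.
import torch
import torch.nn as nn

class PMLP(nn.Module):
    def __init__(self, d_in, d_hidden, d_out):
        super().__init__()
        self.lin1 = nn.Linear(d_in, d_hidden)
        self.lin2 = nn.Linear(d_hidden, d_out)

    def forward(self, x, adj_norm=None):
        # Training: call with adj_norm=None, i.e. an ordinary MLP.
        # Inference: pass a normalized adjacency matrix to propagate features.
        h = self.lin1(x)
        if adj_norm is not None:
            h = adj_norm @ h            # parameter-free message passing
        h = torch.relu(h)
        h = self.lin2(h)
        if adj_norm is not None:
            h = adj_norm @ h
        return h
```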
arXiv Detail & Related papers (2022-12-18T08:17:32Z)
- Optimization of Graph Neural Networks: Implicit Acceleration by Skip Connections and More Depth [57.10183643449905]
Graph Neural Networks (GNNs) have been studied through the lens of expressive power and generalization.
We study the training dynamics of GNNs, focusing on the implicit acceleration provided by skip connections and greater depth.
Our results provide the first theoretical support for the success of GNNs.
arXiv Detail & Related papers (2021-05-10T17:59:01Z)
- Expressive Power of Invariant and Equivariant Graph Neural Networks [10.419350129060598]
We show that Folklore Graph Neural Networks (FGNN) are the most expressive architectures proposed so far for a given tensor order.
FGNNs are able to learn how to solve the problem, leading to much better average performance than existing algorithms.
arXiv Detail & Related papers (2020-06-28T16:35:45Z)
- Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case [93.37576644429578]
Graph neural networks (GNNs) have made great progress recently on learning from graph-structured data in practice.
We provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
arXiv Detail & Related papers (2020-06-25T00:45:52Z)
- Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks [60.22494363676747]
It is known that current graph neural networks (GNNs) are difficult to make deep due to the problem known as over-smoothing.
Multi-scale GNNs are a promising approach for mitigating the over-smoothing problem.
We derive the optimization and generalization guarantees of transductive learning algorithms that include multi-scale GNNs.
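For intuition, here is a hedged sketch of one common multi-scale construction (a jumping-knowledge-style readout): representations from every propagation depth are kept and combined, so shallow, not-yet-over-smoothed features survive into the prediction. The details below are illustrative, not the paper's exact construction.

```python
# Hedged sketch: multi-scale GNN readout combining all propagation depths.
import torch
import torch.nn as nn

class MultiScaleGNN(nn.Module):
    def __init__(self, d, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(d, d) for _ in range(n_layers))
        self.readout = nn.Linear(d * (n_layers + 1), d)

    def forward(self, x, adj_norm):
        scales = [x]                    # depth-0 representation
        h = x
        for layer in self.layers:
            h = torch.relu(layer(adj_norm @ h))
            scales.append(h)            # keep every intermediate scale
        # Concatenating all scales preserves shallow, non-smoothed features
        return self.readout(torch.cat(scales, dim=-1))
```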
arXiv Detail & Related papers (2020-06-15T17:06:17Z)
- Efficient Probabilistic Logic Reasoning with Graph Neural Networks [63.099999467118245]
Markov Logic Networks (MLNs) can be used to address many knowledge graph problems.
Inference in MLNs is computationally intensive, making industrial-scale application of MLNs very difficult.
We propose a graph neural network (GNN) variant, named ExpressGNN, which strikes a nice balance between the representation power and the simplicity of the model.
arXiv Detail & Related papers (2020-01-29T23:34:36Z)