On Representing Mixed-Integer Linear Programs by Graph Neural Networks
- URL: http://arxiv.org/abs/2210.10759v2
- Date: Thu, 25 May 2023 20:01:43 GMT
- Title: On Representing Mixed-Integer Linear Programs by Graph Neural Networks
- Authors: Ziang Chen, Jialin Liu, Xinshang Wang, Jianfeng Lu, Wotao Yin
- Abstract summary: Mixed-integer linear programming (MILP) is NP-hard in general, but practical MILP solving has seen a roughly 100-fold speedup over the past twenty years.
This work discovers a fundamental limitation: there exist feasible and infeasible MILPs that all GNNs treat identically.
We show that, by restricting the MILPs to unfoldable ones or by adding random features, there exist GNNs that can reliably predict MILP feasibility, optimal objective values, and optimal solutions up to a prescribed precision.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While mixed-integer linear programming (MILP) is NP-hard in general,
practical MILP solving has seen a roughly 100-fold speedup in the past twenty years.
Still, many classes of MILPs quickly become unsolvable as their sizes increase,
motivating researchers to seek new acceleration techniques for MILPs. With deep
learning, strong empirical results have been obtained, many of them by applying
graph neural networks (GNNs) to make decisions at various stages of the MILP
solution process. This work discovers a fundamental limitation: there exist
feasible and infeasible MILPs that all GNNs treat identically, indicating that
GNNs lack the power to express general MILPs. We then show that, by restricting
the MILPs to unfoldable ones or by adding random features, there exist GNNs that
can reliably predict MILP feasibility, optimal objective values, and optimal
solutions up to a prescribed precision. We conducted small-scale numerical
experiments to validate our theoretical findings.
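The GNN approaches discussed in the abstract typically encode an MILP as a variable-constraint bipartite graph. A minimal sketch of that encoding, including the random-feature augmentation the abstract mentions, is shown below; the example data and the `milp_to_bipartite` helper are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical MILP instance: min c @ x  s.t.  A @ x <= b,
# with x[j] required to be integer for j in int_idx.
c = np.array([1.0, 2.0, 3.0])
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([4.0, 5.0])
int_idx = {0, 2}

def milp_to_bipartite(c, A, b, int_idx, random_feat=False, rng=None):
    """Encode an MILP as a variable-constraint bipartite graph:
    one node per variable, one node per constraint, and an edge
    (i, j) with weight A[i, j] for every nonzero coefficient."""
    m, n = A.shape
    # Variable-node features: objective coefficient and integrality flag.
    integrality = [1.0 if j in int_idx else 0.0 for j in range(n)]
    var_feats = np.stack([c, integrality], axis=1)
    # Constraint-node features: right-hand side (all rows are "<=" here).
    con_feats = b.reshape(-1, 1)
    if random_feat:
        # Random features break symmetries that plain message passing
        # cannot distinguish (one of the remedies the abstract describes).
        rng = rng or np.random.default_rng(0)
        var_feats = np.hstack([var_feats, rng.standard_normal((n, 1))])
    edges = [(i, j, A[i, j]) for i in range(m) for j in range(n)
             if A[i, j] != 0]
    return var_feats, con_feats, edges
```

A GNN then passes messages along `edges` between the two node sets; two MILPs that yield isomorphic graphs with identical features are indistinguishable to it, which is the symmetry behind the paper's negative result.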
Related papers
- FMIP: Joint Continuous-Integer Flow For Mixed-Integer Linear Programming [52.52020895303244]
Mixed-Integer Linear Programming (MILP) is a foundational tool for complex decision-making problems. We propose Joint Continuous-Integer Flow for Mixed-Integer Linear Programming (FMIP), the first generative framework that models the joint distribution of both integer and continuous variables for MILP solutions. FMIP is fully compatible with arbitrary backbone networks and various downstream solvers, making it well-suited for a broad range of real-world MILP applications.
arXiv Detail & Related papers (2025-07-31T10:03:30Z) - MILP-SAT-GNN: Yet Another Neural SAT Solver [0.0]
We propose a novel method that enables Graph Neural Networks (GNNs) to solve SAT problems by leveraging a technique developed for applying GNNs to Mixed-Integer Linear Programming (MILP). Specifically, k-CNF formulae are mapped into MILP problems, which are then encoded as weighted bipartite graphs and fed into a GNN for training and testing.
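The k-CNF-to-MILP mapping this entry describes follows a standard construction: each clause becomes a linear inequality over 0/1 variables, with a negated literal contributing (1 - x). A minimal sketch, assuming the usual signed-integer clause encoding (the `cnf_to_milp` helper is a hypothetical name):

```python
def cnf_to_milp(clauses, n_vars):
    """Map a CNF formula to 0/1 MILP constraints.
    `clauses` is a list of clauses; literal k > 0 means x_k,
    k < 0 means NOT x_k (1-indexed). Returns (A, b) such that the
    formula is satisfiable iff some binary x satisfies A @ x >= b."""
    A = [[0.0] * n_vars for _ in clauses]
    b = []
    for i, clause in enumerate(clauses):
        rhs = 1.0  # each clause must have at least one true literal
        for lit in clause:
            j = abs(lit) - 1
            if lit > 0:
                A[i][j] += 1.0
            else:
                A[i][j] -= 1.0
                rhs -= 1.0  # NOT x_j contributes (1 - x_j) to the sum
        b.append(rhs)
    return A, b
```

For the clause (x1 or not-x2 or x3), this yields the constraint x1 - x2 + x3 >= 0, i.e. x1 + (1 - x2) + x3 >= 1. The resulting (A, b) can then be encoded as a weighted bipartite graph in the same way as any other MILP.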
arXiv Detail & Related papers (2025-07-02T15:39:45Z) - GNNMerge: Merging of GNN Models Without Accessing Training Data [12.607714697138428]
Model merging has gained prominence in machine learning as a method to integrate multiple trained models into a single model without accessing the original training data.
While existing approaches have demonstrated success in domains such as computer vision and NLP, their application to Graph Neural Networks (GNNs) remains unexplored.
We propose GNNMerge, which utilizes a task-agnostic node embedding alignment strategy to merge GNNs.
arXiv Detail & Related papers (2025-03-05T11:02:29Z) - Learning Accurate, Efficient, and Interpretable MLPs on Multiplex Graphs via Node-wise Multi-View Ensemble Distillation [29.70212915594676]
Multiplex Graph Neural Networks (MGNNs) have achieved advanced performance in various downstream tasks.
We propose Multiplex Graph-Free Neural Networks (MGFNNs) and MGFNN+ to combine MGNNs' superior performance with MLPs' efficient inference.
arXiv Detail & Related papers (2025-02-09T11:55:14Z) - Scalable Mechanistic Neural Networks for Differential Equations and Machine Learning [52.28945097811129]
We propose an enhanced neural network framework designed for scientific machine learning applications involving long temporal sequences.
We reduce the time and space complexities from cubic and quadratic in the sequence length, respectively, to linear.
Extensive experiments demonstrate that S-MNN matches the original MNN in precision while substantially reducing computational resources.
arXiv Detail & Related papers (2024-10-08T14:27:28Z) - Heuristic Learning with Graph Neural Networks: A Unified Framework for Link Prediction [25.87108956561691]
Link prediction is a fundamental task in graph learning, inherently shaped by the topology of the graph.
We propose a unified matrix formulation to accommodate and generalize various weights.
We also propose the Heuristic Learning Graph Neural Network (HL-GNN) to efficiently implement the formulation.
arXiv Detail & Related papers (2024-06-12T08:05:45Z) - Expressive Power of Graph Neural Networks for (Mixed-Integer) Quadratic Programs [40.99368410911088]
Quadratic programming (QP) is the most widely applied category of problems in nonlinear programming.
Recent studies of applying graph neural networks (GNNs) for QP tasks show that GNNs can capture key characteristics of an optimization instance.
We prove the existence of message-passing GNNs that can reliably represent key properties of quadratic programs.
arXiv Detail & Related papers (2024-06-09T23:57:47Z) - AdaGMLP: AdaBoosting GNN-to-MLP Knowledge Distillation [15.505402580010104]
A new wave of methods, collectively known as GNN-to-MLP Knowledge Distillation, has emerged.
They aim to transfer GNN-learned knowledge to a more efficient student.
These methods face challenges in situations with insufficient training data and incomplete test data.
We propose AdaGMLP, an AdaBoosting GNN-to-MLP Knowledge Distillation framework.
arXiv Detail & Related papers (2024-05-23T08:28:44Z) - Rethinking the Capacity of Graph Neural Networks for Branching Strategy [40.99368410911088]
Graph neural networks (GNNs) have been widely used to predict properties of mixed-integer linear programs (MILPs).
This paper investigates the capacity of GNNs to represent strong branching (SB).
We define a class of "MP-tractable" MILPs for which MP-GNNs can accurately approximate SB scores.
arXiv Detail & Related papers (2024-02-11T04:09:50Z) - MAG-GNN: Reinforcement Learning Boosted Graph Neural Network [68.60884768323739]
A particular line of work proposed subgraph GNNs that use subgraph information to improve GNNs' expressivity and achieved great success.
Such effectivity sacrifices the efficiency of GNNs by enumerating all possible subgraphs.
We propose Magnetic Graph Neural Network (MAG-GNN), a reinforcement learning (RL) boosted GNN, to solve the problem.
arXiv Detail & Related papers (2023-10-29T20:32:21Z) - Graph Neural Networks are Inherently Good Generalizers: Insights by
Bridging GNNs and MLPs [71.93227401463199]
This paper pinpoints the major source of GNNs' performance gain to their intrinsic capability, by introducing an intermediate model class dubbed as P(ropagational)MLP.
We observe that PMLPs consistently perform on par with (or even exceed) their GNN counterparts, while being much more efficient in training.
arXiv Detail & Related papers (2022-12-18T08:17:32Z) - How Neural Networks Extrapolate: From Feedforward to Graph Neural
Networks [80.55378250013496]
We study how neural networks trained by gradient descent extrapolate what they learn outside the support of the training distribution.
Graph Neural Networks (GNNs) have shown some success in more complex tasks.
arXiv Detail & Related papers (2020-09-24T17:48:59Z) - Fast Learning of Graph Neural Networks with Guaranteed Generalizability:
One-hidden-layer Case [93.37576644429578]
Graph neural networks (GNNs) have made great progress recently on learning from graph-structured data in practice.
We provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
arXiv Detail & Related papers (2020-06-25T00:45:52Z) - Optimization and Generalization Analysis of Transduction through
Gradient Boosting and Application to Multi-scale Graph Neural Networks [60.22494363676747]
It is known that the current graph neural networks (GNNs) are difficult to make themselves deep due to the problem known as over-smoothing.
Multi-scale GNNs are a promising approach for mitigating the over-smoothing problem.
We derive the optimization and generalization guarantees of transductive learning algorithms that include multi-scale GNNs.
arXiv Detail & Related papers (2020-06-15T17:06:17Z) - Efficient Probabilistic Logic Reasoning with Graph Neural Networks [63.099999467118245]
Markov Logic Networks (MLNs) can be used to address many knowledge graph problems.
Inference in MLN is computationally intensive, making the industrial-scale application of MLN very difficult.
We propose a graph neural network (GNN) variant, named ExpressGNN, which strikes a nice balance between the representation power and the simplicity of the model.
arXiv Detail & Related papers (2020-01-29T23:34:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.