Enforcing convex constraints in Graph Neural Networks
- URL: http://arxiv.org/abs/2510.11227v1
- Date: Mon, 13 Oct 2025 10:10:46 GMT
- Title: Enforcing convex constraints in Graph Neural Networks
- Authors: Ahmed Rashwan, Keith Briggs, Chris Budd, Lisa Kreusser
- Abstract summary: In this paper, we introduce ProjNet, a Graph Neural Network framework which satisfies input-dependent constraints. We establish a convergence result for CAD and develop a GPU-accelerated implementation capable of handling large-scale inputs efficiently.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many machine learning applications require outputs that satisfy complex, dynamic constraints. This task is particularly challenging in Graph Neural Network models due to the variable output sizes of graph-structured data. In this paper, we introduce ProjNet, a Graph Neural Network framework which satisfies input-dependent constraints. ProjNet combines a sparse vector clipping method with the Component-Averaged Dykstra (CAD) algorithm, an iterative scheme for solving the best-approximation problem. We establish a convergence result for CAD and develop a GPU-accelerated implementation capable of handling large-scale inputs efficiently. To enable end-to-end training, we introduce a surrogate gradient for CAD that is both computationally efficient and better suited for optimization than the exact gradient. We validate ProjNet on four classes of constrained optimization problems: linear programming, two classes of non-convex quadratic programs, and radio transmit power optimization, demonstrating its effectiveness across diverse problem settings.
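For intuition about the CAD building block, here is a minimal NumPy sketch of the classical Dykstra projection scheme for the best-approximation problem. The component-averaged updates, the sparse vector clipping, and the surrogate gradient that make up ProjNet are not reproduced; the constraint sets below are illustrative toys.

```python
import numpy as np

def dykstra(x0, projections, num_iters=1000, tol=1e-9):
    # Classical Dykstra scheme for the best-approximation problem:
    # find the point in the intersection of convex sets closest to x0.
    # `projections` holds one Euclidean projection function per set.
    # (CAD averages per-set corrections and runs on GPU; this sketch
    # shows only the standard cyclic variant.)
    x = x0.astype(float)
    p = [np.zeros_like(x) for _ in projections]  # one correction per set
    for _ in range(num_iters):
        x_prev = x.copy()
        for i, proj in enumerate(projections):
            y = proj(x + p[i])    # project the correction-adjusted point
            p[i] = x + p[i] - y   # update Dykstra's correction term
            x = y
        if np.linalg.norm(x - x_prev) < tol:
            break
    return x

# Toy constraint set: the probability simplex, written as the
# intersection of the nonnegative orthant and the hyperplane sum(x)=1.
proj_nonneg = lambda v: np.maximum(v, 0.0)
proj_plane = lambda v: v - (v.sum() - 1.0) / v.size
x = dykstra(np.array([0.9, -0.3, 0.7]), [proj_nonneg, proj_plane])
print(x, x.sum())  # a nonnegative vector summing to 1
```

Presumably a layer of this kind sits at the network output, so predictions are projected onto the input-dependent feasible set before the loss is evaluated; the abstract's surrogate gradient then stands in for the exact derivative of the iteration, which the authors describe as less efficient and less suited to optimization.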
Related papers
- A Distributed Training Architecture For Combinatorial Optimization [0.0]
We propose a distributed graph neural network (GNN)-based training framework for optimization.
Experiments are conducted on both real large-scale social network datasets and synthetically generated high-complexity graphs.
Our framework outperforms state-of-the-art approaches in both solution quality and computational efficiency.
arXiv Detail & Related papers (2025-11-12T12:22:10Z) - GDSG: Graph Diffusion-based Solution Generator for Optimization Problems in MEC Networks [109.17835015018532]
We present a Graph Diffusion-based Solution Generation (GDSG) method.
This approach is designed to work with suboptimal datasets while converging to the optimal solution with high probability.
We build GDSG as a multi-task diffusion model utilizing a Graph Neural Network (GNN) to acquire the distribution of high-quality solutions.
arXiv Detail & Related papers (2024-12-11T11:13:43Z) - Unifews: You Need Fewer Operations for Efficient Graph Neural Networks [9.66321358222326]
Graph Neural Networks (GNNs) have shown promising performance, but at the cost of resource-intensive operations on graph-scale matrices.
We propose Unifews, a joint sparsification technique to unify graph and weight matrix operations and enhance GNN learning efficiency.
arXiv Detail & Related papers (2024-03-20T03:07:30Z) - Symmetry-preserving graph attention network to solve routing problems at multiple resolutions [1.9304772860080408]
We introduce the first-ever completely equivariant model and training pipeline for solving routing problems.
It is essential to capture the multiscale structure of the input graph.
We propose a multiresolution scheme combined with an Equivariant Graph Attention network (mEGAT) architecture.
arXiv Detail & Related papers (2023-10-24T06:22:20Z) - NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
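As a rough illustration of the sampling primitive named above, here is a standard (non-kernelized) Gumbel-Softmax in PyTorch; NodeFormer's contribution is a kernelized variant that realises all-pair message passing without materialising the full node-by-node score matrix, which this toy does not attempt.

```python
import torch

def gumbel_softmax(logits, tau=1.0):
    # Differentiable approximate sample from a categorical distribution.
    # Standard Gumbel-Softmax (also available as
    # torch.nn.functional.gumbel_softmax); kept explicit here to show
    # the mechanics.
    gumbels = -torch.empty_like(logits).exponential_().log()  # Gumbel(0,1)
    return torch.softmax((logits + gumbels) / tau, dim=-1)

# Toy use: soft, differentiable edge weights from one node to 5 candidates.
logits = torch.randn(5)
weights = gumbel_softmax(logits, tau=0.5)
print(weights, weights.sum())  # near one-hot, sums to 1
```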
arXiv Detail & Related papers (2023-06-14T09:21:15Z) - RESPECT: Reinforcement Learning based Edge Scheduling on Pipelined Coral Edge TPUs [12.952987240366781]
This work presents a reinforcement learning (RL) based scheduling framework, which learns the behaviors of optimal optimization algorithms.
RL generates near-optimal scheduling results with short solving runtime overhead.
Our framework has demonstrated up to $\sim 2.5\times$ real-world on-chip inference runtime speedups over the commercial compiler.
arXiv Detail & Related papers (2023-04-10T17:22:12Z) - Hector: An Efficient Programming and Compilation Framework for Implementing Relational Graph Neural Networks in GPU Architectures [24.841128441671234]
RGNNs are graph neural networks with dedicated structures for modeling the different types of nodes and edges in heterogeneous graphs.
We propose Hector, a novel two-level intermediate representation and its code generator framework, to capture the key properties of RGNN models.
Hector achieves up to 9.9x speed-up in inference and 43.7x speed-up in training compared with the state-of-the-art public systems.
arXiv Detail & Related papers (2023-01-16T06:53:18Z) - Unsupervised Optimal Power Flow Using Graph Neural Networks [172.33624307594158]
We use a graph neural network to learn a nonlinear parametrization between the power demanded and the corresponding allocation.
We show through simulations that the use of GNNs in this unsupervised learning context leads to solutions comparable to standard solvers.
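A hedged toy of the unsupervised recipe described here, with all architecture and physics details as placeholders: the paper uses a GNN over the grid topology and actual power-flow constraints, not the MLP and simple balance penalty below.

```python
import torch

torch.manual_seed(0)
# Toy stand-in: a network maps demand vectors d to allocations p and is
# trained WITHOUT labeled optimal solutions, by minimizing a generation
# cost plus a penalty on constraint violation.
model = torch.nn.Sequential(
    torch.nn.Linear(4, 16), torch.nn.ReLU(), torch.nn.Linear(16, 4))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(200):
    d = torch.rand(32, 4)                   # sampled demand profiles
    p = model(d)                            # proposed power allocations
    cost = (p ** 2).sum(dim=1).mean()       # quadratic generation cost
    balance = p.sum(dim=1) - d.sum(dim=1)   # power-balance residual
    loss = cost + 10.0 * (balance ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```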
arXiv Detail & Related papers (2022-10-17T17:30:09Z) - MultiScale MeshGraphNets [65.26373813797409]
We propose two complementary approaches to improve the framework from MeshGraphNets.
First, we demonstrate that it is possible to learn accurate surrogate dynamics of a high-resolution system on a much coarser mesh.
Second, we introduce a hierarchical approach (MultiScale MeshGraphNets) which passes messages on two different resolutions.
arXiv Detail & Related papers (2022-10-02T20:16:20Z) - Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z) - DOGE-Train: Discrete Optimization on GPU with End-to-end Training [28.795080637690095]
We present a fast, scalable, data-driven approach for solving relaxations of 0-1 integer linear programs.
We use a combination of graph neural networks (GNN) and the Lagrange decomposition based algorithm FastDOG.
arXiv Detail & Related papers (2022-05-23T21:09:41Z) - Learning to solve Minimum Cost Multicuts efficiently using Edge-Weighted Graph Convolutional Neural Networks [13.985534521589257]
Graph convolutional neural networks (GNNs) have proven to be promising in the context of optimization.
We adapt various GNNs, including Graph Convolutional Networks, Signed Graph Convolutional Networks, and Graph Isomorphism Networks.
We provide the first approach towards end-to-end trainable multicuts.
arXiv Detail & Related papers (2022-04-04T10:21:02Z) - Joint inference and input optimization in equilibrium networks [68.63726855991052]
Deep equilibrium models are a class of models that forgo traditional network depth and instead compute the output of a network by finding the fixed point of a single nonlinear layer.
We show that there is a natural synergy between these two settings.
We demonstrate this strategy on various tasks such as training generative models while optimizing over latent codes, training models for inverse problems like denoising and inpainting, adversarial training and gradient based meta-learning.
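To make the fixed-point idea concrete, here is a minimal sketch: plain fixed-point iteration on a toy contraction. Practical deep equilibrium models use root-finding solvers (Broyden, Anderson acceleration) and implicit differentiation for the backward pass, none of which is shown.

```python
import torch

def deq_forward(f, x, z0, max_iters=50, tol=1e-5):
    # Forward pass of a deep equilibrium model: iterate one layer
    # z <- f(z, x) to a fixed point instead of stacking many layers.
    z = z0
    for _ in range(max_iters):
        z_next = f(z, x)
        if torch.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

# Toy layer chosen to be a contraction, so the iteration converges.
W = 0.5 * torch.eye(3)
layer = lambda z, x: torch.tanh(z @ W + x)
z_star = deq_forward(layer, x=torch.randn(3), z0=torch.zeros(3))
print(z_star)
```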
arXiv Detail & Related papers (2021-11-25T19:59:33Z)