Deep Constraint-based Propagation in Graph Neural Networks
- URL: http://arxiv.org/abs/2005.02392v6
- Date: Wed, 1 Sep 2021 14:00:18 GMT
- Title: Deep Constraint-based Propagation in Graph Neural Networks
- Authors: Matteo Tiezzi, Giuseppe Marra, Stefano Melacci and Marco Maggini
- Abstract summary: We propose a novel approach to learning in Graph Neural Networks (GNNs) based on constrained optimization in the Lagrangian framework.
Our computational structure searches for saddle points of the Lagrangian in the adjoint space composed of weights, node state variables, and Lagrange multipliers.
An experimental analysis shows that the proposed approach compares favourably with popular models on several benchmarks.
- Score: 15.27048776159285
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The popularity of deep learning techniques renewed the interest in neural
architectures able to process complex structures that can be represented using
graphs, inspired by Graph Neural Networks (GNNs). We focus our attention on the
originally proposed GNN model of Scarselli et al. 2009, which encodes the state
of the nodes of the graph by means of an iterative diffusion procedure that,
during the learning stage, must be computed at every epoch, until the fixed
point of a learnable state transition function is reached, propagating the
information among the neighbouring nodes. We propose a novel approach to
learning in GNNs, based on constrained optimization in the Lagrangian
framework. Learning both the transition function and the node states is the
outcome of a joint process, in which the state convergence procedure is
implicitly expressed by a constraint satisfaction mechanism, avoiding iterative
epoch-wise procedures and network unfolding. Our computational structure
searches for saddle points of the Lagrangian in the adjoint space composed of
weights, node state variables, and Lagrange multipliers. This process is
further enhanced by multiple layers of constraints that accelerate the
diffusion process. An experimental analysis shows that the proposed approach
compares favourably with popular models on several benchmarks.
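As a rough illustration of the formulation sketched in the abstract, the learning problem can be written as a constrained optimization over weights and node states. The notation below (the transition function f_w, node label l_v, neighbourhood ne[v], and residual function G) is assumed here for exposition and is not taken verbatim from the paper:

```latex
% Sketch of the constrained learning problem (illustrative notation):
% node states must be fixed points of the learnable transition function f_w.
\[
  \min_{w,\,x}\; L\big(y, \hat{y}(x, w)\big)
  \quad \text{s.t.} \quad
  x_v = f_w\big(x_{\mathrm{ne}[v]}, l_v\big) \;\; \forall v \in V
\]
% Lagrangian whose saddle points (minimum over w and x, maximum over the
% multipliers \lambda_v) drive learning:
\[
  \mathcal{L}(w, x, \lambda)
  = L\big(y, \hat{y}(x, w)\big)
  + \sum_{v \in V} \lambda_v \, G\big(x_v - f_w(x_{\mathrm{ne}[v]}, l_v)\big)
\]
```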
Related papers
- Sparse Decomposition of Graph Neural Networks [20.768412002413843]
We propose an approach to reduce the number of nodes that are included during aggregation.
We achieve this through a sparse decomposition, learning to approximate node representations using a weighted sum of linearly transformed features.
We demonstrate via extensive experiments that our method outperforms other baselines designed for inference speedup.
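Read literally, the summary suggests something like the following minimal sketch, where a node's aggregated representation is approximated by a weighted sum of linearly transformed features over a small retained node set (all names, the node-selection rule, and the mixing weights are assumptions for illustration):

```python
# Hypothetical sketch: approximate a node's aggregated representation with a
# weighted sum of linearly transformed features over a few retained nodes.
import torch

def sparse_decomposed_rep(X, W, idx, alpha):
    # X: (n, d) node features; W: (d, k) linear map (assumed learnable)
    # idx: (m,) indices of the few nodes kept for aggregation
    # alpha: (m,) learned mixing weights
    return (alpha.unsqueeze(1) * (X[idx] @ W)).sum(dim=0)

X = torch.randn(100, 16)
W = torch.randn(16, 8)
idx = torch.tensor([3, 17, 42])              # only 3 of 100 nodes retained
alpha = torch.tensor([0.5, 0.3, 0.2])
h = sparse_decomposed_rep(X, W, idx, alpha)  # (8,) approximate representation
```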
arXiv Detail & Related papers (2024-10-25T17:52:16Z)
- Spatiotemporal Learning on Cell-embedded Graphs [6.8090864965073274]
We introduce a learnable cell attribution to the node-edge message passing process, which better captures the spatial dependency of regional features.
Experiments on various PDE systems and one real-world dataset demonstrate that the proposed CeGNN achieves superior performance compared with baseline models.
arXiv Detail & Related papers (2024-09-26T16:22:08Z)
- Distance Recomputator and Topology Reconstructor for Graph Neural Networks [22.210886585639063]
We introduce the Distance Recomputator and Topology Reconstructor methodologies, aimed at enhancing Graph Neural Networks (GNNs).
The Distance Recomputator dynamically recalibrates node distances using a dynamic encoding scheme, thereby improving the accuracy and adaptability of node representations.
The Topology Reconstructor adjusts local graph structures based on computed "similarity distances," optimizing network configurations for improved learning outcomes.
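One plausible reading of adjusting local structure from "similarity distances" is edge selection by similarity thresholding; the sketch below illustrates that reading only (the cosine-similarity threshold rule is an assumption, not the paper's procedure):

```python
# Hypothetical sketch of topology reconstruction: rebuild edges by
# thresholding pairwise cosine similarity of node representations.
import torch

def reconstruct_topology(H, tau=0.5):
    Hn = torch.nn.functional.normalize(H, dim=1)
    S = Hn @ Hn.t()            # pairwise cosine similarity
    A = (S >= tau).float()     # keep sufficiently similar pairs as edges
    A.fill_diagonal_(0.)       # drop self-loops
    return A
```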
arXiv Detail & Related papers (2024-06-25T05:12:51Z)
- Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce Stochastic UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
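For context on algorithm unrolling in general (not SURF specifically), a minimal sketch: a fixed number of gradient steps whose step sizes are trained end to end.

```python
# Generic algorithm-unrolling sketch: an iterative optimizer truncated to a
# fixed number of steps with learnable per-step sizes. Illustrative only.
import torch

class UnrolledGD(torch.nn.Module):
    def __init__(self, steps=5):
        super().__init__()
        self.step_sizes = torch.nn.Parameter(torch.full((steps,), 0.1))
    def forward(self, w0, grad_fn):
        w = w0
        for eta in self.step_sizes:
            w = w - eta * grad_fn(w)   # one unrolled descent step
        return w

unrolled = UnrolledGD(steps=5)
grad_fn = lambda w: 2.0 * (w - 3.0)    # gradient of (w - 3)^2
w_final = unrolled(torch.zeros(1), grad_fn)   # moves toward the minimum at 3
```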
arXiv Detail & Related papers (2023-05-24T17:26:22Z)
- Graph Neural Network Based Node Deployment for Throughput Enhancement [20.56966053013759]
We propose a novel graph neural network (GNN) method for the network node deployment problem.
We show that an expressive GNN has the capacity to approximate both the function value and the traffic permutation, as theoretical support for the proposed method.
arXiv Detail & Related papers (2022-08-19T08:06:28Z)
- Interpolation-based Correlation Reduction Network for Semi-Supervised Graph Learning [49.94816548023729]
We propose a novel graph contrastive learning method, termed Interpolation-based Correlation Reduction Network (ICRN).
In our method, we improve the discriminative capability of the latent feature by enlarging the margin of decision boundaries.
By combining the two settings, we extract rich supervision information from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning.
arXiv Detail & Related papers (2022-06-06T14:26:34Z)
- Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z)
- Progressive Spatio-Temporal Graph Convolutional Network for Skeleton-Based Human Action Recognition [97.14064057840089]
We propose a method to automatically find a compact and problem-specific network for graph convolutional networks in a progressive manner.
Experimental results on two datasets for skeleton-based human action recognition indicate that the proposed method has competitive or even better classification performance.
arXiv Detail & Related papers (2020-11-11T09:57:49Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
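The summary describes binary node representations learned with binary parameters; a common recipe for training such models is a sign function with a straight-through gradient estimator. The sketch below illustrates that generic recipe only (an assumption here, not necessarily BGN's actual architecture):

```python
# Generic binarized graph layer: sign() for binary weights and states,
# straight-through estimator for gradients. Illustrative, not BGN itself.
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, t):
        ctx.save_for_backward(t)
        return torch.sign(t)                      # values in {-1, 0, +1}
    @staticmethod
    def backward(ctx, grad_out):
        (t,) = ctx.saved_tensors
        return grad_out * (t.abs() <= 1).float()  # clipped straight-through

class BinaryGraphLayer(torch.nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.weight = torch.nn.Parameter(0.1 * torch.randn(d_in, d_out))
    def forward(self, A, H):
        Wb = BinarizeSTE.apply(self.weight)       # binary parameters
        return BinarizeSTE.apply(A @ H @ Wb)      # binary node representations

layer = BinaryGraphLayer(16, 8)
out = layer(torch.eye(5), torch.randn(5, 16))     # (5, 8) binary-valued output
```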
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
- Local Propagation in Constraint-based Neural Network [77.37829055999238]
We study a constraint-based representation of neural network architectures.
We investigate a simple optimization procedure that is well suited to fulfil the so-called architectural constraints.
arXiv Detail & Related papers (2020-02-18T16:47:38Z)
- A Lagrangian Approach to Information Propagation in Graph Neural Networks [21.077268852378385]
In this paper, we propose a novel approach to the state computation and the learning algorithm for Graph Neural Network (GNN) models.
The state convergence procedure is implicitly expressed by the constraint satisfaction mechanism and does not require a separate iterative phase for each epoch of the learning procedure.
In fact, the computational structure is based on the search for saddle points of the Lagrangian in the adjoint space composed of weights, neural outputs (node states) and Lagrange multipliers.
arXiv Detail & Related papers (2020-02-18T16:13:24Z)
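Both the paper above and its conference version describe learning as a search for saddle points of the Lagrangian over weights, node states, and multipliers. The sketch below is a minimal PyTorch illustration of that idea under assumed details (toy graph, sum aggregation, squared-residual constraint function G, plain gradient descent-ascent); it is not the authors' implementation.

```python
# Minimal gradient descent-ascent sketch of Lagrangian-based GNN learning.
# Assumptions (not from the papers): toy 4-node graph, sum aggregation,
# squared-residual constraint G, plain SGD for both players.
import torch

torch.manual_seed(0)

A = torch.tensor([[0., 1., 1., 0.],
                  [1., 0., 0., 1.],
                  [1., 0., 0., 1.],
                  [0., 1., 1., 0.]])       # adjacency of a toy graph
feats = torch.randn(4, 3)                  # node attributes l_v
targets = torch.tensor([0., 1., 1., 0.])   # toy binary node labels

d = 8                                      # state dimension
f = torch.nn.Sequential(torch.nn.Linear(d + 3, d), torch.nn.Tanh())  # f_w
g = torch.nn.Linear(d, 1)                  # readout producing node outputs
x = torch.zeros(4, d, requires_grad=True)  # free node-state variables
lam = torch.zeros(4, requires_grad=True)   # one Lagrange multiplier per node

opt_min = torch.optim.SGD(list(f.parameters()) + list(g.parameters()) + [x],
                          lr=0.05)
opt_max = torch.optim.SGD([lam], lr=0.05)

for step in range(500):
    agg = A @ x                             # aggregate neighbour states
    fx = f(torch.cat([agg, feats], dim=1))  # transition function output
    residual = ((x - fx) ** 2).sum(dim=1)   # constraint residual, G = ||.||^2
    loss = torch.nn.functional.binary_cross_entropy_with_logits(
        g(x).squeeze(1), targets)
    lagrangian = loss + (lam * residual).sum()

    opt_min.zero_grad()
    opt_max.zero_grad()
    lagrangian.backward()
    opt_min.step()                          # descend on weights and node states
    lam.grad.neg_()                         # flip sign so the next step ascends
    opt_max.step()                          # ascend on the multipliers
```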
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.