Stochastic Graph Neural Networks
- URL: http://arxiv.org/abs/2006.02684v2
- Date: Sat, 19 Jun 2021 15:45:24 GMT
- Title: Stochastic Graph Neural Networks
- Authors: Zhan Gao, Elvin Isufi and Alejandro Ribeiro
- Abstract summary: Graph neural networks (GNNs) model nonlinear representations in graph data with applications in distributed agent coordination, control, and planning.
Current GNN architectures assume ideal scenarios and ignore link fluctuations that occur due to environment, human factors, or external attacks.
In these situations, the GNN fails at its distributed task if the topological randomness is not properly accounted for.
- Score: 123.39024384275054
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) model nonlinear representations in graph data
with applications in distributed agent coordination, control, and planning
among others. Current GNN architectures assume ideal scenarios and ignore link
fluctuations that occur due to environment, human factors, or external attacks.
In these situations, the GNN fails at its distributed task if the
topological randomness is not properly accounted for. To overcome this issue,
we put forth the stochastic graph neural network (SGNN) model: a GNN where the
distributed graph convolution module accounts for the random network changes.
Since stochasticity brings in a new learning paradigm, we conduct a statistical
analysis on the SGNN output variance to identify conditions the learned filters
should satisfy for achieving robust transference to perturbed scenarios,
ultimately revealing the explicit impact of random link losses. We further
develop a stochastic gradient descent (SGD) based learning process for the SGNN
and derive conditions on the learning rate under which this learning process
converges to a stationary point. Numerical results corroborate our theoretical
findings and compare the robust transference of the SGNN with that of a
conventional GNN that ignores graph perturbations during learning.
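To make the mechanism concrete, below is a minimal sketch of such a stochastic graph filter under a random edge sampling model, where every link survives each diffusion step independently with probability p. The helper names, shapes, and the toy graph are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch of a stochastic graph filter (assumed random edge sampling:
# every link survives each diffusion step independently with probability p).
import numpy as np

rng = np.random.default_rng(0)

def random_realization(S, p):
    """Return a realization of shift operator S with links dropped w.p. 1 - p."""
    keep = np.triu(rng.random(S.shape) < p, k=1)  # sample each undirected edge once
    return S * (keep + keep.T)

def stochastic_graph_filter(S, x, h, p):
    """y = sum_k h[k] * S_k ... S_1 x, drawing a fresh random graph per hop."""
    y = h[0] * x
    z = x
    for hk in h[1:]:
        z = random_realization(S, p) @ z  # one diffusion over a random topology
        y = y + hk * z
    return y

# Toy usage: 5-node cycle graph, 3-tap filter, 90% link reliability.
S = np.roll(np.eye(5), 1, axis=1) + np.roll(np.eye(5), -1, axis=1)
x = rng.standard_normal(5)
print(stochastic_graph_filter(S, x, h=np.array([1.0, 0.5, 0.25]), p=0.9))
```

In the paper, the filter taps would be learned with SGD while drawing fresh random realizations at each iteration; the derived step-size conditions guarantee that this process converges to a stationary point.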
Related papers
- Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z)
- Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
The generalization ability of existing GNNs degrades when there are distribution shifts between training and testing graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
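As a rough, generic illustration of decorrelation by sample reweighting (an assumption-laden sketch, not the paper's algorithm), one can penalize the off-diagonal entries of a weight-dependent feature covariance so that samples inducing spurious correlations are downweighted:

```python
# Generic decorrelation-by-reweighting sketch: the loss penalizes off-diagonal
# weighted covariance; minimizing it over w downweights samples that induce
# spurious feature correlations. Illustrative only.
import numpy as np

def decorrelation_loss(w, X):
    """Sum of squared off-diagonal entries of the w-weighted feature covariance."""
    w = w / w.sum()                   # normalize weights to a distribution
    Xc = X - w @ X                    # center features by their weighted means
    C = (Xc * w[:, None]).T @ Xc      # weighted covariance matrix
    off_diag = C - np.diag(np.diag(C))
    return np.sum(off_diag ** 2)

# Toy usage: 100 samples, 4 features, uniform initial weights.
X = np.random.default_rng(2).standard_normal((100, 4))
print(decorrelation_loss(np.ones(100), X))
```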
arXiv Detail & Related papers (2023-12-19T12:25:10Z)
- Training Stable Graph Neural Networks Through Constrained Learning [116.03137405192356]
Graph Neural Networks (GNNs) rely on graph convolutions to learn features from network data.
GNNs are stable to different types of perturbations of the underlying graph, a property that they inherit from graph filters.
We propose a novel constrained learning approach by imposing a constraint on the stability condition of the GNN within a perturbation of choice.
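A minimal primal-dual sketch conveys how such a stability constraint can be enforced during training; the `gnn` and `perturb` placeholders and the mean-squared-error objective are assumptions, not the paper's exact formulation.

```python
# Hedged sketch of one primal-dual step enforcing a stability constraint
# ||gnn(S, x) - gnn(perturb(S), x)|| <= eps via a nonnegative dual variable.
import torch
import torch.nn.functional as F

def primal_dual_step(gnn, optimizer, S, x, y, perturb, eps, lam, dual_lr=0.01):
    out = gnn(S, x)
    loss = F.mse_loss(out, y)                        # task objective
    slack = (out - gnn(perturb(S), x)).norm() - eps  # stability constraint slack
    (loss + lam * slack).backward()                  # Lagrangian, primal gradient
    optimizer.step()
    optimizer.zero_grad()
    lam = max(0.0, lam + dual_lr * float(slack))     # dual ascent, projected to >= 0
    return lam
```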
arXiv Detail & Related papers (2021-10-07T15:54:42Z)
- Stability of Graph Convolutional Neural Networks to Stochastic Perturbations [122.12962842842349]
Graph convolutional neural networks (GCNNs) are nonlinear processing tools to learn representations from network data.
Current analysis considers deterministic perturbations but fails to provide relevant insights when topological changes are random.
This paper investigates the stability of GCNNs to stochastic graph perturbations induced by link losses.
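A small Monte Carlo experiment captures what such a stability analysis quantifies: the expected output deviation of a fixed graph filter under random link losses. The filter taps and keep probability below are illustrative assumptions.

```python
# Hedged sketch: estimate E[||Phi(S_random) x - Phi(S) x||^2] by sampling
# random link-loss realizations of the nominal shift operator S.
import numpy as np

rng = np.random.default_rng(1)

def graph_filter(S, x, h):
    y, z = h[0] * x, x
    for hk in h[1:]:
        z = S @ z
        y = y + hk * z
    return y

def expected_sq_deviation(S, x, h, p_keep, trials=1000):
    y_nominal = graph_filter(S, x, h)
    total = 0.0
    for _ in range(trials):
        keep = np.triu(rng.random(S.shape) < p_keep, k=1)  # random link losses
        total += np.sum((graph_filter(S * (keep + keep.T), x, h) - y_nominal) ** 2)
    return total / trials
```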
arXiv Detail & Related papers (2021-06-19T16:25:28Z)
- A Biased Graph Neural Network Sampler with Near-Optimal Regret [57.70126763759996]
Graph neural networks (GNN) have emerged as a vehicle for applying deep network architectures to graph and relational data.
In this paper, we build upon existing work and treat GNN neighbor sampling as a multi-armed bandit problem.
We introduce a newly designed reward function that adds a controlled bias to reduce variance and avoid unstable, possibly unbounded payouts.
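A hedged sketch of the bandit view: each neighbor is an arm, rewards are clipped so payouts stay bounded (a simple stand-in for the bias that tames variance), and a UCB rule picks which neighbors to aggregate. The reward design here is illustrative, not the paper's.

```python
# Illustrative UCB-style neighbor sampler: arms = neighbors, rewards clipped
# to [-clip, clip] to keep payouts bounded. Not the paper's reward function.
import math

class NeighborBandit:
    def __init__(self, neighbors, clip=1.0):
        self.neighbors = list(neighbors)
        self.clip = clip
        self.counts = {v: 0 for v in self.neighbors}
        self.values = {v: 0.0 for v in self.neighbors}
        self.t = 0

    def select(self):
        """Pick the neighbor with the highest upper confidence bound."""
        self.t += 1
        for v in self.neighbors:              # play each arm once first
            if self.counts[v] == 0:
                return v
        return max(self.neighbors, key=lambda v: self.values[v]
                   + math.sqrt(2 * math.log(self.t) / self.counts[v]))

    def update(self, v, reward):
        """Clip the reward, then update the arm's running mean."""
        reward = max(-self.clip, min(self.clip, reward))
        self.counts[v] += 1
        self.values[v] += (reward - self.values[v]) / self.counts[v]
```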
arXiv Detail & Related papers (2021-03-01T15:55:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.