Stability of Graph Convolutional Neural Networks to Stochastic
Perturbations
- URL: http://arxiv.org/abs/2106.10526v1
- Date: Sat, 19 Jun 2021 16:25:28 GMT
- Title: Stability of Graph Convolutional Neural Networks to Stochastic
Perturbations
- Authors: Zhan Gao, Elvin Isufi and Alejandro Ribeiro
- Abstract summary: Graph convolutional neural networks (GCNNs) are nonlinear processing tools to learn representations from network data.
Current analysis considers deterministic perturbations but fails to provide relevant insights when topological changes are random.
This paper investigates the stability of GCNNs to stochastic graph perturbations induced by link losses.
- Score: 122.12962842842349
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph convolutional neural networks (GCNNs) are nonlinear processing tools to
learn representations from network data. A key property of GCNNs is their
stability to graph perturbations. Current analysis considers deterministic
perturbations but fails to provide relevant insights when topological changes
are random. This paper investigates the stability of GCNNs to stochastic graph
perturbations induced by link losses. In particular, it proves the expected
output difference between the GCNN over random perturbed graphs and the GCNN
over the nominal graph is upper bounded by a factor that is linear in the link
loss probability. We perform the stability analysis in the graph spectral
domain such that the result holds uniformly for any graph. This result also
shows the role of the nonlinearity and the architecture width and depth, and
allows identifying handles to improve the GCNN robustness. Numerical simulations
on source localization and robot swarm control corroborate our theoretical
findings.
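The paper's central claim can be probed numerically: run a graph convolutional layer on a nominal graph and on many random realizations where each link is dropped with probability p, and check that the expected output difference grows roughly linearly in p. The sketch below is an illustrative experiment, not the paper's own code; the graph model, filter taps, and sizes are assumptions chosen for clarity.

```python
# Empirical check of the claimed stability behavior: the expected output
# difference of a GCNN layer under random link losses should grow roughly
# linearly with the link-loss probability p.
# Graph model, filter taps, and sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def graph_filter(S, x, h):
    """Polynomial graph filter sum_k h[k] * S^k x (one GCNN layer pre-ReLU)."""
    y = np.zeros_like(x)
    Sk = np.eye(S.shape[0])
    for hk in h:
        y += hk * (Sk @ x)
        Sk = Sk @ S
    return y

n = 40
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.triu(A, 1)
A = A + A.T                        # symmetric nominal adjacency (shift operator)
x = rng.standard_normal(n)         # input graph signal
h = [1.0, 0.5, 0.25]               # filter taps (assumed)

y_nom = np.maximum(graph_filter(A, x, h), 0)   # ReLU nonlinearity

probs = [0.05, 0.10, 0.15, 0.20]
diffs = []
for p in probs:
    trials = []
    for _ in range(200):
        keep = np.triu(rng.random((n, n)) >= p, 1)  # drop each link w.p. p
        keep = keep + keep.T
        Ap = A * keep
        y_p = np.maximum(graph_filter(Ap, x, h), 0)
        trials.append(np.linalg.norm(y_p - y_nom))
    diffs.append(float(np.mean(trials)))

# The mean deviation should increase with p; near-linear growth in p is the
# behavior the paper's upper bound predicts.
print(dict(zip(probs, np.round(diffs, 3))))
```

Increasing the number of trials tightens the Monte Carlo estimate of the expected difference; sweeping more values of p makes the (near-)linear trend easier to see.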
Related papers
- Learning Stable Graph Neural Networks via Spectral Regularization [18.32587282139282]
Stability of graph neural networks (GNNs) characterizes how GNNs react to graph perturbations and provides guarantees for architecture performance in noisy scenarios.
This paper develops a self-regularized graph neural network (SR-GNN) that improves the architecture stability by regularizing the filter frequency responses in the graph spectral domain.
arXiv Detail & Related papers (2022-11-13T17:27:21Z)
- Stable and Transferable Hyper-Graph Neural Networks [95.07035704188984]
We introduce an architecture for processing signals supported on hypergraphs via graph neural networks (GNNs).
We provide a framework for bounding the stability and transferability error of GNNs across arbitrary graphs via spectral similarity.
arXiv Detail & Related papers (2022-11-11T23:44:20Z)
- Stability of Aggregation Graph Neural Networks [153.70485149740608]
We study the stability properties of aggregation graph neural networks (Agg-GNNs) considering perturbations of the underlying graph.
We prove that the stability bounds are defined by the properties of the filters in the first layer of the CNN that acts on each node.
We also conclude that in Agg-GNNs the selectivity of the mapping operators is tied to the properties of the filters only in the first layer of the CNN stage.
arXiv Detail & Related papers (2022-07-08T03:54:52Z)
- Graph Convolutional Neural Networks Sensitivity under Probabilistic Error Model [24.215504503548864]
This paper proposes an analysis framework to investigate the sensitivity of GCNNs to probabilistic graph perturbations.
Our study establishes tight expected GSO error bounds, which are explicitly linked to the error model parameters, and reveals a linear relationship between GSO perturbations and the resulting output differences.
Experiments validate our theoretical derivations and the effectiveness of our approach.
arXiv Detail & Related papers (2022-03-15T12:40:10Z)
- Training Stable Graph Neural Networks Through Constrained Learning [116.03137405192356]
Graph Neural Networks (GNNs) rely on graph convolutions to learn features from network data.
GNNs are stable to different types of perturbations of the underlying graph, a property that they inherit from graph filters.
We propose a novel constrained learning approach by imposing a constraint on the stability condition of the GNN within a perturbation of choice.
arXiv Detail & Related papers (2021-10-07T15:54:42Z)
- Graph and graphon neural network stability [122.06927400759021]
Graph neural networks (GNNs) are learning architectures that rely on knowledge of the graph structure to generate meaningful representations of network data.
We analyze GNN stability using kernel objects called graphons.
arXiv Detail & Related papers (2020-10-23T16:55:56Z)
- Stochastic Graph Neural Networks [123.39024384275054]
Graph neural networks (GNNs) model nonlinear representations in graph data with applications in distributed agent coordination, control, and planning.
Current GNN architectures assume ideal scenarios and ignore link fluctuations that occur due to environment, human factors, or external attacks.
In these situations, the GNN fails to address its distributed task if the topological randomness is not considered accordingly.
arXiv Detail & Related papers (2020-06-04T08:00:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.