Stability of Neural Networks on Manifolds to Relative Perturbations
- URL: http://arxiv.org/abs/2110.04702v1
- Date: Sun, 10 Oct 2021 04:37:19 GMT
- Title: Stability of Neural Networks on Manifolds to Relative Perturbations
- Authors: Zhiyang Wang and Luana Ruiz and Alejandro Ribeiro
- Abstract summary: Graph Neural Networks (GNNs) show impressive performance in many practical scenarios.
Empirically, GNNs scale well to large graphs, which is at odds with the fact that existing stability bounds grow with the number of nodes.
- Score: 118.84154142918214
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Graph Neural Networks (GNNs) show impressive performance in many practical
scenarios, which can be largely attributed to their stability properties.
Empirically, GNNs can scale well on large size graphs, but this is contradicted
by the fact that existing stability bounds grow with the number of nodes.
Graphs with well-defined limits can be seen as samples from manifolds. Hence,
in this paper, we analyze the stability properties of convolutional neural
networks on manifolds to understand the stability of GNNs on large graphs.
Specifically, we focus on stability to relative perturbations of the
Laplace-Beltrami operator. To start, we construct frequency ratio threshold
filters which separate the infinite-dimensional spectrum of the
Laplace-Beltrami operator. We then prove that manifold neural networks composed
of these filters are stable to relative operator perturbations. As a product of
this analysis, we observe that manifold neural networks exhibit a trade-off
between stability and discriminability. Finally, we illustrate our results
empirically in a wireless resource allocation scenario where the
transmitter-receiver pairs are assumed to be sampled from a manifold.
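As a rough illustration of the setup described in the abstract, the following is a minimal, hypothetical sketch (not the authors' code): it approximates the Laplace-Beltrami operator of the unit sphere with a Gaussian-kernel graph Laplacian built from sampled points, applies a simple spectral filter, and measures how the filter output changes under a relative operator perturbation of the form L_pert = L + EL + LE with small ||E||, a model borrowed from the graph-filter stability literature. The paper's frequency ratio threshold filters and its exact perturbation model are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample n points uniformly at random from the unit sphere, a simple manifold.
n = 200
pts = rng.normal(size=(n, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

# Gaussian-kernel graph Laplacian: a standard discrete approximation of the
# Laplace-Beltrami operator on the sampled manifold.
eps = 0.2
d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
W = np.exp(-d2 / (4 * eps))
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W

def spectral_filter(L, signal, h):
    """Apply the spectral response h(lambda) through an eigendecomposition of L."""
    lam, V = np.linalg.eigh(L)
    return V @ (h(lam) * (V.T @ signal))

# A smooth low-pass response. The paper's frequency ratio threshold filters
# additionally control how the response varies across groups of eigenvalues;
# that structure is not reproduced in this toy.
h = lambda lam: 1.0 / (1.0 + lam)

signal = rng.normal(size=n)

# Relative perturbation model L_pert = L + E L + L E with ||E||_2 <= delta,
# an assumption borrowed from the graph-filter stability literature.
delta = 1e-2
E = rng.normal(size=(n, n))
E = (E + E.T) / 2
E *= delta / np.linalg.norm(E, 2)
L_pert = L + E @ L + L @ E

out = spectral_filter(L, signal, h)
out_pert = spectral_filter(L_pert, signal, h)
print("relative output change:",
      np.linalg.norm(out - out_pert) / np.linalg.norm(out))
```

Under these assumptions the script should print a small relative output change, commensurate with the perturbation size ||E||, which is the qualitative behavior that stability bounds of this kind formalize.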
Related papers
- SAGMAN: Stability Analysis of Graph Neural Networks on the Manifolds [11.839398175390548]
Modern graph neural networks (GNNs) can be sensitive to changes in the input graph structure and node features.
We introduce a spectral framework known as SAGMAN for examining the stability of GNNs.
arXiv Detail & Related papers (2024-02-13T18:33:45Z)
- On the Trade-Off between Stability and Representational Capacity in Graph Neural Networks [22.751509906413943]
We study the stability of EdgeNet: a general GNN framework that unifies more than twenty solutions.
By studying the effect of different EdgeNet categories on the stability, we show that GNNs with fewer degrees of freedom in their parameter space, linked to a lower representational capacity, are more stable.
arXiv Detail & Related papers (2023-12-04T22:07:17Z)
- Stable and Transferable Hyper-Graph Neural Networks [95.07035704188984]
We introduce an architecture for processing signals supported on hypergraphs via graph neural networks (GNNs).
We provide a framework for bounding the stability and transferability error of GNNs across arbitrary graphs via spectral similarity.
arXiv Detail & Related papers (2022-11-11T23:44:20Z)
- Stability of Aggregation Graph Neural Networks [153.70485149740608]
We study the stability properties of aggregation graph neural networks (Agg-GNNs) considering perturbations of the underlying graph.
We prove that the stability bounds are defined by the properties of the filters in the first layer of the CNN that acts on each node.
We also conclude that in Agg-GNNs the selectivity of the mapping operators is tied to the properties of the filters only in the first layer of the CNN stage.
arXiv Detail & Related papers (2022-07-08T03:54:52Z)
- Training Stable Graph Neural Networks Through Constrained Learning [116.03137405192356]
Graph Neural Networks (GNNs) rely on graph convolutions to learn features from network data.
GNNs are stable to different types of perturbations of the underlying graph, a property that they inherit from graph filters.
We propose a novel constrained learning approach by imposing a constraint on the stability condition of the GNN within a perturbation of choice.
arXiv Detail & Related papers (2021-10-07T15:54:42Z)
- Space-Time Graph Neural Networks [104.55175325870195]
We introduce the space-time graph neural network (ST-GNN) to jointly process the underlying space-time topology of time-varying network data.
Our analysis shows that small variations in the network topology and time evolution of a system do not significantly affect the performance of ST-GNNs.
arXiv Detail & Related papers (2021-10-06T16:08:44Z)
- Stability of Graph Convolutional Neural Networks to Stochastic Perturbations [122.12962842842349]
Graph convolutional neural networks (GCNNs) are nonlinear processing tools to learn representations from network data.
Current analysis considers deterministic perturbations but fails to provide relevant insights when topological changes are random.
This paper investigates the stability of GCNNs to stochastic graph perturbations induced by link losses.
arXiv Detail & Related papers (2021-06-19T16:25:28Z)
- On the Stability of Graph Convolutional Neural Networks under Edge Rewiring [22.58110328955473]
Graph neural networks are experiencing a surge of popularity within the machine learning community.
Despite this, their stability, i.e., their robustness to small perturbations in the input, is not yet well understood.
We develop an interpretable upper bound elucidating that graph neural networks are stable to rewiring between high degree nodes.
arXiv Detail & Related papers (2020-10-26T17:37:58Z)
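The last entry above claims that graph neural networks are stable to rewiring between high-degree nodes. The toy script below is a hypothetical check of that intuition, not code from that paper: it flips one edge between the two highest-degree nodes and one between the two lowest-degree nodes of a Barabási-Albert graph and compares the resulting change in the output of a degree-normalized one-hop graph filter. Flipping an edge is used here as a simplified proxy for rewiring, and the networkx generator and normalized-adjacency filter are illustrative choices.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.barabasi_albert_graph(200, 3, seed=0)  # heavy-tailed degree distribution
A = nx.to_numpy_array(G)
x = rng.normal(size=A.shape[0])

def filter_output(A, x):
    """One-hop filter y = D^{-1/2} A D^{-1/2} x on the normalized adjacency."""
    d = A.sum(axis=1)
    d_inv_sqrt = d ** -0.5  # Barabasi-Albert graphs have no isolated nodes
    S = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    return S @ x

def toggle_edge(A, i, j):
    """Return a copy of A with the edge (i, j) flipped on or off."""
    A2 = A.copy()
    A2[i, j] = A2[j, i] = 1.0 - A2[i, j]
    return A2

deg_order = np.argsort(A.sum(axis=1))
low, high = deg_order[:2], deg_order[-2:]

y = filter_output(A, x)
change_high = np.linalg.norm(y - filter_output(toggle_edge(A, *high), x))
change_low = np.linalg.norm(y - filter_output(toggle_edge(A, *low), x))
print(f"output change, edge flip between high-degree nodes: {change_high:.4f}")
print(f"output change, edge flip between low-degree nodes:  {change_low:.4f}")
```

Because the degree normalization assigns a small weight to an edge between hubs, the high-degree perturbation typically changes the filter output less than the low-degree one, which is in line with the stability claim quoted above.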
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.