On the Trade-Off between Stability and Representational Capacity in
Graph Neural Networks
- URL: http://arxiv.org/abs/2312.02372v1
- Date: Mon, 4 Dec 2023 22:07:17 GMT
- Title: On the Trade-Off between Stability and Representational Capacity in
Graph Neural Networks
- Authors: Zhan Gao, Amanda Prorok, Elvin Isufi
- Abstract summary: We study the stability of EdgeNet: a general GNN framework that unifies more than twenty solutions.
By studying the effect of different EdgeNet categories on the stability, we show that GNNs with fewer degrees of freedom in their parameter space, linked to a lower representational capacity, are more stable.
- Score: 22.751509906413943
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Analyzing the stability of graph neural networks (GNNs) under topological
perturbations is key to understanding their transferability and the role of
each architecture component. However, stability has been investigated only for
particular architectures, raising the question of whether it holds for a broader spectrum
of GNNs or only for a few instances. To answer this question, we study the
stability of EdgeNet: a general GNN framework that unifies more than twenty
solutions including the convolutional and attention-based classes, as well as
graph isomorphism networks and hybrid architectures. We prove that all GNNs
within the EdgeNet framework are stable to topological perturbations. By
studying the effect of different EdgeNet categories on the stability, we show
that GNNs with fewer degrees of freedom in their parameter space, linked to a
lower representational capacity, are more stable. The key factor yielding this
trade-off is the eigenvector misalignment between the EdgeNet parameter
matrices and the graph shift operator. For example, graph convolutional neural
networks that assign a single scalar per signal shift (hence, with a perfect
alignment) are more stable than the more involved node or edge-varying
counterparts. Extensive numerical results corroborate our theoretical findings
and highlight the role of different architecture components in the trade-off.
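As a concrete illustration of the degrees-of-freedom argument, the following minimal numpy sketch (not taken from the paper; the random graph, filter order, and all variable names are illustrative assumptions) compares a graph convolutional filter, which assigns a single scalar per signal shift, with a node-varying filter that assigns one scalar per node per shift. The convolutional filter is a polynomial in the shift operator S, so it commutes with S and shares its eigenvectors (perfect alignment); the node-varying parameter matrices in general do not, which is the misalignment behind the stability trade-off.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative undirected graph (an Erdos-Renyi sample, chosen only for the demo).
n, K = 8, 3                                   # number of nodes, filter order
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1)
A = A + A.T                                   # symmetric adjacency, no self-loops
S = A                                         # graph shift operator
x = rng.standard_normal(n)                    # graph signal, one value per node

# Graph convolutional filter: a single scalar per signal shift -> K + 1 parameters.
h = rng.standard_normal(K + 1)
H_conv = sum(h[k] * np.linalg.matrix_power(S, k) for k in range(K + 1))

# Node-varying filter: one scalar per node per shift -> n * (K + 1) parameters,
# i.e. more degrees of freedom and more representational capacity.
Phi = rng.standard_normal((K + 1, n))
H_node = sum(np.diag(Phi[k]) @ np.linalg.matrix_power(S, k) for k in range(K + 1))

print("parameters (convolutional):", K + 1)
print("parameters (node-varying) :", n * (K + 1))

# Alignment with the shift operator: a polynomial in S commutes with S and hence
# shares its eigenvectors (perfect alignment); the node-varying filter in general
# does not, which is the misalignment driving the stability trade-off.
print("||H_conv S - S H_conv|| =", np.linalg.norm(H_conv @ S - S @ H_conv))   # ~ 0
print("||H_node S - S H_node|| =", np.linalg.norm(H_node @ S - S @ H_node))   # > 0 in general

y_conv, y_node = H_conv @ x, H_node @ x       # filter outputs (before any nonlinearity)
```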
Related papers
- Stability of Aggregation Graph Neural Networks [153.70485149740608]
We study the stability properties of aggregation graph neural networks (Agg-GNNs) considering perturbations of the underlying graph.
We prove that the stability bounds are defined by the properties of the filters in the first layer of the CNN that acts on each node.
We also conclude that in Agg-GNNs the selectivity of the mapping operators is tied to the properties of the filters only in the first layer of the CNN stage.
arXiv Detail & Related papers (2022-07-08T03:54:52Z)
- EvenNet: Ignoring Odd-Hop Neighbors Improves Robustness of Graph Neural Networks [51.42338058718487]
Graph Neural Networks (GNNs) have received extensive research attention for their promising performance in graph machine learning.
Existing approaches, such as GCN and GPRGNN, are not robust in the face of homophily changes on test graphs.
We propose EvenNet, a spectral GNN corresponding to an even-polynomial graph filter.
arXiv Detail & Related papers (2022-05-27T10:48:14Z)
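For intuition on the even-polynomial graph filter mentioned above, here is a minimal numpy sketch in the spirit of EvenNet (the path graph, the coefficients gamma, and the function name are illustrative assumptions, not the paper's exact parameterization): keeping only even powers of the graph shift means information propagates over even-hop neighborhoods only, i.e., odd-hop neighbors are ignored.

```python
import numpy as np

def even_polynomial_filter(S, x, gamma):
    """Apply y = sum_k gamma[k] * S^(2k) @ x: a graph filter with even-order terms only."""
    y = np.zeros_like(x)
    P = np.eye(S.shape[0])        # S^0
    S2 = S @ S                    # stepping two hops at a time keeps only even powers
    for g in gamma:
        y = y + g * (P @ x)
        P = P @ S2
    return y

# Toy usage on a path graph: an impulse at node 0 only reaches even-hop neighbors.
n = 6
A = np.diag(np.ones(n - 1), 1)
A = A + A.T                       # path-graph adjacency
x = np.zeros(n)
x[0] = 1.0                        # impulse at node 0
print(even_polynomial_filter(A, x, gamma=[0.5, 0.3, 0.2]))   # mass on nodes 0, 2, 4 only
```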
- Stability of Neural Networks on Manifolds to Relative Perturbations [118.84154142918214]
Graph Neural Networks (GNNs) show impressive performance in many practical scenarios.
In practice, GNNs scale well to large graphs, which is at odds with existing stability bounds that grow with the number of nodes.
arXiv Detail & Related papers (2021-10-10T04:37:19Z)
- Training Stable Graph Neural Networks Through Constrained Learning [116.03137405192356]
Graph Neural Networks (GNNs) rely on graph convolutions to learn features from network data.
GNNs are stable to different types of perturbations of the underlying graph, a property that they inherit from graph filters.
We propose a novel constrained learning approach by imposing a constraint on the stability condition of the GNN within a perturbation of choice.
arXiv Detail & Related papers (2021-10-07T15:54:42Z)
- Stability of Graph Convolutional Neural Networks to Stochastic Perturbations [122.12962842842349]
Graph convolutional neural networks (GCNNs) are nonlinear processing tools to learn representations from network data.
Current analysis considers deterministic perturbations but fails to provide relevant insights when topological changes are random.
This paper investigates the stability of GCNNs to stochastic graph perturbations induced by link losses.
arXiv Detail & Related papers (2021-06-19T16:25:28Z)
- EdgeNets: Edge Varying Graph Neural Networks [179.99395949679547]
This paper puts forth a general framework that unifies state-of-the-art graph neural networks (GNNs) through the concept of EdgeNet.
An EdgeNet is a GNN architecture that allows different nodes to use different parameters to weigh the information of different neighbors.
This is a general, linear, and local operation that a node can perform, and it encompasses under one formulation all existing graph convolutional neural networks (GCNNs) as well as graph attention networks (GATs).
arXiv Detail & Related papers (2020-01-21T15:51:17Z)
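To make the EdgeNet operation described above concrete, the following numpy sketch is one plausible reading of that description (it is not the authors' code; the graph, the order K, and all names are illustrative assumptions): each shift k carries its own parameter matrix Theta[k] supported on the graph, so different nodes use different weights for different neighbors.

```python
import numpy as np

def edgenet_layer(x, Theta, sigma=np.tanh):
    """One EdgeNet-style layer: z_k = Theta[k] @ z_{k-1}, y = sigma(sum_k z_k).
    Each Theta[k] shares the sparsity pattern of the graph (plus self-loops), so the
    operation stays local, yet every node weighs each neighbor with its own parameter."""
    z = x
    y = np.zeros_like(x)
    for Theta_k in Theta:
        z = Theta_k @ z            # neighbor information weighed by edge-specific parameters
        y = y + z
    return sigma(y)

# Illustrative usage on a small random graph.
rng = np.random.default_rng(1)
n, K = 5, 2
A = (rng.random((n, n)) < 0.5).astype(float)
A = np.triu(A, 1)
A = A + A.T                        # symmetric adjacency
mask = A + np.eye(n)               # allowed support: existing edges plus self-loops
Theta = [rng.standard_normal((n, n)) * mask for _ in range(K + 1)]
x = rng.standard_normal(n)
print(edgenet_layer(x, Theta))

# Choosing every Theta[k] as a scalar multiple of the shift operator collapses the
# layer to a polynomial in the shift, i.e. an ordinary graph convolutional filter;
# letting Theta[k] depend on node features yields attention-like weights (GAT).
```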