Convergence of Invariant Graph Networks
- URL: http://arxiv.org/abs/2201.10129v1
- Date: Tue, 25 Jan 2022 07:02:58 GMT
- Title: Convergence of Invariant Graph Networks
- Authors: Chen Cai, Yusu Wang
- Abstract summary: In this paper, we investigate the convergence of one powerful GNN, Invariant Graph Network (IGN), over graphs sampled from graphons.
We show that IGN-small still contains a function class rich enough to approximate spectral GNNs arbitrarily well.
- Score: 13.008323851750442
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although theoretical properties such as the expressive power and over-smoothing
of graph neural networks (GNNs) have been extensively studied recently, their
convergence properties are a relatively new direction. In this paper, we
investigate the convergence of one powerful GNN, Invariant Graph Network (IGN),
over graphs sampled from graphons.
We first prove the stability of linear layers for general $k$-IGN (of order
$k$) based on a novel interpretation of linear equivariant layers. Building
upon this result, we prove the convergence of $k$-IGN under the model of
\citet{ruiz2020graphon}, where we have access to the edge weights but the
convergence error is measured for graphon inputs.
Under the more natural (and more challenging) setting of
\citet{keriven2020convergence}, where one can only access the 0-1 adjacency
matrix sampled according to the edge probabilities, we first show a negative
result: the convergence of any IGN is not possible. We then obtain the
convergence of a subset of IGNs, denoted IGN-small, after edge probability
estimation. We show that IGN-small still contains a function class rich enough
to approximate spectral GNNs arbitrarily well. Lastly, we perform experiments
on various graphon models to verify our statements.
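To make the two access models concrete, here is a minimal Python sketch (numpy only; the particular graphon `W` and the sample size are illustrative choices of ours, not taken from the paper) of sampling a graph from a graphon under both models:

```python
import numpy as np

def sample_from_graphon(W, n, rng=None):
    """Draw n latent points u_i ~ U[0,1] and return both graph models.

    A_weight: entries W(u_i, u_j), the edge-weight model of Ruiz et al.
    A_01:     entries ~ Bernoulli(W(u_i, u_j)), the 0-1 sampling model
              of Keriven et al.
    """
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=n)                       # latent node positions
    A_weight = W(u[:, None], u[None, :])          # deterministic edge weights
    np.fill_diagonal(A_weight, 0.0)               # no self-loops
    upper = np.triu(rng.uniform(size=(n, n)) < A_weight, 1)
    A_01 = (upper | upper.T).astype(float)        # symmetric 0-1 adjacency
    return A_weight, A_01

# Illustrative smooth graphon (our choice, not from the paper).
W = lambda x, y: 0.8 * np.exp(-3.0 * (x - y) ** 2)
A_weight, A_01 = sample_from_graphon(W, n=500, rng=0)
```

In the 0-1 setting, the abstract's IGN-small convergence is stated after an edge probability estimation step, i.e., the Bernoulli adjacency is first smoothed into an estimate of the underlying probabilities before being fed to the network.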
Related papers
- Higher-Order Graphon Neural Networks: Approximation and Cut Distance [30.258169775217926]
We extend the $k$-WL test for graphons to the graphon-signal space and introduce signal-weighted homomorphism densities as a key tool.
As an exemplary focus, we generalize Invariant Graph Networks (IGNs) to graphons.
We show that the resulting Invariant Graphon Networks (IWNs) of order $k$ are at least as powerful as the $k$-WL test, and we establish universal approximation results for graphon-signals in $L^p$ distances.
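For context, the signal-weighted densities mentioned above extend the classical homomorphism density of a simple motif graph $F$ in a graphon $W$, a standard quantity from graphon theory (the signal-weighted variant itself is not spelled out in this excerpt, so only the classical form is shown):

```latex
t(F, W) \;=\; \int_{[0,1]^{|V(F)|}} \prod_{\{i,j\} \in E(F)} W(x_i, x_j) \prod_{v \in V(F)} \mathrm{d}x_v .
```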
arXiv Detail & Related papers (2025-03-18T15:14:49Z)
- On the Expressive Power of Subgraph Graph Neural Networks for Graphs with Bounded Cycles [17.29046077604317]
This work investigates $k$-hop subgraph GNNs that aggregate information from neighbors with distances up to $k$.
We prove that the $k$-hop subgraph GNNs can approximate any permutation-invariant/equivariant continuous function over graphs without cycles of length greater than $2k+1$ within any error tolerance.
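To fix ideas, here is a hedged sketch of the basic primitive behind such models: extracting the $k$-hop ego subgraph of a node by breadth-first search (the function name and dict-based adjacency layout are ours, purely illustrative):

```python
from collections import deque

def k_hop_subgraph(adj, root, k):
    """Return nodes within distance k of `root` plus the induced edges.

    `adj` maps an integer node id to an iterable of neighbor ids.
    """
    dist = {root: 0}
    queue = deque([root])
    while queue:
        v = queue.popleft()
        if dist[v] == k:           # do not expand past distance k
            continue
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    nodes = set(dist)
    edges = [(u, v) for u in nodes for v in adj[u] if v in nodes and u < v]
    return nodes, edges

# Example on a 5-cycle: the 1-hop subgraph of node 0 contains nodes
# {0, 1, 4} and the two edges incident to node 0.
cycle = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(k_hop_subgraph(cycle, root=0, k=1))
```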
arXiv Detail & Related papers (2025-02-06T01:25:22Z)
- Graph neural networks and non-commuting operators [4.912318087940015]
We develop a limit theory of graphon-tuple neural networks and use it to prove a universal transferability theorem.
Our theoretical results extend well-known transferability theorems for GNNs to the case of several simultaneous graphs.
We derive a training procedure that provably enforces the stability of the resulting model.
arXiv Detail & Related papers (2024-11-06T21:17:14Z)
- Generalization of Geometric Graph Neural Networks [84.01980526069075]
We study the generalization capabilities of geometric graph neural networks (GNNs).
We prove a generalization gap between the optimal empirical risk and the optimal statistical risk of this GNN.
The most important observation is that the generalization capability can be realized with one large graph, instead of being limited by the graph size as in previous results.
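Schematically, in our own notation rather than the paper's, a gap between the optimal empirical and statistical risks takes the form

```latex
\mathrm{GA} \;=\; \Big|\, \min_{h \in \mathcal{H}} \hat{R}_N(h) \;-\; \min_{h \in \mathcal{H}} R(h) \,\Big|,
\qquad
R(h) = \mathbb{E}\big[\ell(h(x), y)\big], \quad
\hat{R}_N(h) = \frac{1}{N} \sum_{i=1}^{N} \ell\big(h(x_i), y_i\big),
```

where $N$ is the number of sampled points; the paper's precise definition and bound may differ.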
arXiv Detail & Related papers (2024-09-08T18:55:57Z)
- The Heterophilic Snowflake Hypothesis: Training and Empowering GNNs for Heterophilic Graphs [59.03660013787925]
We introduce the Heterophily Snowflake Hypothesis and provide an effective solution to guide and facilitate research on heterophilic graphs.
Our observations show that our framework acts as a versatile operator for diverse tasks.
It can be integrated into various GNN frameworks, boosting performance in depth and offering an explainable approach to choosing the optimal network depth.
arXiv Detail & Related papers (2024-06-18T12:16:00Z)
- Almost Surely Asymptotically Constant Graph Neural Networks [7.339728196535312]
We show that the output converges to a constant function, which upper-bounds what these classifiers can uniformly express.
This strong convergence phenomenon applies to a very wide class of GNNs, including state-of-the-art models.
We empirically validate these findings, observing that the convergence phenomenon appears not only on random graphs but also on some real-world graphs.
arXiv Detail & Related papers (2024-03-06T17:40:26Z)
- Implicit Graph Neural Diffusion Networks: Convergence, Generalization, and Over-Smoothing [7.984586585987328]
Implicit Graph Neural Networks (GNNs) have achieved significant success in addressing graph learning problems.
We introduce a geometric framework for designing implicit graph diffusion layers based on a parameterized graph Laplacian operator.
We show how implicit GNN layers can be viewed as the fixed-point equation of a Dirichlet energy minimization problem.
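As a schematic rendering (our notation; the paper's parameterized Laplacian and regularization may differ), an implicit diffusion layer of this kind can be viewed as minimizing a regularized Dirichlet energy over node embeddings $Z$, with graph Laplacian $L$ and input features $X$:

```latex
\mathcal{E}(Z) \;=\; \operatorname{tr}\!\big(Z^{\top} L Z\big) \;+\; \lambda \,\big\| Z - f_\theta(X) \big\|_F^2 ,
\qquad
\nabla \mathcal{E}(Z^\star) = 0 \;\Longleftrightarrow\; Z^\star = \big(I + \lambda^{-1} L\big)^{-1} f_\theta(X),
```

so the layer output is the fixed point $Z^\star$ of a linear diffusion equation, matching the fixed-point view described above.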
arXiv Detail & Related papers (2023-08-07T05:22:33Z)
- Geometric Graph Filters and Neural Networks: Limit Properties and Discriminability Trade-offs [122.06927400759021]
We study the relationship between a graph neural network (GNN) and a manifold neural network (MNN) when the graph is constructed from a set of points sampled from the manifold.
We prove non-asymptotic error bounds showing that convolutional filters and neural networks on these graphs converge to convolutional filters and neural networks on the continuous manifold.
arXiv Detail & Related papers (2023-05-29T08:27:17Z)
- Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph [57.2953563124339]
We propose a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN.
We conduct extensive experiments on four widely used datasets from the Heterogeneous Graph Benchmark (HGB) and the Open Graph Benchmark (OGB).
arXiv Detail & Related papers (2023-05-18T07:27:18Z)
- Zero-One Laws of Graph Neural Networks [7.783835522945603]
Graph neural networks (GNNs) are the de facto standard deep learning architectures for machine learning on graphs.
We offer a novel theoretical perspective on the representation and extrapolation capacity of GNNs.
We show that when we draw graphs of increasing size from the Erdős–Rényi model, the probability that such graphs are mapped to a particular output tends to either zero or one.
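A quick hedged simulation of this phenomenon (our construction, not the paper's experiment): with a mean-aggregation readout over $G(n, p)$, the readout concentrates as $n$ grows, so any fixed-threshold classifier on top of it maps all sufficiently large graphs to the same output.

```python
import numpy as np

def mean_degree_readout(n, p, rng):
    """One GNN-like readout on G(n, p): mean over nodes of normalized degree."""
    A = (rng.uniform(size=(n, n)) < p).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                              # symmetric 0-1 adjacency
    return A.sum(axis=1).mean() / (n - 1)    # concentrates around p

rng = np.random.default_rng(0)
for n in (50, 200, 800, 3200):
    vals = [mean_degree_readout(n, 0.3, rng) for _ in range(20)]
    print(n, round(float(np.mean(vals)), 4), round(float(np.std(vals)), 5))
# The spread shrinks with n, so thresholding the readout at, say, 0.5
# yields the same label for all large graphs: a zero-one behavior.
```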
arXiv Detail & Related papers (2023-01-30T17:02:23Z)
- Stable and Transferable Hyper-Graph Neural Networks [95.07035704188984]
We introduce an architecture for processing signals supported on hypergraphs via graph neural networks (GNNs).
We provide a framework for bounding the stability and transferability error of GNNs across arbitrary graphs via spectral similarity.
arXiv Detail & Related papers (2022-11-11T23:44:20Z)
- The Exact Class of Graph Functions Generated by Graph Neural Networks [43.25172578943894]
Given a graph function, is there a Graph Neural Network (GNN) whose output is identical to that function?
In this paper, we fully answer this question and characterize the class of graph problems that can be represented by GNNs.
We show that this condition can be efficiently verified by checking quadratically many constraints.
arXiv Detail & Related papers (2022-02-17T18:54:27Z)
- Transferability Properties of Graph Neural Networks [125.71771240180654]
Graph neural networks (GNNs) are provably successful at learning representations from data supported on moderate-scale graphs.
We study the problem of training GNNs on graphs of moderate size and transferring them to large-scale graphs.
Our results show that (i) the transference error decreases with the graph size, and (ii) that graph filters have a transferability-discriminability tradeoff that in GNNs is alleviated by the scattering behavior of the nonlinearity.
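Rendered schematically in our own notation (the paper's actual bounds, norms, and constants differ), a transferability statement of this type controls the output discrepancy of a fixed GNN $\Phi(\cdot\,; \mathcal{H})$ across two graphs of sizes $n$ and $m$ drawn from the same underlying graphon $W$:

```latex
\big\| \Phi(G_n; \mathcal{H}) - \Phi(G_m; \mathcal{H}) \big\|
\;\le\; C_{\mathcal{H}, W} \left( \frac{1}{n} + \frac{1}{m} \right)^{\alpha}
\quad \text{for some } \alpha > 0,
```

which vanishes as both graphs grow, consistent with item (i) above.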
arXiv Detail & Related papers (2021-12-09T00:08:09Z)