Neural Tangent Kernels Motivate Graph Neural Networks with
Cross-Covariance Graphs
- URL: http://arxiv.org/abs/2310.10791v1
- Date: Mon, 16 Oct 2023 19:54:21 GMT
- Title: Neural Tangent Kernels Motivate Graph Neural Networks with
Cross-Covariance Graphs
- Authors: Shervin Khalafi, Saurabh Sihag, Alejandro Ribeiro
- Abstract summary: We investigate NTKs and alignment in the context of graph neural networks (GNNs).
Our results establish theoretical guarantees on the optimality of the alignment for a two-layer GNN.
These guarantees are characterized by the graph shift operator being a function of the cross-covariance between the input and output data.
- Score: 94.44374472696272
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural tangent kernels (NTKs) provide a theoretical regime to analyze the
learning and generalization behavior of over-parametrized neural networks. For
a supervised learning task, the association between the eigenvectors of the NTK
and the given data (a concept referred to as alignment in this paper) can
govern the rate of convergence of gradient descent, as well as generalization
to unseen data. Building upon this concept, we investigate NTKs and alignment
in the context of graph neural networks (GNNs), where our analysis reveals that
optimizing alignment translates to optimizing the graph representation or the
graph shift operator in a GNN. Our results further establish theoretical
guarantees on the optimality of the alignment for a two-layer GNN, and these
guarantees are characterized by the graph shift operator being a function of
the cross-covariance between the input and output data. The theoretical
insights drawn from the analysis of NTKs are validated by our experiments
focused on a multi-variate time series prediction task for a publicly available
dataset. Specifically, they demonstrate that GNNs with cross-covariance as the
graph shift operator indeed outperform those that operate on the covariance
matrix from only the input data.
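As a purely illustrative reading of the abstract, the sketch below builds a graph shift operator from the sample cross-covariance between input and output data and passes a signal through a toy two-layer graph neural network. The symmetrization, the normalization, the filter form, and the next-step prediction setup are all assumptions made for this sketch, not the authors' implementation.

```python
import numpy as np

def cross_covariance_gso(X, Y):
    """Sample cross-covariance between inputs X (n x d) and outputs Y (n x d),
    used here as a graph shift operator (GSO). The symmetrization and spectral
    normalization are assumptions for illustration only."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    C_xy = Xc.T @ Yc / (X.shape[0] - 1)   # d x d sample cross-covariance
    S = 0.5 * (C_xy + C_xy.T)             # symmetrize so it behaves like an undirected graph
    return S / np.linalg.norm(S, 2)       # scale by the spectral norm for stability

def two_layer_gnn(S, x, W1, W2):
    """A toy two-layer graph perceptron: x -> tanh(S x W1) -> S h W2."""
    h = np.tanh(S @ x @ W1)               # layer 1: shift over the graph, mix features, nonlinearity
    return S @ h @ W2                     # layer 2: linear graph filter readout

# Hypothetical multivariate time-series setup: predict the next step of d signals.
rng = np.random.default_rng(0)
n, d = 200, 8
X = rng.standard_normal((n, d))           # past observations (samples x variables)
Y = np.roll(X, -1, axis=0)                # next-step targets (toy stand-in for real data)

S = cross_covariance_gso(X, Y)            # graph built from input-output statistics
W1, W2 = rng.standard_normal((1, 4)), rng.standard_normal((4, 1))
pred = two_layer_gnn(S, X[:1].T, W1, W2)  # one sample treated as a graph signal over d nodes
print(pred.shape)                         # (8, 1): one prediction per variable/node
```

The only point of the sketch is the contrast suggested by the abstract: the graph shift operator here is driven by input-output cross-covariance rather than by the covariance of the inputs alone.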
Related papers
- Graph neural networks and non-commuting operators [4.912318087940015]
We develop a limit theory of graphon-tuple neural networks and use it to prove a universal transferability theorem.
Our theoretical results extend well-known transferability theorems for GNNs to the case of several simultaneous graphs.
We derive a training procedure that provably enforces the stability of the resulting model.
arXiv Detail & Related papers (2024-11-06T21:17:14Z) - Graph Neural Network-Inspired Kernels for Gaussian Processes in
Semi-Supervised Learning [4.644263115284322]
Graph neural networks (GNNs) emerged recently as a promising class of models for graph-structured data in semi-supervised learning.
We introduce this inductive bias into GPs to improve their predictive performance for graph-structured data.
We show that these graph-based kernels lead to competitive classification and regression performance, as well as advantages in computation time, compared with the respective GNNs.
arXiv Detail & Related papers (2023-02-12T01:07:56Z) - Stable and Transferable Hyper-Graph Neural Networks [95.07035704188984]
We introduce an architecture for processing signals supported on hypergraphs via graph neural networks (GNNs).
We provide a framework for bounding the stability and transferability error of GNNs across arbitrary graphs via spectral similarity.
arXiv Detail & Related papers (2022-11-11T23:44:20Z) - Analysis of Convolutions, Non-linearity and Depth in Graph Neural
Networks using Neural Tangent Kernel [8.824340350342512]
Graph Neural Networks (GNNs) are designed to exploit the structural information of the data by aggregating the neighboring nodes.
We theoretically analyze the influence of different aspects of the GNN architecture using the Graph Neural Tangent Kernel in a semi-supervised node classification setting.
We prove that: (i) linear networks capture the class information as well as ReLU networks; (ii) row normalization preserves the underlying class structure better than other convolutions; (iii) performance degrades with network depth due to over-smoothing; (iv) skip connections retain the class information even at infinite depth, thereby eliminating over-smoothing.
arXiv Detail & Related papers (2022-10-18T12:28:37Z) - Relation Embedding based Graph Neural Networks for Handling
Heterogeneous Graph [58.99478502486377]
We propose a simple yet efficient framework that enables homogeneous GNNs to adequately handle heterogeneous graphs.
Specifically, we propose Relation Embedding based Graph Neural Networks (RE-GNNs), which employ only one parameter per relation to embed the importance of edge type relations and self-loop connections.
arXiv Detail & Related papers (2022-09-23T05:24:18Z) - coVariance Neural Networks [119.45320143101381]
Graph neural networks (GNNs) are an effective framework that exploits inter-relationships within graph-structured data for learning.
We propose a GNN architecture, called coVariance neural network (VNN), that operates on sample covariance matrices as graphs (a minimal sketch of this covariance-as-graph idea appears after this list).
We show that VNN performance is indeed more stable than PCA-based statistical approaches.
arXiv Detail & Related papers (2022-05-31T15:04:43Z) - Adaptive Kernel Graph Neural Network [21.863238974404474]
Graph neural networks (GNNs) have demonstrated great success in representation learning for graph-structured data.
In this paper, we propose a novel framework, namely the Adaptive Kernel Graph Neural Network (AKGNN).
AKGNN is the first attempt to learn to adapt to the optimal graph kernel in a unified manner.
Experiments conducted on established benchmark datasets yield promising results that demonstrate the strong performance of the proposed AKGNN.
arXiv Detail & Related papers (2021-12-08T20:23:58Z) - Learning Graph Neural Networks with Approximate Gradient Descent [24.49427608361397]
Two types of graph neural networks (GNNs) are investigated, depending on whether labels are attached to nodes or graphs.
A comprehensive framework for designing and analyzing convergence of GNN training algorithms is developed.
The proposed algorithm guarantees a linear convergence rate to the underlying true parameters of GNNs.
arXiv Detail & Related papers (2020-12-07T02:54:48Z) - Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z) - Embedding Graph Auto-Encoder for Graph Clustering [90.8576971748142]
Graph auto-encoder (GAE) models are based on semi-supervised graph convolution networks (GCNs).
We design a specific GAE-based model for graph clustering to be consistent with the theory, namely the Embedding Graph Auto-Encoder (EGAE).
EGAE consists of one encoder and dual decoders.
arXiv Detail & Related papers (2020-02-20T09:53:28Z)
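Tying back to the coVariance Neural Networks entry above, here is a minimal sketch of the covariance-as-graph idea: the sample covariance of the input data plays the role of the graph shift operator in a simple polynomial (coVariance) filter. The filter coefficients, function names, and data are hypothetical and serve only to illustrate the summarized idea, not the paper's actual implementation.

```python
import numpy as np

def sample_covariance(X):
    """Sample covariance of data X (n samples x d variables), used as the graph."""
    Xc = X - X.mean(axis=0)
    return Xc.T @ Xc / (X.shape[0] - 1)

def covariance_filter(C, x, h):
    """Polynomial graph filter sum_k h[k] * C^k x over the covariance 'graph' C."""
    out = np.zeros_like(x)
    Ck_x = x.copy()
    for hk in h:
        out += hk * Ck_x                  # accumulate h_k * C^k x
        Ck_x = C @ Ck_x                   # next power of C applied to the signal
    return out

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 10))        # hypothetical dataset (samples x variables)
C = sample_covariance(X)                  # covariance matrix in the role of a graph
x = X[0]                                  # one sample as a graph signal over the 10 variables
y = covariance_filter(C, x, h=[0.5, 0.3, 0.2])
print(y.shape)                            # (10,): filtered representation per variable
```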