Graph Neural Networks Use Graphs When They Shouldn't
- URL: http://arxiv.org/abs/2309.04332v2
- Date: Sun, 25 Feb 2024 22:48:34 GMT
- Title: Graph Neural Networks Use Graphs When They Shouldn't
- Authors: Maya Bechler-Speicher, Ido Amos, Ran Gilad-Bachrach, Amir Globerson
- Abstract summary: Graph Neural Networks (GNNs) have emerged as the dominant approach for learning on graph data.
We show that GNNs actually tend to overfit the given graph-structure.
- Score: 29.686091109844746
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Predictions over graphs play a crucial role in various domains, including
social networks and medicine. Graph Neural Networks (GNNs) have emerged as the
dominant approach for learning on graph data. Although a graph-structure is
provided as input to the GNN, in some cases the best solution can be obtained
by ignoring it. While GNNs have the ability to ignore the graph-structure in
by ignoring it. While GNNs have the ability to ignore the graph- structure in
such cases, it is not clear that they will. In this work, we show that GNNs
actually tend to overfit the given graph-structure. Namely, they use it even
when a better solution can be obtained by ignoring it. We analyze the implicit
bias of gradient-descent learning of GNNs and prove that when the ground truth
function does not use the graphs, GNNs are not guaranteed to learn a solution
that ignores the graph, even with infinite data. We examine this phenomenon
with respect to different graph distributions and find that regular graphs are
more robust to this overfitting. We also prove that within the family of
regular graphs, GNNs are guaranteed to extrapolate when learning with gradient
descent. Finally, based on our empirical and theoretical findings, we
demonstrate on real data how regular graphs can be leveraged to reduce graph
overfitting and enhance performance.
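A minimal sketch of the final claim, assuming a toy node-regression task whose target depends only on node features and not on the graph: train the same simple mean-aggregation GNN once on the given (random) graph and once on a regular replacement graph, then compare held-out error. The model, the ring-lattice construction, and all names below are illustrative assumptions, not the authors' architecture or experimental setup.

```python
# Illustrative sketch only: compare a simple GNN trained on the given graph
# vs. on a regular replacement graph, when the target ignores the graph.
import torch
import torch.nn as nn

class MeanAggGNN(nn.Module):
    """One round of mean-neighbor aggregation followed by a linear readout."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.lin_self = nn.Linear(in_dim, hidden_dim)
        self.lin_neigh = nn.Linear(in_dim, hidden_dim)
        self.readout = nn.Linear(hidden_dim, 1)

    def forward(self, x, adj):
        # adj: (n, n) dense adjacency; row-normalize to get mean aggregation
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        neigh = (adj @ x) / deg
        h = torch.relu(self.lin_self(x) + self.lin_neigh(neigh))
        return self.readout(h).squeeze(-1)

def ring_lattice(n, k=2):
    """A 2k-regular substitute graph: node i links to its k nearest ring neighbors per side."""
    adj = torch.zeros(n, n)
    idx = torch.arange(n)
    for offset in range(1, k + 1):
        adj[idx, (idx + offset) % n] = 1.0
        adj[idx, (idx - offset) % n] = 1.0
    return adj

torch.manual_seed(0)
n, d = 100, 8
x_train, x_test = torch.randn(n, d), torch.randn(n, d)
# Ground-truth target ignores the graph entirely: it depends on features only.
target = lambda x: x.sum(dim=1)
# "Given" graph: an arbitrary symmetric Erdos-Renyi-style graph.
rand = (torch.rand(n, n) < 0.2).float()
given_adj = ((rand + rand.t()) > 0).float()
given_adj.fill_diagonal_(0)

for name, adj in [("given graph", given_adj), ("4-regular ring", ring_lattice(n, 2))]:
    model = MeanAggGNN(d, 32)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(300):
        opt.zero_grad()
        loss = ((model(x_train, adj) - target(x_train)) ** 2).mean()
        loss.backward()
        opt.step()
    with torch.no_grad():
        test_mse = ((model(x_test, adj) - target(x_test)) ** 2).mean().item()
    print(f"{name}: train MSE {loss.item():.4f}, held-out MSE {test_mse:.4f}")
```

Evaluating on fresh node features under the same adjacency gives a crude check of how much the learned function leans on the given graph when the target ignores it.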
Related papers
- Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
The generalization ability of existing GNNs degrades when there are distribution shifts between training and testing graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
arXiv Detail & Related papers (2023-12-19T12:25:10Z)
- Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z)
- FoSR: First-order spectral rewiring for addressing oversquashing in GNNs [0.0]
Graph neural networks (GNNs) are able to leverage the structure of graph data by passing messages along the edges of the graph.
We propose a computationally efficient algorithm that prevents oversquashing by systematically adding edges to the graph.
We find experimentally that our algorithm outperforms existing graph rewiring methods in several graph classification tasks.
arXiv Detail & Related papers (2022-10-21T07:58:03Z)
- Graph Anomaly Detection with Graph Neural Networks: Current Status and Challenges [9.076649460696402]
Graph neural networks (GNNs) have been studied extensively and have successfully performed difficult machine learning tasks.
This survey is the first comprehensive review of graph anomaly detection methods based on GNNs.
arXiv Detail & Related papers (2022-09-29T16:47:57Z)
- A Topological characterisation of Weisfeiler-Leman equivalence classes [0.0]
Graph Neural Networks (GNNs) are learning models aimed at processing graphs and signals on graphs.
In this article, we rely on the theory of covering spaces to fully characterize the classes of graphs that GNNs cannot distinguish.
We show that the number of indistinguishable graphs in our dataset grows super-exponentially with the number of nodes.
arXiv Detail & Related papers (2022-06-23T17:28:55Z)
- Graph Neural Networks for Graphs with Heterophily: A Survey [98.45621222357397]
We provide a comprehensive review of graph neural networks (GNNs) for heterophilic graphs.
Specifically, we propose a systematic taxonomy that essentially governs existing heterophilic GNN models.
We discuss the correlation between graph heterophily and various graph research domains, aiming to facilitate the development of more effective GNNs.
arXiv Detail & Related papers (2022-02-14T23:07:47Z)
- Transferability Properties of Graph Neural Networks [125.71771240180654]
Graph neural networks (GNNs) are provably successful at learning representations from data supported on moderate-scale graphs.
We study the problem of training GNNs on graphs of moderate size and transferring them to large-scale graphs.
Our results show that (i) the transference error decreases with the graph size, and (ii) graph filters have a transferability-discriminability tradeoff that in GNNs is alleviated by the scattering behavior of the nonlinearity.
arXiv Detail & Related papers (2021-12-09T00:08:09Z)
- Imbalanced Graph Classification via Graph-of-Graph Neural Networks [16.589373163769853]
Graph Neural Networks (GNNs) have achieved unprecedented success in learning graph representations to identify categorical labels of graphs.
We introduce a novel framework, Graph-of-Graph Neural Networks (G$^2$GNN), which alleviates the graph imbalance issue by deriving extra supervision globally from neighboring graphs and locally from graphs themselves.
Our proposed G$^2$GNN outperforms numerous baselines by roughly 5% in both F1-macro and F1-micro scores.
arXiv Detail & Related papers (2021-12-01T02:25:47Z)
- Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs Bernoulli-sampled from the graphon (a minimal graphon-sampling sketch appears after this list).
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
arXiv Detail & Related papers (2021-06-07T15:05:59Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
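Both graphon-based entries above rely on Bernoulli-sampling finite graphs from a graphon, as referenced in the Increase and Conquer summary. Below is a minimal, self-contained sketch of that sampling step; it is not code from either paper, and the example graphon W(u, v) = u*v, the function names, and the growing sizes are assumptions.

```python
# Illustrative sketch: Bernoulli-sample finite graphs of growing size from a graphon.
import numpy as np

def sample_from_graphon(W, n, rng=None):
    """Sample an n-node undirected graph from graphon W: [0,1]^2 -> [0,1]."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=n)                      # latent node positions
    probs = W(u[:, None], u[None, :])            # pairwise edge probabilities
    trials = rng.uniform(size=(n, n)) < probs    # independent Bernoulli trials
    adj = np.triu(trials, k=1)                   # keep i < j, drop self-loops
    return (adj | adj.T).astype(float)           # symmetrize

# Example graphon (assumed for illustration): W(u, v) = u * v
W = lambda u, v: u * v
for n in (50, 200, 800):                         # a growing graph sequence
    A = sample_from_graphon(W, n, rng=0)
    print(f"n={n}: mean degree = {A.sum() / n:.2f}")
```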
This list is automatically generated from the titles and abstracts of the papers on this site.