When Do We Need Graph Neural Networks for Node Classification?
- URL: http://arxiv.org/abs/2210.16979v2
- Date: Fri, 3 Nov 2023 22:32:12 GMT
- Title: When Do We Need Graph Neural Networks for Node Classification?
- Authors: Sitao Luan, Chenqing Hua, Qincheng Lu, Jiaqi Zhu, Xiao-Wen Chang,
Doina Precup
- Abstract summary: Graph Neural Networks (GNNs) extend basic Neural Networks (NNs) by additionally making use of graph structure.
In some cases, GNNs have little performance gain or even underperform graph-agnostic NNs.
- Score: 38.68793097833027
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) extend basic Neural Networks (NNs) by
additionally making use of graph structure based on the relational inductive
bias (edge bias), rather than treating the nodes as collections of independent
and identically distributed (i.i.d.) samples. Though GNNs are believed to
outperform basic NNs in real-world tasks, it is found that in some cases, GNNs
have little performance gain or even underperform graph-agnostic NNs. To
identify these cases, based on graph signal processing and statistical
hypothesis testing, we propose two measures which analyze the cases in which
the edge bias in features and labels does not provide advantages. Based on the
measures, a threshold value can be given to predict the potential performance
advantages of graph-aware models over graph-agnostic models.
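The abstract's two measures are not spelled out in this summary, but the idea of quantifying edge bias can be illustrated with standard stand-ins: edge homophily of labels and a Dirichlet-energy-style feature smoothness, with a hypothetical threshold rule. The functions, the toy graph, and the decision threshold below are all illustrative assumptions, not the paper's actual measures.

```python
import numpy as np

def edge_homophily(edges, labels):
    """Fraction of edges joining same-label nodes (a common proxy for label edge bias)."""
    same = sum(int(labels[u] == labels[v]) for u, v in edges)
    return same / len(edges)

def feature_smoothness(edges, X):
    """Mean squared feature difference across edges (Dirichlet-energy-style measure)."""
    diffs = [np.sum((X[u] - X[v]) ** 2) for u, v in edges]
    return float(np.mean(diffs))

# Toy graph: 4 nodes, binary labels, 2-d features.
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
labels = np.array([0, 0, 1, 1])
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])

h = edge_homophily(edges, labels)   # 0.5: half the edges are intra-class
s = feature_smoothness(edges, X)

# Hypothetical decision rule: prefer a graph-aware model only when label
# homophily clears some threshold (here 0.5, chosen for illustration).
use_gnn = h > 0.5
```

In this toy graph, half of the edges cross class boundaries, so the rule predicts no advantage for the graph-aware model over a graph-agnostic NN.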
Related papers
- Revisiting Neighborhood Aggregation in Graph Neural Networks for Node Classification using Statistical Signal Processing [4.184419714263417]
We reevaluate the concept of neighborhood aggregation, a fundamental component of graph neural networks (GNNs).
Our analysis reveals conceptual flaws within certain benchmark GNN models when operating under the assumption of edge-independent node labels.
arXiv Detail & Related papers (2024-07-21T22:37:24Z) - GNNEvaluator: Evaluating GNN Performance On Unseen Graphs Without Labels [81.93520935479984]
We study a new problem, GNN model evaluation, that aims to assess the performance of a specific GNN model trained on labeled and observed graphs.
We propose a two-stage GNN model evaluation framework, including (1) DiscGraph set construction and (2) GNNEvaluator training and inference.
Under the effective training supervision from the DiscGraph set, GNNEvaluator learns to precisely estimate node classification accuracy of the to-be-evaluated GNN model.
arXiv Detail & Related papers (2023-10-23T05:51:59Z) - Understanding Non-linearity in Graph Neural Networks from the Bayesian-Inference Perspective [33.01636846541052]
Graph neural networks (GNNs) have shown superiority in many prediction tasks over graphs.
We investigate the functions of non-linearity in GNNs for node classification tasks.
arXiv Detail & Related papers (2022-07-22T19:36:12Z) - Graph Neural Networks with Parallel Neighborhood Aggregations for Graph Classification [14.112444998191698]
We focus on graph classification using a graph neural network (GNN) model that precomputes the node features using a bank of neighborhood aggregation graph operators arranged in parallel.
These GNN models have a natural advantage of reduced training and inference time due to the precomputations.
We demonstrate via numerical experiments that the developed model achieves state-of-the-art performance on many diverse real-world datasets.
arXiv Detail & Related papers (2021-11-22T19:19:40Z) - A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
arXiv Detail & Related papers (2020-10-05T04:57:18Z) - The Surprising Power of Graph Neural Networks with Random Node Initialization [54.4101931234922]
Graph neural networks (GNNs) are effective models for representation learning on relational data.
Standard GNNs are limited in their expressive power, as they cannot distinguish graphs beyond the capability of the Weisfeiler-Leman graph isomorphism test.
In this work, we analyze the expressive power of GNNs with random node initialization (RNI).
We prove that these models are universal, a first such result for GNNs not relying on computationally demanding higher-order properties.
arXiv Detail & Related papers (2020-10-02T19:53:05Z) - Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, recently proposed by mimicking higher-order tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of structure-related features for graph representation learning.
arXiv Detail & Related papers (2020-08-31T23:15:40Z) - Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z) - Efficient Probabilistic Logic Reasoning with Graph Neural Networks [63.099999467118245]
Markov Logic Networks (MLNs) can be used to address many knowledge graph problems.
Inference in MLNs is computationally intensive, making industrial-scale application of MLNs very difficult.
We propose a graph neural network (GNN) variant, named ExpressGNN, which strikes a nice balance between the representation power and the simplicity of the model.
arXiv Detail & Related papers (2020-01-29T23:34:36Z)
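The denoising view from "A Unified View on Graph Neural Networks as Graph Signal Denoising" above can be sketched on a toy graph: the objective min_F ||F - X||^2 + c * tr(F^T L F) trades fidelity to the input features against smoothness over edges, and a single gradient step from F = X produces a neighborhood-smoothing update of the kind GNN aggregation performs. The graph, signal, and constant c below are illustrative, not taken from the paper.

```python
import numpy as np

# Toy 4-node path graph.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
L = D - A                                 # unnormalized graph Laplacian
X = np.array([[1.0], [0.8], [0.2], [0.0]])  # noisy 1-d node signal

# Denoising objective: min_F ||F - X||^2 + c * tr(F^T L F).
# Its closed-form solution is F* = (I + c L)^{-1} X.
c = 0.5
F_closed = np.linalg.solve(np.eye(4) + c * L, X)

# One gradient step from F = X with step size 1/2 gives F = X - c L X,
# i.e. each node moves toward the average of its neighbors: a
# neighborhood-aggregation (smoothing) update acting as a denoiser.
F_onestep = X - c * (L @ X)
```

Here `F_onestep` evaluates to [0.9, 0.6, 0.4, 0.1]: the endpoints are pulled inward and the signal is smoother across edges, which is the sense in which aggregation solves a graph denoising problem.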
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.