GNNEvaluator: Evaluating GNN Performance On Unseen Graphs Without Labels
- URL: http://arxiv.org/abs/2310.14586v2
- Date: Thu, 26 Oct 2023 23:08:52 GMT
- Title: GNNEvaluator: Evaluating GNN Performance On Unseen Graphs Without Labels
- Authors: Xin Zheng, Miao Zhang, Chunyang Chen, Soheila Molaei, Chuan Zhou,
Shirui Pan
- Abstract summary: We study a new problem, GNN model evaluation, that aims to assess the performance of a specific GNN model trained on labeled and observed graphs.
We propose a two-stage GNN model evaluation framework, including (1) DiscGraph set construction and (2) GNNEvaluator training and inference.
Under the effective training supervision from the DiscGraph set, GNNEvaluator learns to precisely estimate node classification accuracy of the to-be-evaluated GNN model.
- Score: 81.93520935479984
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Evaluating the performance of graph neural networks (GNNs) is an essential
task for practical GNN model deployment and serving, as deployed GNNs face
significant performance uncertainty when inferring on unseen and unlabeled test
graphs, due to mismatched training-test graph distributions. In this paper, we
study a new problem, GNN model evaluation, that aims to assess the performance
of a specific GNN model trained on labeled and observed graphs, by precisely
estimating its performance (e.g., node classification accuracy) on unseen
graphs without labels. Concretely, we propose a two-stage GNN model evaluation
framework, including (1) DiscGraph set construction and (2) GNNEvaluator
training and inference. The DiscGraph set captures wide-range and diverse graph
data distribution discrepancies through a discrepancy measurement function,
which exploits the outputs of GNNs related to latent node embeddings and node
class predictions. Under the effective training supervision from the DiscGraph
set, GNNEvaluator learns to precisely estimate node classification accuracy of
the to-be-evaluated GNN model and makes an accurate inference for evaluating
GNN model performance. Extensive experiments on real-world unseen and unlabeled
test graphs demonstrate the effectiveness of our proposed method for GNN model
evaluation.
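To make the two-stage framework concrete, the following is a minimal, hypothetical PyTorch sketch. It assumes access to the trained GNN's latent node embeddings and softmax class predictions; the discrepancy feature used here (mean-embedding shift plus confidence statistics) and the small regressor are illustrative stand-ins for the paper's DiscGraph discrepancy measurement function and GNNEvaluator, not the authors' exact design.

```python
# A minimal, hypothetical sketch of the two-stage idea: (1) summarize a trained
# GNN's outputs on a graph into a fixed-length discrepancy feature, and
# (2) train a small regressor ("evaluator") to map that feature to accuracy.
# The feature design below is an illustrative assumption, not the exact
# DiscGraph construction from the paper.
import torch
import torch.nn as nn

def discrepancy_features(embeddings: torch.Tensor, probs: torch.Tensor,
                         ref_embeddings: torch.Tensor) -> torch.Tensor:
    """Summarize GNN outputs on one graph into a fixed-length vector.

    embeddings:     [N, d] latent node embeddings from the trained GNN
    probs:          [N, C] softmax class predictions for the same nodes
    ref_embeddings: [M, d] embeddings of the observed (training) graph,
                    used as the reference distribution
    """
    # Distribution-shift proxy: distance between mean embeddings.
    shift = (embeddings.mean(0) - ref_embeddings.mean(0)).abs()
    # Prediction-confidence statistics: mean max-probability and mean entropy.
    conf = probs.max(dim=1).values.mean().unsqueeze(0)
    entropy = (-probs.clamp_min(1e-9).log() * probs).sum(1).mean().unsqueeze(0)
    return torch.cat([shift, conf, entropy])

class Evaluator(nn.Module):
    """Regressor mapping discrepancy features to an estimated accuracy in [0, 1]."""
    def __init__(self, in_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1), nn.Sigmoid())
    def forward(self, x):
        return self.net(x).squeeze(-1)

# --- toy usage with random stand-ins for GNN outputs ------------------------
d, C = 16, 5
ref = torch.randn(200, d)                       # observed-graph embeddings
# Simulated meta-training set: per-graph features with known accuracies.
feats = torch.stack([discrepancy_features(torch.randn(100, d) + i * 0.1,
                                          torch.softmax(torch.randn(100, C), 1),
                                          ref) for i in range(32)])
accs = torch.rand(32)                           # ground-truth accuracies (toy)

evaluator = Evaluator(feats.size(1))
opt = torch.optim.Adam(evaluator.parameters(), lr=1e-2)
for _ in range(200):                            # fit the evaluator
    opt.zero_grad()
    loss = nn.functional.mse_loss(evaluator(feats), accs)
    loss.backward()
    opt.step()

# Estimate accuracy on an unseen, unlabeled graph from its outputs alone.
unseen = discrepancy_features(torch.randn(150, d) + 0.5,
                              torch.softmax(torch.randn(150, C), 1), ref)
print(f"estimated accuracy: {evaluator(unseen).item():.3f}")
```

In the actual framework, the evaluator's training pairs would come from the constructed DiscGraph set rather than the random tensors used here.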
Related papers
- Online GNN Evaluation Under Test-time Graph Distribution Shifts [92.4376834462224]
A new research problem, online GNN evaluation, aims to provide valuable insights into well-trained GNNs' ability to generalize to real-world unlabeled graphs.
We develop an effective learning behavior discrepancy score, dubbed LeBeD, to estimate the test-time generalization errors of well-trained GNN models.
arXiv Detail & Related papers (2024-03-15T01:28:08Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Task-Agnostic Graph Neural Network Evaluation via Adversarial Collaboration [11.709808788756966]
GraphAC is a principled, task-agnostic, and stable framework for evaluating Graph Neural Network (GNN) research for molecular representation learning.
We introduce a novel objective function, the Competitive Barlow Twins, which allows two GNNs to jointly update themselves through direct competition with each other.
arXiv Detail & Related papers (2023-01-27T03:33:11Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Distribution Preserving Graph Representation Learning [11.340722297341788]
Graph neural networks (GNNs) are effective at modeling graphs to learn distributed representations of nodes and entire graphs.
We propose Distribution Preserving GNN (DP-GNN) - a GNN framework that can improve the generalizability of expressive GNN models.
We evaluate the proposed DP-GNN framework on multiple benchmark datasets for graph classification tasks.
arXiv Detail & Related papers (2022-02-27T19:16:26Z)
- Learning to Drop: Robust Graph Neural Network via Topological Denoising [50.81722989898142]
We propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of Graph Neural Networks (GNNs).
PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks.
We show that PTDNet can improve the performance of GNNs significantly, and the performance gain becomes larger for noisier datasets (a minimal edge-masking sketch of this idea appears after this list).
arXiv Detail & Related papers (2020-11-13T18:53:21Z)
- GPT-GNN: Generative Pre-Training of Graph Neural Networks [93.35945182085948]
Graph neural networks (GNNs) have been demonstrated to be powerful in modeling graph-structured data.
We present the GPT-GNN framework to initialize GNNs by generative pre-training.
We show that GPT-GNN significantly outperforms state-of-the-art GNN models without pre-training by up to 9.1% across various downstream tasks.
arXiv Detail & Related papers (2020-06-27T20:12:33Z)
- A Collective Learning Framework to Boost GNN Expressiveness [25.394456460032625]
We consider the task of inductive node classification using Graph Neural Networks (GNNs) in supervised and semi-supervised settings.
We propose a general collective learning approach to increase the representation power of any existing GNN.
We evaluate performance on five real-world network datasets and demonstrate consistent, significant improvement in node classification accuracy.
arXiv Detail & Related papers (2020-03-26T22:07:28Z)
- Self-Enhanced GNN: Improving Graph Neural Networks Using Model Outputs [20.197085398581397]
Graph neural networks (GNNs) have received much attention recently because of their excellent performance on graph-based tasks.
We propose self-enhanced GNN (SEG), which improves the quality of the input data using the outputs of existing GNN models.
SEG consistently improves the performance of well-known GNN models such as GCN, GAT and SGC across different datasets.
arXiv Detail & Related papers (2020-02-18T12:27:16Z)
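The Self-Enhanced GNN entry above describes improving the input data with the outputs of an existing model; a common instantiation of this idea is confidence-thresholded pseudo-labeling, sketched below under that assumption. The thresholding scheme is illustrative, not the exact SEG procedure.

```python
# Hypothetical sketch of the self-enhancement idea: use a trained model's own
# confident predictions to enlarge the labeled set before retraining.
import torch

def add_pseudo_labels(probs: torch.Tensor, labels: torch.Tensor,
                      train_mask: torch.Tensor, threshold: float = 0.95):
    """Return labels/mask augmented with high-confidence predictions.

    probs:      [N, C] softmax outputs of an already-trained GNN
    labels:     [N] ground-truth labels (valid only where train_mask is True)
    train_mask: [N] bool mask of originally labeled nodes
    """
    conf, preds = probs.max(dim=1)
    # Pseudo-label unlabeled nodes whose top prediction clears the threshold.
    pseudo = (~train_mask) & (conf >= threshold)
    new_labels = labels.clone()
    new_labels[pseudo] = preds[pseudo]
    return new_labels, train_mask | pseudo

# Toy usage: 10 nodes, 3 classes, 4 originally labeled.
probs = torch.softmax(torch.randn(10, 3) * 3, dim=1)
labels = torch.randint(0, 3, (10,))
mask = torch.zeros(10, dtype=torch.bool); mask[:4] = True
labels2, mask2 = add_pseudo_labels(probs, labels, mask)
print(f"labeled nodes: {mask.sum().item()} -> {mask2.sum().item()}")
```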
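For the PTDNet entry above, the following is a minimal sketch of learning a per-edge mask jointly with a sparsity penalty on the soft edge count. The mask parameterization and penalty weight are illustrative assumptions, not the exact PTDNet architecture.

```python
# Hypothetical edge-masking sketch: score each edge from its endpoint features,
# keep edges softly via a sigmoid mask, and penalize the soft number of kept
# edges so that task-irrelevant edges are pruned during training.
import torch
import torch.nn as nn

class EdgeMask(nn.Module):
    """Scores each edge from its endpoint features; sigmoid gives a soft mask."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.scorer = nn.Linear(2 * feat_dim, 1)

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        src, dst = edge_index            # edge_index: [2, E] node-index pairs
        pair = torch.cat([x[src], x[dst]], dim=1)
        return torch.sigmoid(self.scorer(pair)).squeeze(-1)  # [E] in (0, 1)

# Toy graph: 6 nodes with 4-dim features, 8 directed edges.
x = torch.randn(6, 4)
edge_index = torch.randint(0, 6, (2, 8))
mask_net = EdgeMask(feat_dim=4)

mask = mask_net(x, edge_index)
# Sparsity penalty: the soft edge count, added to the (omitted) task loss.
sparsity = mask.sum()
loss = 0.01 * sparsity                  # task loss would be added here
loss.backward()
print(f"soft edge count: {sparsity.item():.2f}")
```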