Test-Time Training for Graph Neural Networks
- URL: http://arxiv.org/abs/2210.08813v1
- Date: Mon, 17 Oct 2022 07:58:07 GMT
- Title: Test-Time Training for Graph Neural Networks
- Authors: Yiqi Wang, Chaozhuo Li, Wei Jin, Rui Li, Jianan Zhao, Jiliang Tang,
Xing Xie
- Abstract summary: We introduce the first test-time training framework for GNNs to enhance the model generalization capacity for the graph classification task.
In particular, we design a novel test-time training strategy with self-supervised learning to adjust the GNN model for each test graph sample.
- Score: 46.479026988929235
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have made tremendous progress in the graph
classification task. However, a performance gap between the training set and
the test set has often been observed. To bridge this gap, in this work we
introduce the first test-time training framework for GNNs to enhance the model
generalization capacity for the graph classification task. In particular, we
design a novel test-time training strategy with self-supervised learning to
adjust the GNN model for each test graph sample. Experiments on the benchmark
datasets have demonstrated the effectiveness of the proposed framework,
especially when there are distribution shifts between training set and test
set. We have also conducted exploratory studies and theoretical analysis to
gain a deeper understanding of the rationale behind the design of the proposed
graph test-time training framework (GT3).
Related papers
- Test-Time Training on Graphs with Large Language Models (LLMs) [68.375487369596]
Test-Time Training (TTT) has been proposed as a promising approach to train Graph Neural Networks (GNNs).
Inspired by the great annotation ability of Large Language Models (LLMs) on Text-Attributed Graphs (TAGs), we propose to enhance the test-time training on graphs with LLMs as annotators.
A two-stage training strategy is designed to tailor the test-time model with the limited and noisy labels.
arXiv Detail & Related papers (2024-04-21T08:20:02Z)
- Online GNN Evaluation Under Test-time Graph Distribution Shifts [92.4376834462224]
A new research problem, online GNN evaluation, aims to provide valuable insights into well-trained GNNs' ability to generalize to real-world unlabeled graphs.
We develop an effective learning behavior discrepancy score, dubbed LeBeD, to estimate the test-time generalization errors of well-trained GNN models.
arXiv Detail & Related papers (2024-03-15T01:28:08Z)
- GOODAT: Towards Test-time Graph Out-of-Distribution Detection [103.40396427724667]
Graph neural networks (GNNs) have found widespread application in modeling graph data across diverse domains.
Recent studies have explored graph OOD detection, often focusing on training a specific model or modifying the data on top of a well-trained GNN.
This paper introduces a data-centric, unsupervised, and plug-and-play solution that operates independently of training data and modifications of GNN architecture.
arXiv Detail & Related papers (2024-01-10T08:37:39Z)
- GNNEvaluator: Evaluating GNN Performance On Unseen Graphs Without Labels [81.93520935479984]
We study a new problem, GNN model evaluation, that aims to assess the performance of a specific GNN model trained on labeled and observed graphs.
We propose a two-stage GNN model evaluation framework, including (1) DiscGraph set construction and (2) GNNEvaluator training and inference.
Under the effective training supervision from the DiscGraph set, GNNEvaluator learns to precisely estimate node classification accuracy of the to-be-evaluated GNN model.
arXiv Detail & Related papers (2023-10-23T05:51:59Z)
- Fast and Effective GNN Training with Linearized Random Spanning Trees [20.73637495151938]
We present a new effective and scalable framework for training GNNs in node classification tasks.
Our approach progressively refines the GNN weights on an extensive sequence of random spanning trees.
The sparse nature of these path graphs substantially lightens the computational burden of GNN training.
arXiv Detail & Related papers (2023-06-07T23:12:42Z)
- GraphTTA: Test Time Adaptation on Graph Neural Networks [10.582212966736645]
We present a novel test-time adaptation strategy named Graph Adversarial Pseudo Group Contrast (GAPGC) for graph neural networks (GNNs).
GAPGC employs a contrastive learning variant as a self-supervised task during TTA, equipped with Adversarial Learnable Augmenter and Group Pseudo-Positive Samples.
We provide theoretical evidence that GAPGC can extract minimal sufficient information for the main task from an information-theoretic perspective.
arXiv Detail & Related papers (2022-08-19T02:24:16Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structured data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.