Train Your Own GNN Teacher: Graph-Aware Distillation on Textual Graphs
- URL: http://arxiv.org/abs/2304.10668v1
- Date: Thu, 20 Apr 2023 22:34:20 GMT
- Title: Train Your Own GNN Teacher: Graph-Aware Distillation on Textual Graphs
- Authors: Costas Mavromatis, Vassilis N. Ioannidis, Shen Wang, Da Zheng, Soji
Adeshina, Jun Ma, Han Zhao, Christos Faloutsos, George Karypis
- Abstract summary: We develop a Graph-Aware Distillation framework (GRAD) to encode graph structures into an LM for graph-free, fast inference.
Unlike conventional knowledge distillation, GRAD jointly optimizes a GNN teacher and a graph-free student over the graph's nodes via a shared LM.
Experiments on eight node classification benchmarks in both transductive and inductive settings showcase GRAD's superiority over existing distillation approaches for textual graphs.
- Score: 37.48313839125563
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: How can we learn effective node representations on textual graphs? Graph
Neural Networks (GNNs) that use Language Models (LMs) to encode textual
information of graphs achieve state-of-the-art performance in many node
classification tasks. Yet, combining GNNs with LMs has not been widely explored
for practical deployments due to its scalability issues. In this work, we
tackle this challenge by developing a Graph-Aware Distillation framework (GRAD)
to encode graph structures into an LM for graph-free, fast inference. Different
from conventional knowledge distillation, GRAD jointly optimizes a GNN teacher
and a graph-free student over the graph's nodes via a shared LM. This
encourages the graph-free student to exploit graph information encoded by the
GNN teacher while, at the same time, enabling the GNN teacher to better leverage
textual information from unlabeled nodes. As a result, the teacher and the
student models learn from each other to improve their overall performance.
Experiments on eight node classification benchmarks in both transductive and
inductive settings showcase GRAD's superiority over existing distillation
approaches for textual graphs.
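A minimal PyTorch sketch of the joint objective described in the abstract above, under toy assumptions: a small MLP stands in for the shared LM, the teacher is a single mean-aggregation layer over a random graph, and all data, module names, and loss weights are illustrative rather than the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins: in GRAD the encoder is a language model over raw node text;
# here a small MLP over placeholder "text" features keeps the sketch runnable.
N, F_IN, HID, C = 100, 32, 64, 5            # nodes, feature dim, hidden dim, classes
x = torch.randn(N, F_IN)                     # text features per node (placeholder)
adj = (torch.rand(N, N) < 0.05).float()      # random graph (placeholder)
adj = ((adj + adj.t() + torch.eye(N)) > 0).float()
deg_inv = adj.sum(1, keepdim=True).reciprocal()
labels = torch.randint(0, C, (N,))
train_mask = torch.rand(N) < 0.3             # labeled nodes; the rest are unlabeled

shared_lm = nn.Sequential(nn.Linear(F_IN, HID), nn.ReLU())  # shared "LM" encoder
teacher_head = nn.Linear(HID, C)             # GNN teacher: neighbor mean + head
student_head = nn.Linear(HID, C)             # graph-free student head
params = [*shared_lm.parameters(), *teacher_head.parameters(), *student_head.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)

for step in range(200):
    h = shared_lm(x)                         # both models share the LM encoder
    teacher_logits = teacher_head(deg_inv * (adj @ h))  # one aggregation round
    student_logits = student_head(h)         # no graph access at all
    # Supervised losses on labeled nodes for both models.
    loss = F.cross_entropy(teacher_logits[train_mask], labels[train_mask]) \
         + F.cross_entropy(student_logits[train_mask], labels[train_mask])
    # Distillation on unlabeled nodes: the student matches the teacher's soft
    # predictions, so graph knowledge flows back into the shared encoder.
    loss = loss + F.kl_div(
        F.log_softmax(student_logits[~train_mask], dim=-1),
        F.softmax(teacher_logits[~train_mask].detach(), dim=-1),
        reduction="batchmean",
    )
    opt.zero_grad(); loss.backward(); opt.step()

# At inference, only shared_lm + student_head are needed: graph-free and fast.
```

Because the KL term is computed over unlabeled nodes while both heads share one encoder, gradients from the teacher's supervised loss and the student's distillation loss meet in `shared_lm`, which is the sketch's analogue of the mutual improvement the abstract describes.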
Related papers
- GraphAlign: Pretraining One Graph Neural Network on Multiple Graphs via Feature Alignment [30.56443056293688]
Graph self-supervised learning (SSL) holds considerable promise for mining and learning with graph-structured data.
In this work, we aim to pretrain one graph neural network (GNN) on a varied collection of graphs endowed with rich node features.
We present a general GraphAlign method that can be seamlessly integrated into the existing graph SSL framework.
arXiv Detail & Related papers (2024-06-05T05:22:32Z)
- Hypergraph-enhanced Dual Semi-supervised Graph Classification [14.339207883093204]
We propose a Hypergraph-Enhanced DuAL framework named HEAL for semi-supervised graph classification.
To better explore the higher-order relationships among nodes, we design a hypergraph structure learning module that adaptively learns complex node dependencies.
Based on the learned hypergraph, we introduce a line graph to capture the interaction between hyperedges.
arXiv Detail & Related papers (2024-05-08T02:44:13Z)
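For intuition on the line graph HEAL builds over hyperedges, here is a minimal sketch assuming a plain set-based hypergraph; `hypergraph_line_graph` is a hypothetical helper, and HEAL itself learns the hypergraph rather than taking it as given.

```python
from itertools import combinations

def hypergraph_line_graph(hyperedges):
    """Line graph of a hypergraph: each hyperedge becomes a node, and two
    hyperedges are connected iff they share at least one vertex."""
    edges = []
    for (i, a), (j, b) in combinations(enumerate(map(set, hyperedges)), 2):
        if a & b:                      # overlapping membership -> interaction
            edges.append((i, j))
    return edges

# Three hyperedges over vertices 0..5; hyperedges 0 and 1 share vertex 2.
H = [{0, 1, 2}, {2, 3}, {4, 5}]
print(hypergraph_line_graph(H))        # -> [(0, 1)]
```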
- Parameter-Efficient Tuning Large Language Models for Graph Representation Learning [62.26278815157628]
We introduce Graph-aware Parameter-Efficient Fine-Tuning (GPEFT), a novel approach for efficient graph representation learning.
We use a graph neural network (GNN) to encode structural information from neighboring nodes into a graph prompt.
We validate our approach through comprehensive experiments conducted on 8 different text-rich graphs, observing an average improvement of 2% in hit@1 and Mean Reciprocal Rank (MRR) in link prediction evaluations.
arXiv Detail & Related papers (2024-04-28T18:36:59Z)
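A rough sketch of the graph-prompt idea in the GPEFT summary above, assuming a frozen embedding table as a stand-in for the pretrained LLM; all dimensions and module names are hypothetical.

```python
import torch
import torch.nn as nn

D_GNN, D_LM, SEQ, VOCAB = 64, 128, 16, 1000

# Frozen LM piece (placeholder for a real pretrained model's embedding layer).
tok_emb = nn.Embedding(VOCAB, D_LM)
for p in tok_emb.parameters():
    p.requires_grad = False

# Trainable: a tiny GNN readout and a projection into the LM embedding space.
neighbor_agg = nn.Linear(D_GNN, D_GNN)       # stands in for a GNN layer
to_prompt = nn.Linear(D_GNN, D_LM)           # graph embedding -> prompt token

def build_inputs(neighbor_feats, token_ids):
    # Encode the node's neighborhood into a single graph-prompt vector ...
    g = torch.relu(neighbor_agg(neighbor_feats)).mean(dim=0)   # (D_GNN,)
    prompt = to_prompt(g).unsqueeze(0)                         # (1, D_LM)
    # ... and prepend it to the node text's token embeddings.
    text = tok_emb(token_ids)                                  # (SEQ, D_LM)
    return torch.cat([prompt, text], dim=0)                    # (1+SEQ, D_LM)

inputs = build_inputs(torch.randn(5, D_GNN), torch.randint(0, VOCAB, (SEQ,)))
print(inputs.shape)   # torch.Size([17, 128]) -- ready for the LM's transformer stack
```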
- G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering [61.93058781222079]
We develop a flexible question-answering framework targeting real-world textual graphs.
We introduce the first retrieval-augmented generation (RAG) approach for general textual graphs.
G-Retriever performs RAG over a graph by formulating this task as a Prize-Collecting Steiner Tree optimization problem.
arXiv Detail & Related papers (2024-02-12T13:13:04Z)
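To illustrate the Prize-Collecting Steiner Tree formulation G-Retriever uses for retrieval, here is a toy greedy heuristic, not the paper's solver; the prizes, costs, and `greedy_pcst` helper are all hypothetical.

```python
def greedy_pcst(prizes, costs, root):
    """Toy greedy heuristic for prize-collecting Steiner tree: repeatedly
    attach the frontier node whose prize most exceeds its connection cost.
    (Illustrative only; a real PCST solver is exact or approximation-based.)"""
    tree = {root}
    while True:
        best, best_gain = None, 0.0
        for (u, v), c in costs.items():
            for a, b in ((u, v), (v, u)):
                if a in tree and b not in tree and prizes[b] - c > best_gain:
                    best, best_gain = b, prizes[b] - c
        if best is None:
            return tree
        tree.add(best)

# Node prizes ~ relevance to the question; edge costs keep the subgraph small.
prizes = {0: 0.0, 1: 0.9, 2: 0.1, 3: 0.8}
costs = {(0, 1): 0.3, (1, 2): 0.5, (1, 3): 0.2, (0, 2): 0.9}
print(greedy_pcst(prizes, costs, root=0))   # -> {0, 1, 3}
```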
- GraphGPT: Graph Instruction Tuning for Large Language Models [27.036935149004726]
Graph Neural Networks (GNNs) have evolved to understand graph structures.
To enhance robustness, self-supervised learning (SSL) has become a vital tool for data augmentation.
Our research tackles this by advancing graph model generalization in zero-shot learning environments.
arXiv Detail & Related papers (2023-10-19T06:17:46Z)
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) of a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the fine-tuned LM.
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
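A hedged sketch of the two-stage SimTeG recipe above using Hugging Face `transformers` and `peft`; the model choice, LoRA settings, and the single training step shown are illustrative, not the paper's exact configuration.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, get_peft_model

name = "distilbert-base-uncased"          # any encoder LM works in principle
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=5)

# Stage 1: parameter-efficient fine-tuning (LoRA) on the downstream task.
lora = LoraConfig(r=8, lora_alpha=16, task_type="SEQ_CLS",
                  target_modules=["q_lin", "v_lin"])  # DistilBERT attention names
model = get_peft_model(model, lora)

texts = ["node text one", "node text two"]            # placeholder node texts
labels = torch.tensor([0, 3])
enc = tok(texts, padding=True, return_tensors="pt")
loss = model(**enc, labels=labels).loss               # one illustrative training step
loss.backward()

# Stage 2: frozen forward pass; mean-pool last hidden states as node embeddings.
model.eval()
with torch.no_grad():
    out = model(**enc, output_hidden_states=True)
h = out.hidden_states[-1]                             # (batch, seq, dim)
mask = enc["attention_mask"].unsqueeze(-1)
node_emb = (h * mask).sum(1) / mask.sum(1)            # (batch, dim), feed to any GNN
print(node_emb.shape)
```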
- Compressing Deep Graph Neural Networks via Adversarial Knowledge Distillation [41.00398052556643]
We propose a novel Adversarial Knowledge Distillation framework for graph models named GraphAKD.
The discriminator distinguishes between teacher knowledge and what the student inherits, while the student GNN works as a generator and aims to fool the discriminator.
The results imply that GraphAKD can precisely transfer knowledge from a complicated teacher GNN to a compact student GNN.
arXiv Detail & Related papers (2022-05-24T00:04:43Z)
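A minimal PyTorch sketch of the adversarial game described in the GraphAKD summary, with random tensors standing in for teacher outputs and node features; the architectures are illustrative only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N, D, C = 64, 32, 5
teacher_logits = torch.randn(N, C)                 # frozen teacher outputs (placeholder)
x = torch.randn(N, D)                              # node features (placeholder)

student = nn.Linear(D, C)                          # compact student "GNN"
disc = nn.Sequential(nn.Linear(C, 16), nn.ReLU(), nn.Linear(16, 1))
opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)

for step in range(100):
    s_logits = student(x)
    # Discriminator: tell teacher logits (real) from student logits (fake).
    d_loss = F.binary_cross_entropy_with_logits(disc(teacher_logits), torch.ones(N, 1)) \
           + F.binary_cross_entropy_with_logits(disc(s_logits.detach()), torch.zeros(N, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Student as generator: fool the discriminator into scoring it as "teacher".
    g_loss = F.binary_cross_entropy_with_logits(disc(student(x)), torch.ones(N, 1))
    opt_s.zero_grad(); g_loss.backward(); opt_s.step()
```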
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by the data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
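A toy PyTorch sketch of the anchor-graph contrastive idea above, assuming a random "anchor" adjacency and an InfoNCE-style loss; the specific parameterization is a guess, not the paper's design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N, D = 50, 16
x = torch.randn(N, D)                                  # node features (placeholder)
anchor_adj = (torch.rand(N, N) < 0.1).float()          # "anchor graph" from the data
anchor_adj = ((anchor_adj + anchor_adj.t() + torch.eye(N)) > 0).float()

encoder = nn.Linear(D, D)
metric = nn.Linear(D, D)                               # parameterizes the learned graph
opt = torch.optim.Adam([*encoder.parameters(), *metric.parameters()], lr=1e-3)

for step in range(100):
    z = metric(x)
    learned_adj = torch.softmax(z @ z.t(), dim=-1)     # learned topology, row-normalized
    # Two views: propagate features over the anchor graph and the learned graph.
    h1 = F.normalize(encoder(anchor_adj @ x / anchor_adj.sum(1, keepdim=True)), dim=-1)
    h2 = F.normalize(encoder(learned_adj @ x), dim=-1)
    # InfoNCE: node i's two views should agree; other nodes act as negatives.
    logits = h1 @ h2.t() / 0.1                         # temperature 0.1
    loss = F.cross_entropy(logits, torch.arange(N))
    opt.zero_grad(); loss.backward(); opt.step()
```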
- Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings [53.58077686470096]
We propose an end-to-end graph learning framework, namely Iterative Deep Graph Learning (IDGL) for jointly and iteratively learning graph structure and graph embedding.
Our experiments show that our proposed IDGL models can consistently outperform or match the state-of-the-art baselines.
arXiv Detail & Related papers (2020-06-21T19:49:15Z)
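The iterative structure/embedding loop described above can be caricatured in a few lines of PyTorch; the kNN graph construction and smoothing step below are illustrative stand-ins for IDGL's learned components.

```python
import torch
import torch.nn.functional as F

N, D, K = 40, 16, 5
h = torch.randn(N, D)                           # initial node embeddings (placeholder)

for it in range(3):                             # a few structure/embedding rounds
    # Step 1: infer a sparse graph from current embeddings (kNN on cosine sim).
    z = F.normalize(h, dim=-1)
    topk = (z @ z.t()).topk(K, dim=-1).indices
    adj = torch.zeros(N, N).scatter_(1, topk, 1.0)
    adj = ((adj + adj.t()) > 0).float()
    # Step 2: refine embeddings by smoothing over the inferred graph.
    deg = adj.sum(1, keepdim=True).clamp(min=1)
    h = 0.5 * h + 0.5 * (adj @ h) / deg
```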
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black boxes and lack human-intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
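In the spirit of XGNN's model-level explanations, the toy sketch below greedily grows a small graph that maximizes a frozen (here randomly initialized) GNN's score for a target class; XGNN itself trains a graph generator with reinforcement learning, which this does not reproduce.

```python
import torch
import torch.nn as nn

N_MAX, D, C, TARGET = 6, 8, 3, 1
torch.manual_seed(0)
feat = torch.randn(N_MAX, D)                       # fixed node features (placeholder)
gnn = nn.Sequential(nn.Linear(D, D), nn.ReLU(), nn.Linear(D, C))  # frozen "GNN"

def class_score(adj):
    # One mean-aggregation step, then a graph-level readout and class logit.
    deg = adj.sum(1, keepdim=True).clamp(min=1)
    h = gnn((adj @ feat) / deg)
    return h.mean(0)[TARGET].item()

adj = torch.eye(N_MAX)                             # start from isolated nodes
for _ in range(5):                                 # greedily add the best edge
    best, best_s = None, class_score(adj)
    for i in range(N_MAX):
        for j in range(i + 1, N_MAX):
            if adj[i, j] == 0:
                adj[i, j] = adj[j, i] = 1          # try the edge ...
                s = class_score(adj)
                if s > best_s:
                    best, best_s = (i, j), s
                adj[i, j] = adj[j, i] = 0          # ... then undo it
    if best is None:
        break
    adj[best[0], best[1]] = adj[best[1], best[0]] = 1

print("explanatory edges:", adj.nonzero().tolist())  # pattern the class responds to
```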
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.