A Survey of Pretraining on Graphs: Taxonomy, Methods, and Applications
- URL: http://arxiv.org/abs/2202.07893v1
- Date: Wed, 16 Feb 2022 07:00:52 GMT
- Title: A Survey of Pretraining on Graphs: Taxonomy, Methods, and Applications
- Authors: Jun Xia, Yanqiao Zhu, Yuanqi Du, Stan Z. Li
- Abstract summary: We provide the first comprehensive survey of Pretrained Graph Models (PGMs).
We first present the limitations of graph representation learning and thereby motivate graph pre-training.
Next, we present the applications of PGMs in social recommendation and drug discovery.
- Score: 38.57023440288189
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pretrained Language Models (PLMs) such as BERT have revolutionized the
landscape of Natural Language Processing (NLP). Inspired by their
proliferation, tremendous efforts have been devoted to Pretrained Graph Models
(PGMs). Owing to their powerful architectures, PGMs can capture abundant
knowledge from massive labeled and unlabeled graph data. The knowledge
implicitly encoded in model parameters can benefit various downstream tasks and
help to alleviate several fundamental issues of learning on graphs. In this
paper, we provide the first comprehensive survey of PGMs. We first present
the limitations of graph representation learning and thereby motivate graph
pre-training. Then, we systematically categorize existing PGMs according to a
taxonomy built from four different perspectives. Next, we present the
applications of PGMs in social recommendation and drug discovery. Finally, we
outline several promising research directions that can serve as a guideline for
future research.
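To make the pre-train-then-fine-tune paradigm concrete, here is a minimal sketch in plain PyTorch: a toy one-layer graph encoder is pre-trained with a masked-feature-reconstruction objective, and its parameters are then reused by a downstream classification head. The encoder, objective, and data are illustrative assumptions, not a method prescribed by the survey.

```python
# Illustrative sketch only: a toy graph encoder pre-trained with masked
# feature reconstruction, then reused downstream. The architecture and
# objective are assumptions, not the survey's prescribed method.
import torch
import torch.nn as nn

class TinyGraphEncoder(nn.Module):
    """One mean-aggregation message-passing layer."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, adj, x):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.lin((adj @ x) / deg))  # aggregate, then transform

n, d, h = 100, 16, 32
adj = (torch.rand(n, n) < 0.05).float()
adj = ((adj + adj.T) > 0).float()            # make the random graph undirected
x = torch.randn(n, d)

encoder, decoder = TinyGraphEncoder(d, h), nn.Linear(h, d)
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-2)

# Self-supervised pre-training: hide ~15% of node features, reconstruct them.
for _ in range(100):
    mask = torch.rand(n) < 0.15
    x_in = x.clone()
    x_in[mask] = 0.0
    loss = ((decoder(encoder(adj, x_in))[mask] - x[mask]) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Fine-tuning: the knowledge in `encoder`'s parameters transfers downstream.
head = nn.Linear(h, 4)                       # e.g., 4 node classes
logits = head(encoder(adj, x))               # train `head` (and optionally
print(logits.shape)                          # `encoder`) with labeled nodes
```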
Related papers
- A Survey of Deep Graph Learning under Distribution Shifts: from Graph Out-of-Distribution Generalization to Adaptation [59.14165404728197]
We provide an up-to-date and forward-looking review of deep graph learning under distribution shifts.
Specifically, we cover three primary scenarios: graph OOD generalization, training-time graph OOD adaptation, and test-time graph OOD adaptation.
To provide a better understanding of the literature, we systematically categorize the existing models based on our proposed taxonomy.
arXiv Detail & Related papers (2024-10-25T02:39:56Z)
- A Survey on Self-Supervised Graph Foundation Models: Knowledge-Based Perspective [14.403179370556332]
Graph self-supervised learning (SSL) is now a go-to method for pre-training graph foundation models (GFMs).
We propose a knowledge-based taxonomy, which categorizes self-supervised graph models by the specific graph knowledge utilized.
arXiv Detail & Related papers (2024-03-24T13:10:09Z)
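As one concrete instance of the self-supervised objectives such a taxonomy organizes, the sketch below pairs two edge-dropped views of the same graph and trains node embeddings to agree across views with an NT-Xent-style contrastive loss (in the spirit of GraphCL). The toy encoder, augmentation rate, and temperature are illustrative assumptions.

```python
# Illustrative contrastive graph SSL sketch (GraphCL-style) with a toy
# encoder; augmentations, temperature, and shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def encode(adj, x, lin):
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    return torch.relu(lin((adj @ x) / deg))

def drop_edges(adj, p=0.2):
    keep = (torch.rand_like(adj) > p).float()
    return adj * keep                        # randomly remove edges

n, d, h, tau = 100, 16, 32, 0.5
adj = ((torch.rand(n, n) < 0.05).float() + torch.eye(n)).clamp(max=1.0)
x = torch.randn(n, d)
lin = nn.Linear(d, h)
opt = torch.optim.Adam(lin.parameters(), lr=1e-2)

for _ in range(100):
    z1 = F.normalize(encode(drop_edges(adj), x, lin), dim=1)
    z2 = F.normalize(encode(drop_edges(adj), x, lin), dim=1)
    sim = z1 @ z2.T / tau                    # cross-view similarity matrix
    # Each node's positive is itself in the other view (NT-Xent-style).
    loss = F.cross_entropy(sim, torch.arange(n))
    opt.zero_grad(); loss.backward(); opt.step()
```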
- A Survey of Data-Efficient Graph Learning [16.053913182723143]
We introduce a novel concept of Data-Efficient Graph Learning (DEGL) as a research frontier.
We systematically review recent advances on several key aspects, including self-supervised graph learning, semi-supervised graph learning, and few-shot graph learning.
arXiv Detail & Related papers (2024-02-01T09:28:48Z)
- Graph Domain Adaptation: Challenges, Progress and Prospects [61.9048172631524]
We propose graph domain adaptation (GDA) as an effective knowledge-transfer paradigm across graphs.
GDA introduces a set of task-related graphs as source graphs and adapts the knowledge learned from them to the target graphs.
We outline the research status and challenges, propose a taxonomy, introduce the details of representative works, and discuss the prospects.
arXiv Detail & Related papers (2024-02-01T02:44:32Z)
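A minimal sketch of one possible GDA recipe, under assumptions not taken from the paper: a classifier is trained with source-graph labels while a CORAL-style penalty aligns source and target embedding statistics.

```python
# Illustrative graph domain adaptation sketch: source supervision plus a
# CORAL-style feature-alignment penalty; one of many possible GDA recipes.
import torch
import torch.nn as nn
import torch.nn.functional as F

def encode(adj, x, lin):
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    return torch.relu(lin((adj @ x) / deg))

def coral(zs, zt):
    """Match second-order statistics of source/target embeddings."""
    return ((torch.cov(zs.T) - torch.cov(zt.T)) ** 2).mean()

n, d, h, c = 100, 16, 32, 4
adj_s = ((torch.rand(n, n) < 0.05).float() + torch.eye(n)).clamp(max=1.0)
adj_t = ((torch.rand(n, n) < 0.10).float() + torch.eye(n)).clamp(max=1.0)
x_s, x_t = torch.randn(n, d), torch.randn(n, d) + 0.5   # shifted target
y_s = torch.randint(0, c, (n,))                          # labels on source only

lin, head = nn.Linear(d, h), nn.Linear(h, c)
opt = torch.optim.Adam([*lin.parameters(), *head.parameters()], lr=1e-2)

for _ in range(100):
    zs, zt = encode(adj_s, x_s, lin), encode(adj_t, x_t, lin)
    loss = F.cross_entropy(head(zs), y_s) + coral(zs, zt)
    opt.zero_grad(); loss.backward(); opt.step()
```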
- Towards Graph Foundation Models: A Survey and Beyond [66.37994863159861]
Foundation models have emerged as critical components in a variety of artificial intelligence applications.
The ability of foundation models to generalize and adapt motivates graph machine learning researchers to explore the potential of a new graph learning paradigm.
This article introduces the concept of Graph Foundation Models (GFMs), and offers an exhaustive explanation of their key characteristics and underlying technologies.
arXiv Detail & Related papers (2023-10-18T09:31:21Z)
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly simple approach to textual graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) of a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the fine-tuned LM.
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
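Read literally, the recipe is two steps: parameter-efficient fine-tuning on the node texts for the downstream labels, then a forward pass to harvest last-hidden-state embeddings. Below is a minimal sketch assuming Hugging Face transformers with LoRA from peft; the model name, LoRA settings, and mean pooling are illustrative choices, not the paper's exact configuration.

```python
# Illustrative SimTeG-style sketch: (1) parameter-efficient fine-tuning of an
# LM on node texts, (2) node embeddings from its last hidden states.
# Model choice, LoRA settings, and pooling are assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, get_peft_model

name = "distilbert-base-uncased"            # any encoder LM; illustrative
tok = AutoTokenizer.from_pretrained(name)

# Step 1: supervised PEFT on the downstream node-classification task.
clf = AutoModelForSequenceClassification.from_pretrained(name, num_labels=4)
cfg = LoraConfig(task_type="SEQ_CLS", r=8, lora_alpha=16,
                 target_modules=["q_lin", "v_lin"])      # DistilBERT attention
clf = get_peft_model(clf, cfg)

texts = ["node text ...", "another node ..."]            # one string per node
labels = torch.tensor([0, 1])
batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
opt = torch.optim.AdamW(clf.parameters(), lr=2e-4)
for _ in range(3):                                       # toy training loop
    loss = clf(**batch, labels=labels).loss
    opt.zero_grad(); loss.backward(); opt.step()

# Step 2: embeddings from the fine-tuned LM's last hidden states (mean pool);
# in the SimTeG pipeline these would then feed a downstream GNN.
with torch.no_grad():
    out = clf(**batch, output_hidden_states=True)
    emb = out.hidden_states[-1].mean(dim=1)              # (num_nodes, hidden)
print(emb.shape)
```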
- Graph-Aware Language Model Pre-Training on a Large Graph Corpus Can Help Multiple Graph Applications [38.83545631999851]
We propose a framework of graph-aware language model pre-training on a large graph corpus.
We conduct experiments on Amazon's real internal datasets and large public datasets.
arXiv Detail & Related papers (2023-06-05T04:46:44Z)
- A Survey of Knowledge Graph Reasoning on Graph Types: Static, Dynamic, and Multimodal [57.8455911689554]
Knowledge graph reasoning (KGR) aims to deduce new facts from existing facts based on mined logic rules underlying knowledge graphs (KGs).
It has been proven to significantly benefit the usage of KGs in many AI applications, such as question answering and recommendation systems.
arXiv Detail & Related papers (2022-12-12T08:40:04Z)
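The rule-based deduction described here can be made concrete in a few lines of plain Python: given a set of triples and one mined composition rule, forward chaining derives new facts. The toy KG and rule are illustrative assumptions.

```python
# Illustrative rule-based KG reasoning: forward chaining with one mined
# composition rule. Facts and the rule are toy assumptions.
facts = {
    ("alice", "born_in", "paris"),
    ("paris", "located_in", "france"),
    ("bob", "born_in", "lyon"),
    ("lyon", "located_in", "france"),
}

# Rule: born_in(x, y) AND located_in(y, z) => citizen_of(x, z)
rule = (("born_in", "located_in"), "citizen_of")

def forward_chain(facts, rule):
    (r1, r2), head = rule
    derived = set()
    for x, p, y in facts:
        if p != r1:
            continue
        for y2, q, z in facts:
            if q == r2 and y2 == y:
                derived.add((x, head, z))    # deduce a new fact
    return derived

print(sorted(forward_chain(facts, rule)))
# [('alice', 'citizen_of', 'france'), ('bob', 'citizen_of', 'france')]
```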
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.