Unifying Graph Contrastive Learning via Graph Message Augmentation
- URL: http://arxiv.org/abs/2401.03638v1
- Date: Mon, 8 Jan 2024 02:49:16 GMT
- Title: Unifying Graph Contrastive Learning via Graph Message Augmentation
- Authors: Ziyan Zhang, Bo Jiang, Jin Tang and Bin Luo
- Abstract summary: Graph Data Augmentation (GDA) is an important issue for graph contrastive learning.
To our knowledge, there is still no universal and effective augmentor suitable for different types of graph data.
We propose a novel Graph Message Augmentation (GMA), a universal scheme for reformulating many existing GDAs.
We then propose a unified graph contrastive learning approach, termed Graph Message Contrastive Learning (GMCL), that employs attribution-guided universal GMA for graph contrastive learning.
- Score: 22.66138581419178
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph contrastive learning is usually performed by first conducting Graph
Data Augmentation (GDA) and then employing a contrastive learning pipeline to
train GNNs. GDA is thus an important issue for graph contrastive
learning. Various GDAs have been developed recently, mainly involving
dropping or perturbing edges, nodes, node attributes and edge attributes.
However, to our knowledge, there is still no universal and effective augmentor
that is suitable for different types of graph data. To address this issue, in
this paper, we first introduce the graph message representation of graph data.
Based on it, we then propose a novel Graph Message Augmentation (GMA), a
universal scheme for reformulating many existing GDAs. The proposed unified GMA
not only offers a new perspective for understanding many existing GDAs but also
provides a universal and more effective graph data augmentation for graph
self-supervised learning tasks. Moreover, GMA introduces an easy way to
implement the mixup augmentor, which is natural for images but usually
challenging for graphs. Based on the proposed GMA, we then propose a unified
graph contrastive learning approach, termed Graph Message Contrastive Learning (GMCL),
that employs attribution-guided universal GMA for graph contrastive learning.
Experiments on many graph learning tasks demonstrate the effectiveness and
benefits of the proposed GMA and GMCL approaches.
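To illustrate the message-level view described in the abstract, the sketch below shows how message-level augmentation could look in PyTorch. It is not the paper's implementation: the graph representation (node features `x`, a 2 x m `edge_index`), the helpers `edge_messages`, `drop_messages`, `perturb_messages`, `mixup_messages`, and `aggregate`, and all hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch of message-level graph augmentation, assuming a simple
# edge-list graph: node features x (n x d) and a directed edge_index (2 x m).
# Names and formulations are illustrative, not taken from the paper.
import torch

def edge_messages(x, edge_index):
    """Form one 'message' per directed edge by gathering source-node features.
    Richer formulations may also use edge attributes or the target node."""
    src, _ = edge_index
    return x[src]                          # (m, d): one message vector per edge

def drop_messages(msgs, p=0.2):
    """Message dropping: zero out a random subset of edge messages.
    Edge dropping and attribute masking can both be cast this way."""
    mask = (torch.rand(msgs.size(0), device=msgs.device) > p).float().unsqueeze(-1)
    return msgs * mask

def perturb_messages(msgs, sigma=0.1):
    """Message perturbation: add small Gaussian noise to every message."""
    return msgs + sigma * torch.randn_like(msgs)

def mixup_messages(msgs, alpha=0.4):
    """Message mixup: convexly combine each message with a randomly paired one.
    Working on messages makes mixup simple, unlike mixing whole graphs."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(msgs.size(0), device=msgs.device)
    return lam * msgs + (1.0 - lam) * msgs[perm]

def aggregate(msgs, edge_index, num_nodes):
    """Mean-aggregate (possibly augmented) messages onto target nodes,
    i.e. one GNN-style propagation hop."""
    _, dst = edge_index
    out = torch.zeros(num_nodes, msgs.size(-1), device=msgs.device)
    out.index_add_(0, dst, msgs)
    deg = torch.zeros(num_nodes, device=msgs.device).index_add_(
        0, dst, torch.ones(dst.size(0), device=msgs.device)).clamp(min=1)
    return out / deg.unsqueeze(-1)

# Usage: build two augmented views of the same graph for a contrastive objective.
n, d = 6, 8
x = torch.randn(n, d)
edge_index = torch.tensor([[0, 1, 2, 3, 4, 5],
                           [1, 2, 3, 4, 5, 0]])

msgs = edge_messages(x, edge_index)
view1 = aggregate(drop_messages(msgs), edge_index, n)
view2 = aggregate(mixup_messages(perturb_messages(msgs)), edge_index, n)
# view1 / view2 would then be encoded and pulled together by an InfoNCE-style loss.
```

The appeal of this framing is that classical GDAs (edge dropping, node dropping, attribute masking) all reduce to operations on messages, so a single message-level augmentor covers them, and mixup becomes a plain convex combination of message vectors.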
Related papers
- A Unified Graph Selective Prompt Learning for Graph Neural Networks [20.595782116049428]
Graph Prompt Feature (GPF) has achieved remarkable success in adapting pre-trained models for Graph Neural Networks (GNNs).
We propose a new unified Graph Selective Prompt Feature learning (GSPF) for GNN fine-tuning.
arXiv Detail & Related papers (2024-06-15T04:36:40Z) - G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering [61.93058781222079]
We develop a flexible question-answering framework targeting real-world textual graphs.
We introduce the first retrieval-augmented generation (RAG) approach for general textual graphs.
G-Retriever performs RAG over a graph by formulating this task as a Prize-Collecting Steiner Tree optimization problem.
arXiv Detail & Related papers (2024-02-12T13:13:04Z) - GSINA: Improving Subgraph Extraction for Graph Invariant Learning via
Graph Sinkhorn Attention [52.67633391931959]
Graph invariant learning (GIL) has been an effective approach to discovering the invariant relationships between graph data and its labels.
We propose a novel graph attention mechanism called Graph Sinkhorn Attention (GSINA).
GSINA is able to obtain meaningful, differentiable invariant subgraphs with controllable sparsity and softness.
arXiv Detail & Related papers (2024-02-11T12:57:16Z) - Graph Domain Adaptation: Challenges, Progress and Prospects [61.9048172631524]
We propose graph domain adaptation as an effective knowledge-transfer paradigm across graphs.
GDA introduces a set of task-related graphs as source graphs and adapts the knowledge learnt from them to the target graphs.
We outline the research status and challenges, propose a taxonomy, introduce the details of representative works, and discuss the prospects.
arXiv Detail & Related papers (2024-02-01T02:44:32Z) - MGNet: Learning Correspondences via Multiple Graphs [78.0117352211091]
Correspondence learning aims to find correct matches within an initial correspondence set that has an uneven distribution and a low inlier rate.
Recent advances usually use graph neural networks (GNNs) to build a single type of graph or stack local graphs into the global one to complete the task.
We propose MGNet to effectively combine multiple complementary graphs.
arXiv Detail & Related papers (2024-01-10T07:58:44Z) - Train Your Own GNN Teacher: Graph-Aware Distillation on Textual Graphs [37.48313839125563]
We develop a Graph-Aware Distillation framework (GRAD) to encode graph structures into an LM for graph-free, fast inference.
Different from conventional knowledge distillation, GRAD jointly optimizes a GNN teacher and a graph-free student over the graph's nodes via a shared LM.
Experiments in eight node classification benchmarks in both transductive and inductive settings showcase GRAD's superiority over existing distillation approaches for textual graphs.
arXiv Detail & Related papers (2023-04-20T22:34:20Z) - Graph Contrastive Learning with Personalized Augmentation [17.714437631216516]
Graph contrastive learning (GCL) has emerged as an effective tool for learning unsupervised representations of graphs.
We propose a principled framework, termed Graph contrastive learning with Personalized Augmentation (GPA).
GPA infers tailored augmentation strategies for each graph based on its topology and node attributes via a learnable augmentation selector.
arXiv Detail & Related papers (2022-09-14T11:37:48Z) - GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z) - Data Augmentation for Deep Graph Learning: A Survey [66.04015540536027]
We first propose a taxonomy for graph data augmentation and then provide a structured review by categorizing the related work based on the augmented information modalities.
Focusing on the two challenging problems in DGL (i.e., optimal graph learning and low-resource graph learning), we also discuss and review the existing learning paradigms which are based on graph data augmentation.
arXiv Detail & Related papers (2022-02-16T18:30:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.