Graph Contrastive Learning with Personalized Augmentation
- URL: http://arxiv.org/abs/2209.06560v2
- Date: Thu, 15 Sep 2022 14:10:19 GMT
- Title: Graph Contrastive Learning with Personalized Augmentation
- Authors: Xin Zhang, Qiaoyu Tan, Xiao Huang, Bo Li
- Abstract summary: Graph contrastive learning (GCL) has emerged as an effective tool for learning unsupervised representations of graphs.
We propose a principled framework, termed Graph contrastive learning with Personalized Augmentation (GPA).
GPA infers tailored augmentation strategies for each graph based on its topology and node attributes via a learnable augmentation selector.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph contrastive learning (GCL) has emerged as an effective tool for
learning unsupervised representations of graphs. The key idea is to maximize
the agreement between two augmented views of each graph via data augmentation.
Existing GCL models mainly focus on applying \textit{identical augmentation
strategies} for all graphs within a given scenario. However, real-world graphs
are often not monomorphic but abstractions of diverse natures. Even within the
same scenario (e.g., macromolecules and online communities), different graphs
might need diverse augmentations to perform effective GCL. Thus, blindly
augmenting all graphs without considering their individual characteristics may
undermine the performance of GCL methods. To deal with this, we propose the first
principled framework, termed \textit{G}raph contrastive learning with
\textit{P}ersonalized \textit{A}ugmentation (GPA), to advance conventional GCL
by allowing each graph to choose its own suitable augmentation operations. In
essence, GPA infers tailored augmentation strategies for each graph based on
its topology and node attributes via a learnable augmentation selector, which
is a plug-and-play module and can be effectively trained with downstream GCL
models end-to-end. Extensive experiments across 11 benchmark graphs from
different types and domains demonstrate the superiority of GPA against
state-of-the-art competitors. Moreover, by visualizing the learned augmentation
distributions across different types of datasets, we show that GPA can
effectively identify the most suitable augmentations for each graph based on
its characteristics.
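As a rough illustration of the idea, the sketch below pairs a per-graph augmentation selector with a standard NT-Xent contrastive loss over two augmented views. Everything here is a simplified assumption for illustration: the two-operator pool, the statistics-based selector, and the one-layer toy encoder are stand-ins, not GPA's actual implementation, which learns the selector end-to-end with the GCL model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical augmentation operators (GPA's real pool is richer).
def edge_drop(adj, x, p=0.2):
    keep = np.triu(rng.random(adj.shape) > p, k=1)
    keep = keep + keep.T  # symmetrize the kept-edge mask
    return adj * keep, x

def feature_mask(adj, x, p=0.2):
    mask = rng.random(x.shape[1]) > p  # mask whole feature columns
    return adj, x * mask

OPS = [edge_drop, feature_mask]

def select_op(adj, x):
    # Stand-in selector: scores each operator from a cheap graph
    # statistic (edge density). GPA instead learns this mapping from
    # topology and node attributes end-to-end.
    density = adj.sum() / (len(adj) ** 2)
    scores = np.array([density, 1.0 - density])
    return OPS[int(np.argmax(scores))]

def encode(adj, x, w):
    # One-layer neighborhood aggregation + mean pooling (placeholder).
    h = np.tanh((adj + np.eye(len(adj))) @ x @ w)
    return h.mean(axis=0)

def nt_xent(z1, z2, tau=0.5):
    # Contrastive loss: align each graph with its own second view.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = (z1 @ z2.T) / tau
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logp))

# Toy batch: 4 random graphs with 8 nodes and 5 features each.
w = rng.standard_normal((5, 3))
views1, views2 = [], []
for _ in range(4):
    adj = np.triu((rng.random((8, 8)) < 0.3).astype(float), 1)
    adj = adj + adj.T
    x = rng.standard_normal((8, 5))
    op = select_op(adj, x)  # personalized choice per graph
    a1, x1 = op(adj, x)
    a2, x2 = op(adj, x)
    views1.append(encode(a1, x1, w))
    views2.append(encode(a2, x2, w))

loss = nt_xent(np.stack(views1), np.stack(views2))
```

In a real GCL pipeline the selector's choice would be relaxed (e.g., with a differentiable sampling trick) so that gradients from the contrastive loss can train it jointly with the encoder.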
Related papers
- Explanation-Preserving Augmentation for Semi-Supervised Graph Representation Learning
Graph representation learning (GRL) has emerged as an effective technique, achieving performance improvements in a wide range of tasks such as node classification and graph classification.
We propose a novel method, Explanation-Preserving Augmentation (EPA), that leverages graph explanation techniques for generating augmented graphs.
EPA first uses a small number of labels to train a graph explainer to infer the sub-structures (explanations) that are most relevant to a graph's semantics.
arXiv Detail & Related papers (2024-10-16T15:18:03Z)
- Graph Contrastive Learning with Cohesive Subgraph Awareness
Graph contrastive learning (GCL) has emerged as a state-of-the-art strategy for learning representations of diverse graphs.
We argue that an awareness of subgraphs during the graph augmentation and learning processes has the potential to enhance GCL performance.
We propose a novel unified framework called CTAug, to seamlessly integrate cohesion awareness into various existing GCL mechanisms.
arXiv Detail & Related papers (2024-01-31T03:51:30Z)
- Hybrid Augmented Automated Graph Contrastive Learning
We propose a framework called Hybrid Augmented Automated Graph Contrastive Learning (HAGCL)
HAGCL consists of a feature-level learnable view generator and an edge-level learnable view generator.
It ensures that the most semantically meaningful structures, in terms of both features and topology, are learned.
arXiv Detail & Related papers (2023-03-24T03:26:20Z)
- Graph Soft-Contrastive Learning via Neighborhood Ranking
Graph Contrastive Learning (GCL) has emerged as a promising approach in the realm of graph self-supervised learning.
We propose a novel paradigm, Graph Soft-Contrastive Learning (GSCL)
GSCL facilitates GCL via neighborhood ranking, avoiding the need to specify absolutely similar pairs.
arXiv Detail & Related papers (2022-09-28T09:52:15Z)
- GraphCoCo: Graph Complementary Contrastive Learning
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- Towards Graph Self-Supervised Learning with Contrastive Adjusted Zooming
We introduce a novel self-supervised graph representation learning algorithm via Graph Contrastive Adjusted Zooming.
This mechanism enables G-Zoom to explore and extract self-supervision signals from a graph from multiple scales.
We have conducted extensive experiments on real-world datasets, and the results demonstrate that our proposed model outperforms state-of-the-art methods consistently.
arXiv Detail & Related papers (2021-11-20T22:45:53Z)
- Graph Contrastive Learning Automated
Graph contrastive learning (GraphCL) has emerged with promising representation learning performance.
The effectiveness of GraphCL hinges on ad-hoc data augmentations, which have to be manually picked per dataset.
This paper proposes a unified bi-level optimization framework to automatically, adaptively and dynamically select data augmentations when performing GraphCL on specific graph data.
arXiv Detail & Related papers (2021-06-10T16:35:27Z)
- Diversified Multiscale Graph Learning with Graph Self-Correction
We propose a diversified multiscale graph learning model equipped with two core ingredients.
A graph self-correction (GSC) mechanism to generate informative embedded graphs, and a diversity boosting regularizer (DBR) to achieve a comprehensive characterization of the input graph.
Experiments on popular graph classification benchmarks show that the proposed GSC mechanism leads to significant improvements over state-of-the-art graph pooling methods.
arXiv Detail & Related papers (2021-03-17T16:22:24Z)
- Graph Contrastive Learning with Augmentations
We propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
We show that our framework can produce graph representations with similar or better generalizability, transferability, and robustness compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-10-22T20:13:43Z)
- Unsupervised Graph Embedding via Adaptive Graph Learning
Graph autoencoders (GAEs) are powerful tools in representation learning for graph embedding.
In this paper, two novel unsupervised graph embedding methods, unsupervised graph embedding via adaptive graph learning (BAGE) and unsupervised graph embedding via variational adaptive graph learning (VBAGE) are proposed.
Experimental studies on several datasets validate our design and demonstrate that our methods outperform baselines by a wide margin in node clustering, node classification, and graph visualization tasks.
arXiv Detail & Related papers (2020-03-10T02:33:14Z)
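Several of the papers listed above, notably "Graph Contrastive Learning Automated", cast augmentation choice as a bi-level optimization: the lower level trains the encoder to minimize the contrastive loss, while the upper level adapts the sampling distribution over augmentation pairs. The following minimal numerical sketch shows only the upper-level update, with assumed placeholder loss values and a simplified min-max rule that shifts probability mass toward the currently hardest augmentation pairs; none of the names or numbers come from the papers themselves.

```python
import numpy as np

# Hypothetical pool of augmentation pairs.
AUG_PAIRS = ["edge-drop + edge-drop", "edge-drop + feat-mask", "feat-mask + feat-mask"]

# Stand-in for the contrastive loss the encoder currently attains under
# each pair (assumed constants; in practice these come from training steps).
pair_losses = np.array([0.9, 0.6, 0.8])

# Upper-level update: raise the logit of each pair in proportion to how
# much harder it is than average, so sampling concentrates on hard pairs.
logits = np.zeros(len(AUG_PAIRS))
lr = 0.5
for _ in range(100):
    logits += lr * (pair_losses - pair_losses.mean())

probs = np.exp(logits) / np.exp(logits).sum()
hardest = AUG_PAIRS[int(np.argmax(probs))]
```

In the full bi-level scheme the `pair_losses` would be re-evaluated after every encoder update, so the distribution tracks which augmentations are hardest for the current representation rather than a fixed ranking.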
This list is automatically generated from the titles and abstracts of the papers on this site.