AutoGCL: Automated Graph Contrastive Learning via Learnable View Generators
- URL: http://arxiv.org/abs/2109.10259v1
- Date: Tue, 21 Sep 2021 15:34:11 GMT
- Title: AutoGCL: Automated Graph Contrastive Learning via Learnable View Generators
- Authors: Yihang Yin, Qingzhong Wang, Siyu Huang, Haoyi Xiong, Xiang Zhang
- Abstract summary: We propose a novel framework named Automated Graph Contrastive Learning (AutoGCL) in this paper.
AutoGCL employs a set of learnable graph view generators orchestrated by an auto augmentation strategy.
Experiments on semi-supervised learning, unsupervised learning, and transfer learning demonstrate the superiority of our framework over state-of-the-art methods in graph contrastive learning.
- Score: 22.59182542071303
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrastive learning has been widely applied to graph representation
learning, where the view generators play a vital role in generating effective
contrastive samples. Most of the existing contrastive learning methods employ
pre-defined view generation methods, e.g., node drop or edge perturbation,
which usually cannot adapt to input data or preserve the original semantic
structures well. To address this issue, we propose a novel framework named
Automated Graph Contrastive Learning (AutoGCL) in this paper. Specifically,
AutoGCL employs a set of learnable graph view generators orchestrated by an
auto augmentation strategy, where each graph view generator learns a
probability distribution over graphs conditioned on the input. While the graph
view generators in AutoGCL preserve the most representative structures of the
original graph when generating each contrastive sample, the auto augmentation
strategy learns policies that introduce adequate augmentation variance across
the whole contrastive learning procedure. Furthermore, AutoGCL adopts a joint training
strategy to train the learnable view generators, the graph encoder, and the
classifier in an end-to-end manner, resulting in topological heterogeneity yet
semantic similarity in the generation of contrastive samples. Extensive
experiments on semi-supervised learning, unsupervised learning, and transfer
learning demonstrate the superiority of our AutoGCL framework over
state-of-the-art methods in graph contrastive learning. In addition, the
visualization results further confirm that the learnable view generators
deliver more compact and semantically meaningful contrastive samples than
existing view generation methods.
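To make the core idea concrete, below is a minimal, hypothetical sketch of a learnable node-level view generator in the spirit of the abstract. The MLP predictor, the two-action (keep/drop) space, and all sizes are illustrative assumptions rather than the authors' exact architecture; only the idea of sampling a differentiable view from a learned, input-conditioned distribution (here via Gumbel-softmax) is taken from the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableViewGenerator(nn.Module):
    """Predicts per-node keep/drop probabilities and samples a
    differentiable node mask via straight-through Gumbel-softmax."""
    def __init__(self, in_dim: int, hidden_dim: int = 64):
        super().__init__()
        # Hypothetical MLP stand-in for a GNN-based generator.
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2),  # logits for (keep, drop) per node
        )

    def forward(self, x: torch.Tensor, adj: torch.Tensor):
        # x: (N, d) node features; adj: (N, N) dense adjacency matrix.
        logits = self.mlp(x)                                   # (N, 2)
        sample = F.gumbel_softmax(logits, tau=0.5, hard=True)  # one-hot rows
        keep = sample[:, :1]                                   # (N, 1), 0./1.
        x_view = x * keep                  # zero out dropped node features
        adj_view = adj * keep * keep.t()   # remove edges touching dropped nodes
        return x_view, adj_view

# Two independently parameterized generators yield two views of one graph.
if __name__ == "__main__":
    n_nodes, feat_dim = 8, 16
    x = torch.randn(n_nodes, feat_dim)
    adj = (torch.rand(n_nodes, n_nodes) > 0.7).float()
    gen_a, gen_b = LearnableViewGenerator(feat_dim), LearnableViewGenerator(feat_dim)
    view_a, view_b = gen_a(x, adj), gen_b(x, adj)
    print(view_a[0].shape, view_b[1].shape)  # (8, 16) and (8, 8)
```

In the full framework described above, two such generators would be trained jointly with the graph encoder and classifier, end to end, so that the contrastive objective drives topologically different yet semantically similar views.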
Related papers
- Disentangled Generative Graph Representation Learning [51.59824683232925]
This paper introduces DiGGR (Disentangled Generative Graph Representation Learning), a self-supervised learning framework.
It aims to learn latent disentangled factors and utilize them to guide graph mask modeling.
Experiments on 11 public datasets for two different graph learning tasks demonstrate that DiGGR consistently outperforms many previous self-supervised methods.
arXiv Detail & Related papers (2024-08-24T05:13:02Z)
- Generative-Enhanced Heterogeneous Graph Contrastive Learning [11.118517297006894]
Heterogeneous Graphs (HGs) can effectively model complex real-world relationships through multiple types of nodes and edges.
In recent years, inspired by self-supervised learning, contrastive Heterogeneous Graph Neural Networks (HGNNs) have shown great potential by utilizing data augmentation and contrastive discriminators for downstream tasks.
We propose a novel framework, Generative-Enhanced Heterogeneous Graph Contrastive Learning (GHGCL).
arXiv Detail & Related papers (2024-04-03T15:31:18Z)
- Hybrid Augmented Automated Graph Contrastive Learning [3.785553471764994]
We propose a framework called Hybrid Augmented Automated Graph Contrastive Learning (HAGCL).
HAGCL consists of a feature-level learnable view generator and an edge-level learnable view generator.
This ensures that the most semantically meaningful structures, in terms of both features and topology, are learned.
arXiv Detail & Related papers (2023-03-24T03:26:20Z)
- Self-supervised Semi-implicit Graph Variational Auto-encoders with Masking [18.950919307926824]
We propose the SeeGera model, which builds on the family of self-supervised variational graph auto-encoders (VGAE).
SeeGera co-embeds both nodes and features in the encoder and reconstructs both links and features in the decoder.
We conduct extensive experiments comparing SeeGera with 9 other state-of-the-art competitors.
arXiv Detail & Related papers (2023-01-29T15:00:43Z)
- GraphLearner: Graph Node Clustering with Fully Learnable Augmentation [76.63963385662426]
Contrastive deep graph clustering (CDGC) leverages the power of contrastive learning to group nodes into different clusters.
We propose Graph Node Clustering with Fully Learnable Augmentation, termed GraphLearner.
It introduces learnable augmentors to generate high-quality and task-specific augmented samples for CDGC.
arXiv Detail & Related papers (2022-12-07T10:19:39Z)
- Unifying Graph Contrastive Learning with Flexible Contextual Scopes [57.86762576319638]
We present a self-supervised learning method termed Unifying Graph Contrastive Learning with Flexible Contextual Scopes (UGCL for short).
Our algorithm builds flexible contextual representations with contextual scopes by controlling the power of the adjacency matrix.
Based on representations from both local and contextual scopes, UGCL optimises a very simple contrastive loss function for graph representation learning.
arXiv Detail & Related papers (2022-10-17T07:16:17Z)
- Graph Contrastive Learning with Personalized Augmentation [17.714437631216516]
Graph contrastive learning (GCL) has emerged as an effective tool for learning unsupervised representations of graphs.
We propose a principled framework, termed Graph contrastive learning with Personalized Augmentation (GPA).
GPA infers tailored augmentation strategies for each graph based on its topology and node attributes via a learnable augmentation selector.
arXiv Detail & Related papers (2022-09-14T11:37:48Z)
- Geometry Contrastive Learning on Heterogeneous Graphs [50.58523799455101]
This paper proposes a novel self-supervised learning method, termed Geometry Contrastive Learning (GCL).
GCL views a heterogeneous graph from Euclidean and hyperbolic perspectives simultaneously, aiming to combine the ability to model rich semantics with the ability to model complex structures.
Extensive experiments on four benchmark datasets show that the proposed approach outperforms strong baselines.
arXiv Detail & Related papers (2022-06-25T03:54:53Z)
- Let Invariant Rationale Discovery Inspire Graph Contrastive Learning [98.10268114789775]
We argue that a high-performing augmentation should preserve the salient semantics of anchor graphs regarding instance-discrimination.
We propose a new framework, Rationale-aware Graph Contrastive Learning (RGCL).
RGCL uses a rationale generator to reveal salient features about graph instance-discrimination as the rationale, and then creates rationale-aware views for contrastive learning.
arXiv Detail & Related papers (2022-06-16T01:28:40Z)
- GraphMAE: Self-Supervised Masked Graph Autoencoders [52.06140191214428]
We present GraphMAE, a masked graph autoencoder that mitigates the issues of generative self-supervised graph learning.
We conduct extensive experiments on 21 public datasets for three different graph learning tasks.
The results show that GraphMAE, a simple graph autoencoder with careful designs, consistently outperforms both contrastive and generative state-of-the-art baselines; a minimal sketch of this mask-and-reconstruct idea follows the list below.
arXiv Detail & Related papers (2022-05-22T11:57:08Z)
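To ground the mask-and-reconstruct pattern mentioned in the GraphMAE entry above, here is a minimal, hypothetical sketch. The MLP encoder/decoder, the mask rate, and the plain cosine error are illustrative simplifications chosen here (the actual paper uses GNN encoders and a scaled cosine error); nothing below should be read as GraphMAE's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedGraphAutoencoder(nn.Module):
    """Masks a fraction of node features with a learnable token and
    reconstructs the original features of the masked nodes."""
    def __init__(self, in_dim: int, hidden_dim: int = 64, mask_rate: float = 0.5):
        super().__init__()
        self.mask_rate = mask_rate
        self.mask_token = nn.Parameter(torch.zeros(in_dim))  # learnable [MASK]
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Linear(hidden_dim, in_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Randomly select nodes to mask and replace their features.
        mask = torch.rand(x.size(0)) < self.mask_rate
        x_masked = x.clone()
        x_masked[mask] = self.mask_token
        recon = self.decoder(self.encoder(x_masked))
        # Cosine reconstruction error, computed on masked nodes only.
        return (1 - F.cosine_similarity(recon[mask], x[mask], dim=-1)).mean()

if __name__ == "__main__":
    x = torch.randn(32, 16)  # 32 nodes, 16-dim features
    model = MaskedGraphAutoencoder(16)
    print(model(x).item())   # scalar reconstruction loss
```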