Deep Graph Contrastive Representation Learning
- URL: http://arxiv.org/abs/2006.04131v2
- Date: Mon, 13 Jul 2020 16:32:20 GMT
- Title: Deep Graph Contrastive Representation Learning
- Authors: Yanqiao Zhu, Yichen Xu, Feng Yu, Qiang Liu, Shu Wu, Liang Wang
- Abstract summary: We propose a novel framework for unsupervised graph representation learning by leveraging a contrastive objective at the node level.
Specifically, we generate two graph views by corruption and learn node representations by maximizing the agreement of node representations in these two views.
We perform empirical experiments on both transductive and inductive learning tasks using a variety of real-world datasets.
- Score: 23.37786673825192
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph representation learning has become fundamental for analyzing
graph-structured data. Inspired by the recent success of contrastive methods,
in this paper we propose a novel framework for unsupervised graph
representation learning that leverages a contrastive objective at the node
level. Specifically, we generate two graph views by corruption and learn node
representations by maximizing the agreement between node representations in
these two views. To provide diverse node contexts for the contrastive
objective, we propose a hybrid scheme for generating graph views at both the
structure and attribute levels. In addition, we provide theoretical
justification for our motivation from two perspectives: mutual information and
the classical triplet loss. We perform empirical experiments on both
transductive and inductive learning tasks using a variety of real-world
datasets. Experimental results demonstrate that, despite its simplicity, our
proposed method consistently outperforms existing state-of-the-art methods by
large margins. Moreover, our unsupervised method even surpasses its supervised
counterparts on transductive tasks, demonstrating its great potential in
real-world applications.
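Below is a minimal PyTorch sketch of the scheme described in the abstract: structure-level corruption (random edge removal) and attribute-level corruption (random feature masking) produce two views, a shared GCN encoder embeds both, and a node-level contrastive loss maximizes cross-view agreement. All names and hyperparameters are illustrative, and the loss is simplified (the paper's objective also draws negatives from within each view); this is not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def drop_edges(adj, p):
    """Structure-level corruption: remove each undirected edge with prob. p."""
    mask = (torch.rand_like(adj) > p).float()
    mask = torch.triu(mask, diagonal=1)
    return adj * (mask + mask.T)

def mask_features(x, p):
    """Attribute-level corruption: zero out each feature dimension with prob. p."""
    keep = (torch.rand(x.size(1)) > p).float()
    return x * keep

def gcn_layer(adj, x, weight):
    """One GCN propagation step with self-loops and symmetric normalization."""
    a = adj + torch.eye(adj.size(0))
    d = a.sum(dim=1).pow(-0.5)
    return F.relu((d.unsqueeze(1) * a * d.unsqueeze(0)) @ x @ weight)

def contrastive_loss(z1, z2, tau=0.5):
    """Cross-view node-level objective: z1[i] and z2[i] form the positive
    pair; all other nodes in the opposite view act as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / tau
    return F.cross_entropy(logits, torch.arange(z1.size(0)))

# Toy run: a random graph with 100 nodes and 32-dimensional features.
n, d_in, d_out = 100, 32, 16
adj = (torch.rand(n, n) < 0.05).float()
adj = torch.triu(adj, diagonal=1); adj = adj + adj.T
x = torch.randn(n, d_in)
w = torch.randn(d_in, d_out, requires_grad=True)

z1 = gcn_layer(drop_edges(adj, 0.2), mask_features(x, 0.2), w)
z2 = gcn_layer(drop_edges(adj, 0.2), mask_features(x, 0.2), w)
loss = contrastive_loss(z1, z2)
loss.backward()  # in practice, step an optimizer over the encoder weights
```

On the theoretical side, losses of this InfoNCE-like form are known to lower-bound the mutual information between the two views' representations; with N positive pairs, the standard bound reads roughly

\[ I(\mathbf{Z}^{(1)}; \mathbf{Z}^{(2)}) \;\ge\; \log N - \mathcal{L}, \]

which is the flavor of the mutual-information justification mentioned above (the paper's exact statement may differ).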
Related papers
- GPS: Graph Contrastive Learning via Multi-scale Augmented Views from Adversarial Pooling [23.450755275125577]
Self-supervised graph representation learning has recently shown considerable promise in a range of fields, including bioinformatics and social networks.
Motivated by the fact that graph pooling can adaptively coarsen a graph while removing redundancy, we rethink graph pooling and leverage it to automatically generate multi-scale positive views.
We present this approach as Graph Pooling ContraSt (GPS), as sketched below.
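As a rough illustration of pooling-generated views, the sketch below uses simple score-based top-k pooling (a common pooling design; the names `topk_pool_view`, `score_w`, and `ratio` are hypothetical, and GPS's adversarial pooling is more involved) to coarsen a graph into a smaller positive view:

```python
import torch

def topk_pool_view(adj, x, score_w, ratio=0.5):
    """Coarsen (adj, x) into a positive view: score nodes, keep the top
    fraction, gate their features, and take the induced subgraph."""
    scores = torch.tanh(x @ score_w)            # (n,) scores, score_w: (dim,)
    k = max(1, int(ratio * x.size(0)))
    idx = scores.topk(k).indices
    x_pool = x[idx] * scores[idx].unsqueeze(1)  # gate kept features by score
    adj_pool = adj[idx][:, idx]                 # induced subgraph
    return adj_pool, x_pool

# Different `ratio` values yield multi-scale views of the same graph,
# which can then be contrasted against the original at the graph level.
```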
arXiv Detail & Related papers (2024-01-29T10:00:53Z)
- Contrastive Disentangled Learning on Graph for Node Classification [11.678287036601564]
We propose a novel framework for contrastive disentangled learning on graphs, employing a disentangled graph encoder and two carefully crafted self-supervision signals.
Specifically, we introduce a disentangled graph encoder that encourages the framework to distinguish the various latent factors corresponding to underlying semantic information.
To overcome the heavy reliance on labels, we design two self-supervision signals, namely node specificity and channel independence, which capture informative knowledge without the need for labeled data.
arXiv Detail & Related papers (2023-06-20T07:25:14Z)
- DyTed: Disentangled Representation Learning for Discrete-time Dynamic Graph [59.583555454424]
We propose a novel disenTangled representation learning framework for discrete-time Dynamic graphs, namely DyTed.
We specially design a temporal-clips contrastive learning task together with a structure contrastive learning task to effectively identify the time-invariant and time-varying representations, respectively.
arXiv Detail & Related papers (2022-10-19T14:34:12Z)
- Geometry Contrastive Learning on Heterogeneous Graphs [50.58523799455101]
This paper proposes a novel self-supervised learning method, termed Geometry Contrastive Learning (GCL).
GCL views a heterogeneous graph from Euclidean and hyperbolic perspectives simultaneously, aiming to combine the strengths of modeling rich semantics and complex structures.
Extensive experiments on four benchmark datasets show that the proposed approach outperforms strong baselines.
arXiv Detail & Related papers (2022-06-25T03:54:53Z)
- Group Contrastive Self-Supervised Learning on Graphs [101.45974132613293]
We study self-supervised learning on graphs using contrastive methods.
We argue that contrasting graphs in multiple subspaces enables graph encoders to capture richer characteristics, as illustrated below.
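One way to read "contrasting graphs in multiple subspaces" is to project embeddings through several heads and apply a contrastive loss in each projected subspace; the sketch below is hypothetical (the names and the grouping scheme are assumptions, not necessarily the paper's design):

```python
import torch
import torch.nn.functional as F

def multi_subspace_loss(z1, z2, heads, tau=0.5):
    """Average an InfoNCE-style loss over several projection heads,
    each head defining its own subspace for contrasting the two views."""
    losses = []
    for w in heads:  # each w: (dim, sub_dim) projection matrix
        p1 = F.normalize(z1 @ w, dim=1)
        p2 = F.normalize(z2 @ w, dim=1)
        logits = p1 @ p2.T / tau
        losses.append(F.cross_entropy(logits, torch.arange(z1.size(0))))
    return torch.stack(losses).mean()
```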
arXiv Detail & Related papers (2021-07-20T22:09:21Z)
- Multi-Scale Contrastive Siamese Networks for Self-Supervised Graph Representation Learning [48.09362183184101]
We propose a novel self-supervised approach to learn node representations by enhancing Siamese self-distillation with multi-scale contrastive learning.
Our method achieves new state-of-the-art results and surpasses some semi-supervised counterparts by large margins.
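The self-distillation half of such a design typically follows the familiar Siamese online/target pattern: the target encoder is a slow exponential-moving-average copy of the online encoder, and the online network is trained to predict the target's embedding of another view. A generic sketch of that pattern, not the paper's exact architecture:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(online, target, momentum=0.99):
    """Move each target parameter a small step toward its online twin."""
    for p_o, p_t in zip(online.parameters(), target.parameters()):
        p_t.mul_(momentum).add_((1.0 - momentum) * p_o)

def distill_loss(pred_online, z_target):
    """Negative cosine similarity between the online prediction and the
    stop-gradient target embedding (BYOL-style self-distillation)."""
    p = F.normalize(pred_online, dim=1)
    z = F.normalize(z_target.detach(), dim=1)
    return -(p * z).sum(dim=1).mean()
```

The multi-scale contrastive component then adds agreement terms at further granularities on top of this backbone.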
arXiv Detail & Related papers (2021-05-12T14:20:13Z)
- Graph Representation Learning by Ensemble Aggregating Subgraphs via Mutual Information Maximization [5.419711903307341]
We introduce a self-supervised learning method to enhance the graph-level representations learned by Graph Neural Networks.
To obtain a comprehensive understanding of the graph structure, we propose an ensemble-learning-like subgraph method.
To achieve efficient and effective contrastive learning, a Head-Tail contrastive sample construction method is proposed.
arXiv Detail & Related papers (2021-03-24T12:06:12Z)
- Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Stacking many such layers, however, tends to hurt performance, and several recent studies attribute this deterioration to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
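A minimal sketch of that idea in the spirit of decoupled propagation: transform features once, propagate them over many hops, and let learned per-hop gates decide how much of each receptive-field size to retain (the function and parameter names here, such as `gate_w` and `K`, are illustrative):

```python
import torch

def adaptive_deep_aggregation(adj_norm, h, gate_w, K=10):
    """Propagate transformed features h over K hops of the normalized
    adjacency and combine all hops with learned retainment gates, so a
    large receptive field need not over-smooth every node's embedding."""
    hops, cur = [h], h
    for _ in range(K):
        cur = adj_norm @ cur            # one further hop of propagation
        hops.append(cur)
    H = torch.stack(hops, dim=1)        # (n_nodes, K + 1, dim)
    gates = torch.sigmoid(H @ gate_w)   # (n_nodes, K + 1, 1), gate_w: (dim, 1)
    return (gates * H).sum(dim=1)       # gated sum over hop depths
```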
arXiv Detail & Related papers (2020-07-18T01:11:14Z)
- GraphCL: Contrastive Self-Supervised Learning of Graph Representations [20.439666392958284]
We propose Graph Contrastive Learning (GraphCL), a general framework for learning node representations in a self-supervised manner.
We use graph neural networks to produce two representations of the same node and leverage a contrastive learning loss to maximize agreement between them.
In both transductive and inductive learning setups, we demonstrate that our approach significantly outperforms the state-of-the-art in unsupervised learning on a number of node classification benchmarks.
arXiv Detail & Related papers (2020-07-15T22:36:53Z)
- Interpretable Deep Graph Generation with Node-Edge Co-Disentanglement [55.2456981313287]
We propose a new disentanglement enhancement framework for deep generative models for attributed graphs.
A novel variational objective is proposed to disentangle the above three types of latent factors, with a novel architecture for node and edge deconvolutions.
Within each type, individual-factor-wise disentanglement is further enhanced, which is shown to be a generalization of the existing framework for images.
arXiv Detail & Related papers (2020-06-09T16:33:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.