Graph Contrastive Learning with Adaptive Augmentation
- URL: http://arxiv.org/abs/2010.14945v3
- Date: Fri, 26 Feb 2021 15:12:23 GMT
- Title: Graph Contrastive Learning with Adaptive Augmentation
- Authors: Yanqiao Zhu and Yichen Xu and Feng Yu and Qiang Liu and Shu Wu and
Liang Wang
- Abstract summary: We propose a novel graph contrastive representation learning method with adaptive augmentation.
Specifically, we design augmentation schemes based on node centrality measures to highlight important connective structures.
Our proposed method consistently outperforms existing state-of-the-art baselines and even surpasses some supervised counterparts.
- Score: 23.37786673825192
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, contrastive learning (CL) has emerged as a successful method for
unsupervised graph representation learning. Most graph CL methods first perform
stochastic augmentation on the input graph to obtain two graph views and
maximize the agreement of representations in the two views. Despite the
prosperous development of graph CL methods, the design of graph augmentation
schemes -- a crucial component in CL -- remains rarely explored. We argue that
the data augmentation schemes should preserve intrinsic structures and
attributes of graphs, which will force the model to learn representations that
are insensitive to perturbations on unimportant nodes and edges. However, most
existing methods adopt uniform data augmentation schemes, like uniformly
dropping edges and uniformly shuffling features, leading to suboptimal
performance. In this paper, we propose a novel graph contrastive representation
learning method with adaptive augmentation that incorporates various priors for
topological and semantic aspects of the graph. Specifically, on the topology
level, we design augmentation schemes based on node centrality measures to
highlight important connective structures. On the node attribute level, we
corrupt node features by adding more noise to unimportant node features, forcing
the model to recognize the underlying semantic information. We perform
extensive experiments of node classification on a variety of real-world
datasets. Experimental results demonstrate that our proposed method
consistently outperforms existing state-of-the-art baselines and even surpasses
some supervised counterparts, which validates the effectiveness of the proposed
contrastive framework with adaptive augmentation.
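To make the adaptive scheme above concrete, here is a minimal sketch of how centrality-guided augmentation probabilities could be computed. It assumes degree centrality as the importance measure (the paper also considers eigenvector and PageRank centrality); the function names, normalization, and hyperparameters (`p_e`, `p_f`, `p_tau`) are illustrative and not taken from the authors' released code.

```python
import numpy as np

def edge_drop_probs(src, dst, num_nodes, p_e=0.3, p_tau=0.7):
    """Per-edge removal probabilities: edges whose endpoints have low centrality
    are dropped more often, so important connective structure tends to survive."""
    deg = np.bincount(np.concatenate([src, dst]), minlength=num_nodes).astype(float)
    # Edge importance = log of the mean endpoint degree (log damps heavy tails).
    s = np.log1p((deg[src] + deg[dst]) / 2.0)
    # Map low-importance edges to probabilities up to p_e; clip at p_tau
    # so that no edge is removed with an overly high probability.
    probs = (s.max() - s) / (s.max() - s.mean() + 1e-12) * p_e
    return np.minimum(probs, p_tau)

def feature_mask_probs(x, deg, p_f=0.3, p_tau=0.7):
    """Per-dimension masking probabilities: dimensions that mostly appear on
    low-centrality nodes are treated as less important and masked more often."""
    w = np.log1p(np.abs(x).T @ deg)   # importance weight per feature dimension
    probs = (w.max() - w) / (w.max() - w.mean() + 1e-12) * p_f
    return np.minimum(probs, p_tau)

def generate_view(x, src, dst, num_nodes, rng):
    """Sample one augmented view by adaptive edge dropping and feature masking."""
    deg = np.bincount(np.concatenate([src, dst]), minlength=num_nodes).astype(float)
    keep = rng.random(src.shape[0]) > edge_drop_probs(src, dst, num_nodes)
    mask = rng.random(x.shape[1]) > feature_mask_probs(x, deg)
    return x * mask, (src[keep], dst[keep])

# Toy example: in practice two such views would be drawn and encoded by a shared
# GNN, and their representations pulled together by a contrastive objective.
rng = np.random.default_rng(0)
src = np.array([0, 0, 1, 2]); dst = np.array([1, 2, 2, 3])
x = rng.random((4, 5))
x_view, (src_view, dst_view) = generate_view(x, src, dst, num_nodes=4, rng=rng)
```

The effect is that edges between low-centrality nodes and feature dimensions concentrated on low-centrality nodes receive higher corruption probabilities, so the two sampled views differ mostly in unimportant structure.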
Related papers
- Isomorphic-Consistent Variational Graph Auto-Encoders for Multi-Level
Graph Representation Learning [9.039193854524763]
We propose the Isomorphic-Consistent VGAE (IsoC-VGAE) for task-agnostic graph representation learning.
We first devise a decoding scheme that provides a theoretical guarantee of preserving isomorphic consistency.
We then propose the Inverse Graph Neural Network (Inv-GNN) decoder as its intuitive realization.
arXiv Detail & Related papers (2023-12-09T10:16:53Z) - Mitigating Semantic Confusion from Hostile Neighborhood for Graph Active
Learning [38.5372139056485]
Graph Active Learning (GAL) aims to find the most informative nodes in graphs for annotation in order to maximize Graph Neural Network (GNN) performance.
GAL strategies may introduce semantic confusion into the selected training set, particularly when graphs are noisy.
We present Semantic-aware Active learning framework for Graphs (SAG) to mitigate the semantic confusion problem.
arXiv Detail & Related papers (2023-08-17T07:06:54Z) - ENGAGE: Explanation Guided Data Augmentation for Graph Representation
Learning [34.23920789327245]
We propose ENGAGE, where explanations guide the contrastive augmentation process to preserve the key parts of graphs.
We also design two data augmentation schemes on graphs for perturbing structural and feature information, respectively.
arXiv Detail & Related papers (2023-07-03T14:33:14Z) - Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z) - Graph Contrastive Learning with Implicit Augmentations [36.57536688367965]
Implicit Graph Contrastive Learning (iGCL) uses latent-space augmentations learned from a Variational Graph Auto-Encoder that reconstructs the graph's topological structure.
Experimental results on both graph-level and node-level tasks show that the proposed method achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-11-07T17:34:07Z) - GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z) - Towards Graph Self-Supervised Learning with Contrastive Adjusted Zooming [48.99614465020678]
We introduce G-Zoom, a novel self-supervised graph representation learning algorithm based on graph contrastive adjusted zooming.
This mechanism enables G-Zoom to explore and extract self-supervision signals from a graph at multiple scales.
We have conducted extensive experiments on real-world datasets, and the results demonstrate that our proposed model outperforms state-of-the-art methods consistently.
arXiv Detail & Related papers (2021-11-20T22:45:53Z) - Effective and Efficient Graph Learning for Multi-view Clustering [173.8313827799077]
We propose an effective and efficient graph learning model for multi-view clustering.
Our method exploits the similarity between graphs of different views by minimizing the tensor Schatten p-norm.
Our proposed algorithm is time-efficient, obtains stable results, and scales well with the data size.
arXiv Detail & Related papers (2021-08-15T13:14:28Z) - A Robust and Generalized Framework for Adversarial Graph Embedding [73.37228022428663]
We propose a robust framework for adversarial graph embedding, named AGE.
AGE generates fake neighbor nodes from an implicit distribution as enhanced negative samples.
Based on this framework, we propose three models to handle three types of graph data.
arXiv Detail & Related papers (2021-05-22T07:05:48Z) - Hierarchical Adaptive Pooling by Capturing High-order Dependency for
Graph Representation Learning [18.423192209359158]
Graph neural networks (GNNs) have proven mature enough for handling graph-structured data in node-level representation learning tasks.
This paper proposes a hierarchical graph-level representation learning framework, which is adaptively sensitive to graph structures.
arXiv Detail & Related papers (2021-04-13T06:22:24Z) - Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data, which universally works in node classification, link prediction, and graph classification tasks.
arXiv Detail & Related papers (2020-10-19T21:51:47Z)
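The FLAG entry above gives enough of the recipe to sketch the idea: perturb node features with a few gradient-ascent steps while accumulating parameter gradients, then take a single optimizer step. The sketch below is a rough PyTorch illustration under stated assumptions, not the authors' released implementation; `model(x, edge_index)` is a placeholder GNN forward, and `ascent_steps`/`step_size` are illustrative hyperparameters.

```python
import torch
import torch.nn.functional as F

def adversarial_feature_step(model, x, edge_index, y, optimizer,
                             ascent_steps=3, step_size=1e-3):
    """One training step with gradient-based perturbations on node features.
    Parameter gradients accumulate over the ascent steps before the update."""
    optimizer.zero_grad()
    # Start from a small random perturbation of the node features.
    delta = torch.zeros_like(x).uniform_(-step_size, step_size).requires_grad_(True)
    loss = None
    for _ in range(ascent_steps):
        # Forward pass on perturbed features; dividing by ascent_steps keeps the
        # accumulated parameter gradient on the scale of a single normal step.
        loss = F.cross_entropy(model(x + delta, edge_index), y) / ascent_steps
        loss.backward()
        # Gradient ascent on the perturbation (sign update), then detach so the
        # next iteration starts a fresh computation graph.
        delta = (delta.detach() + step_size * delta.grad.sign()).requires_grad_(True)
    optimizer.step()
    return float(loss)
```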