SimGRACE: A Simple Framework for Graph Contrastive Learning without Data
Augmentation
- URL: http://arxiv.org/abs/2202.03104v3
- Date: Mon, 20 Mar 2023 14:51:38 GMT
- Title: SimGRACE: A Simple Framework for Graph Contrastive Learning without Data
Augmentation
- Authors: Jun Xia, Lirong Wu, Jintao Chen, Bozhen Hu, Stan Z. Li
- Abstract summary: We propose a Simple framework for GRAph Contrastive lEarning, SimGRACE for brevity.
We take the original graph as input and a GNN model with its perturbed version as the two encoders to obtain two correlated views for contrast.
SimGRACE can yield competitive or better performance compared with state-of-the-art methods in terms of generalizability, transferability and robustness.
- Score: 33.748691759568004
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph contrastive learning (GCL) has emerged as a dominant technique for
graph representation learning which maximizes the mutual information between
paired graph augmentations that share the same semantics. Unfortunately, it is
difficult to preserve semantics well during augmentations in view of the
diverse nature of graph data. Currently, data augmentations in GCL that are
designed to preserve semantics broadly fall into three unsatisfactory
categories. First, the augmentations can be manually picked per dataset by
trial and error. Second, the augmentations can be selected via cumbersome
search. Third, the augmentations can be obtained by introducing expensive
domain-specific knowledge as guidance. All of these limit the efficiency and
more general applicability of existing GCL methods. To circumvent these crucial
issues, we propose a Simple framework for GRAph Contrastive lEarning,
SimGRACE for brevity, which does not require data augmentations.
Specifically, we take the original graph as input and a GNN model with its
perturbed version as the two encoders to obtain two correlated views for
contrast. SimGRACE is inspired by the observation that
graph data can preserve their semantics well during encoder perturbations while
not requiring manual trial-and-error, cumbersome search, or expensive domain
knowledge for augmentation selection. Also, we explain why SimGRACE can
succeed. Furthermore, we devise an adversarial training scheme, dubbed
AT-SimGRACE, to enhance the robustness of graph contrastive learning
and theoretically explain the reasons. Albeit simple, we show that SimGRACE can
yield competitive or better performance compared with state-of-the-art methods
in terms of generalizability, transferability, and robustness, while enjoying
an unprecedented degree of flexibility and efficiency.
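The core idea above (contrast the outputs of an encoder and its weight-perturbed copy on the same input graph, instead of contrasting two augmented graphs) can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation: the one-layer mean-aggregation encoder, the perturbation scale `eta`, and the mean-pooling readout are all illustrative assumptions.

```python
import numpy as np

def gnn_encoder(A, X, W):
    # One-layer GNN: mean-aggregate neighbor features, then a nonlinear map.
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    H = (A @ X) / deg          # neighborhood mean aggregation
    Z = np.tanh(H @ W)         # node embeddings
    return Z.mean(axis=0)      # graph-level readout (mean pooling)

def perturb(W, eta=0.1, rng=None):
    # SimGRACE-style view generation: perturb the encoder weights with
    # Gaussian noise instead of augmenting the input graph.
    if rng is None:
        rng = np.random.default_rng(1)
    return W + eta * rng.normal(scale=W.std(), size=W.shape)

# Toy 4-node graph: adjacency matrix and random node features.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 8))
W = rng.normal(size=(8, 16))

z1 = gnn_encoder(A, X, W)            # view from the original encoder
z2 = gnn_encoder(A, X, perturb(W))   # view from the perturbed encoder

# The two views of the same graph should be correlated; a contrastive loss
# (e.g. NT-Xent over a batch) would pull such pairs together.
cos = z1 @ z2 / (np.linalg.norm(z1) * np.linalg.norm(z2) + 1e-12)
```

In a full training loop, `cos` for matched pairs would enter a softmax over a batch of graphs (the positive pair being the two views of the same graph), and gradients would update only the unperturbed encoder's weights.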
Related papers
- Graph Contrastive Learning with Cohesive Subgraph Awareness [34.76555185419192]
Graph contrastive learning (GCL) has emerged as a state-of-the-art strategy for learning representations of diverse graphs.
We argue that an awareness of subgraphs during the graph augmentation and learning processes has the potential to enhance GCL performance.
We propose a novel unified framework called CTAug, to seamlessly integrate cohesion awareness into various existing GCL mechanisms.
arXiv Detail & Related papers (2024-01-31T03:51:30Z) - Capturing Fine-grained Semantics in Contrastive Graph Representation
Learning [23.861016307326146]
Graph contrastive learning defines a contrastive task to pull similar instances close and push dissimilar instances away.
Existing methods of graph contrastive learning ignore the differences between the diverse semantics that exist in graphs.
We propose a novel Fine-grained Semantics enhanced Graph Contrastive Learning (FSGCL) in this paper.
arXiv Detail & Related papers (2023-04-23T14:05:05Z) - Hybrid Augmented Automated Graph Contrastive Learning [3.785553471764994]
We propose a framework called Hybrid Augmented Automated Graph Contrastive Learning (HAGCL)
HAGCL consists of a feature-level learnable view generator and an edge-level learnable view generator.
It ensures learning of the most semantically meaningful structure in terms of features and topology.
arXiv Detail & Related papers (2023-03-24T03:26:20Z) - Feature propagation as self-supervision signals on graphs [0.0]
Regularized Graph Infomax (RGI) is a simple yet effective framework for node level self-supervised learning.
We show that RGI can achieve state-of-the-art performance despite its simplicity.
arXiv Detail & Related papers (2023-03-15T14:20:06Z) - Graph Contrastive Learning with Implicit Augmentations [36.57536688367965]
Implicit Graph Contrastive Learning (iGCL) uses augmentations in latent space learned from a Variational Graph Auto-Encoder by reconstructing graph topological structure.
Experimental results on both graph-level and node-level tasks show that the proposed method achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-11-07T17:34:07Z) - COSTA: Covariance-Preserving Feature Augmentation for Graph Contrastive
Learning [64.78221638149276]
We show that the node embedding obtained via the graph augmentations is highly biased.
Instead of investigating graph augmentation in the input space, we propose augmentations on the hidden features.
We show that the feature augmentation with COSTA achieves comparable/better results than graph augmentation based models.
arXiv Detail & Related papers (2022-06-09T18:46:38Z) - GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z) - Dual Space Graph Contrastive Learning [82.81372024482202]
We propose a novel graph contrastive learning method, namely Dual Space Graph Contrastive (DSGC) Learning.
Since both spaces have their own advantages to represent graph data in the embedding spaces, we hope to utilize graph contrastive learning to bridge the spaces and leverage advantages from both sides.
arXiv Detail & Related papers (2022-01-19T04:10:29Z) - Bringing Your Own View: Graph Contrastive Learning without Prefabricated
Data Augmentations [94.41860307845812]
Self-supervision is recently surging at its new frontier of graph learning.
GraphCL uses a prefabricated prior reflected by the ad-hoc manual selection of graph data augmentations.
We have extended the prefabricated discrete prior in the augmentation set to a learnable continuous prior in the parameter space of graph generators.
We have leveraged both principles of information minimization (InfoMin) and information bottleneck (InfoBN) to regularize the learned priors.
arXiv Detail & Related papers (2022-01-04T15:49:18Z) - Towards Graph Self-Supervised Learning with Contrastive Adjusted Zooming [48.99614465020678]
We introduce a novel self-supervised graph representation learning algorithm via Graph Contrastive Adjusted Zooming.
This mechanism enables G-Zoom to explore and extract self-supervision signals from a graph from multiple scales.
We have conducted extensive experiments on real-world datasets, and the results demonstrate that our proposed model outperforms state-of-the-art methods consistently.
arXiv Detail & Related papers (2021-11-20T22:45:53Z) - Graph Contrastive Learning with Adaptive Augmentation [23.37786673825192]
We propose a novel graph contrastive representation learning method with adaptive augmentation.
Specifically, we design augmentation schemes based on node centrality measures to highlight important connective structures.
Our proposed method consistently outperforms existing state-of-the-art baselines and even surpasses some supervised counterparts.
arXiv Detail & Related papers (2020-10-27T15:12:21Z) - Graph Contrastive Learning with Augmentations [109.23158429991298]
We propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
We show that our framework can produce graph representations of similar or better generalizability, transferability, and robustness compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-10-22T20:13:43Z) - Unsupervised Graph Embedding via Adaptive Graph Learning [85.28555417981063]
Graph autoencoders (GAEs) are powerful tools in representation learning for graph embedding.
In this paper, two novel unsupervised graph embedding methods, unsupervised graph embedding via adaptive graph learning (BAGE) and unsupervised graph embedding via variational adaptive graph learning (VBAGE) are proposed.
Experimental studies on several datasets validate our design and demonstrate that our methods outperform baselines by a wide margin in node clustering, node classification, and graph visualization tasks.
arXiv Detail & Related papers (2020-03-10T02:33:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.