Hybrid Augmented Automated Graph Contrastive Learning
- URL: http://arxiv.org/abs/2303.15182v1
- Date: Fri, 24 Mar 2023 03:26:20 GMT
- Title: Hybrid Augmented Automated Graph Contrastive Learning
- Authors: Yifu Chen and Qianqian Ren and Liu Yong
- Abstract summary: We propose a framework called Hybrid Augmented Automated Graph Contrastive Learning (HAGCL).
HAGCL consists of a feature-level learnable view generator and an edge-level learnable view generator.
This ensures that the most semantically meaningful structures are learned in terms of both features and topology.
- Score: 3.785553471764994
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph augmentations are essential for graph contrastive learning. Most
existing works use pre-defined random augmentations, which are usually unable
to adapt to different input graphs and fail to consider the impact of different
nodes and edges on graph semantics. To address this issue, we propose a
framework called Hybrid Augmented Automated Graph Contrastive Learning (HAGCL).
HAGCL consists of a feature-level learnable view generator and an edge-level
learnable view generator. The view generators are end-to-end differentiable to
learn the probability distribution of views conditioned on the input graph. This
ensures that the most semantically meaningful structures are learned in terms of
features and topology, respectively. Furthermore, we propose an improved joint
training strategy that achieves better results than previous works without
resorting to any weak label information from the downstream tasks or extensive
additional evaluation work.
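The abstract does not describe how the view generators remain differentiable while sampling discrete views; a common choice for this (used, for example, in the AutoGCL paper listed below) is a straight-through Gumbel-Softmax relaxation. The PyTorch sketch below only illustrates that general idea; the module names, shapes, and architecture are assumptions, not the authors' implementation.

```python
# Minimal sketch (assumption, not the authors' code): end-to-end differentiable
# feature-level and edge-level view generators for graph contrastive learning,
# using a straight-through Gumbel-Softmax so gradients flow through sampled views.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureLevelViewGenerator(nn.Module):
    """Learns a per-node, per-feature keep/drop mask conditioned on node features."""

    def __init__(self, in_dim, hid_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hid_dim), nn.ReLU(),
            nn.Linear(hid_dim, 2 * in_dim),  # two logits (drop/keep) per feature
        )

    def forward(self, x, tau=1.0):
        logits = self.mlp(x).view(x.size(0), x.size(1), 2)
        mask = F.gumbel_softmax(logits, tau=tau, hard=True)[..., 1]
        return x * mask  # feature-level augmented view


class EdgeLevelViewGenerator(nn.Module):
    """Learns a keep/drop distribution for every edge, conditioned on its endpoints."""

    def __init__(self, in_dim, hid_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * in_dim, hid_dim), nn.ReLU(),
            nn.Linear(hid_dim, 2),  # two logits (drop/keep) per edge
        )

    def forward(self, x, edge_index, tau=1.0):
        src, dst = edge_index  # edge_index: LongTensor of shape [2, num_edges]
        logits = self.mlp(torch.cat([x[src], x[dst]], dim=-1))
        keep = F.gumbel_softmax(logits, tau=tau, hard=True)[:, 1]
        # Returning the mask as differentiable edge weights keeps the pipeline
        # trainable end to end; a GNN encoder can multiply messages by `keep`.
        return edge_index, keep
```

In such a setup, the two generated views would be fed to a shared GNN encoder and pulled together by a contrastive (e.g. InfoNCE-style) loss; the improved joint training strategy mentioned in the abstract is not detailed in this listing, so it is not sketched here.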
Related papers
- Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
arXiv Detail & Related papers (2023-02-06T16:26:29Z)
- Graph Contrastive Learning with Implicit Augmentations [36.57536688367965]
Implicit Graph Contrastive Learning (iGCL) uses augmentations in latent space learned from a Variational Graph Auto-Encoder by reconstructing graph topological structure.
Experimental results on both graph-level and node-level tasks show that the proposed method achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-11-07T17:34:07Z)
- Graph Contrastive Learning with Personalized Augmentation [17.714437631216516]
Graph contrastive learning (GCL) has emerged as an effective tool for learning unsupervised representations of graphs.
We propose a principled framework, termed Graph Contrastive Learning with Personalized Augmentation (GPA).
GPA infers tailored augmentation strategies for each graph based on its topology and node attributes via a learnable augmentation selector.
arXiv Detail & Related papers (2022-09-14T11:37:48Z)
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- Model-Agnostic Augmentation for Accurate Graph Classification [19.824105919844495]
Graph augmentation is an essential strategy to improve the performance of graph-based tasks.
In this work, we introduce five desired properties for effective augmentation.
Our experiments on social networks and molecular graphs show that NodeSam and SubMix outperform existing approaches in graph classification.
arXiv Detail & Related papers (2022-02-21T10:37:53Z)
- Augmentation-Free Self-Supervised Learning on Graphs [7.146027549101716]
We propose a novel augmentation-free self-supervised learning framework for graphs, named AFGRL.
Specifically, we generate an alternative view of a graph by discovering nodes that share the local structural information and the global semantics with the graph.
arXiv Detail & Related papers (2021-12-05T04:20:44Z)
- Edge but not Least: Cross-View Graph Pooling [76.71497833616024]
This paper presents a cross-view graph pooling (Co-Pooling) method to better exploit crucial graph structure information.
Through cross-view interaction, edge-view pooling and node-view pooling seamlessly reinforce each other to learn more informative graph-level representations.
arXiv Detail & Related papers (2021-09-24T08:01:23Z)
- AutoGCL: Automated Graph Contrastive Learning via Learnable View Generators [22.59182542071303]
We propose a novel framework named Automated Graph Contrastive Learning (AutoGCL) in this paper.
AutoGCL employs a set of learnable graph view generators orchestrated by an auto augmentation strategy.
Experiments on semi-supervised learning, unsupervised learning, and transfer learning demonstrate the superiority of our framework over state-of-the-art methods in graph contrastive learning.
arXiv Detail & Related papers (2021-09-21T15:34:11Z)
- Graph Contrastive Learning Automated [94.41860307845812]
Graph contrastive learning (GraphCL) has emerged with promising representation learning performance.
The effectiveness of GraphCL hinges on ad-hoc data augmentations, which have to be manually picked per dataset.
This paper proposes a unified bi-level optimization framework to automatically, adaptively and dynamically select data augmentations when performing GraphCL on specific graph data.
arXiv Detail & Related papers (2021-06-10T16:35:27Z)
- Graph Contrastive Learning with Augmentations [109.23158429991298]
We propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
We show that our framework can produce graph representations of similar or better generalizability, transferability, and robustness compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-10-22T20:13:43Z)
- Multilevel Graph Matching Networks for Deep Graph Similarity Learning [79.3213351477689]
We propose a multi-level graph matching network (MGMN) framework for computing the graph similarity between any pair of graph-structured objects.
To compensate for the lack of standard benchmark datasets, we have created and collected a set of datasets for both the graph-graph classification and graph-graph regression tasks.
Comprehensive experiments demonstrate that MGMN consistently outperforms state-of-the-art baseline models on both the graph-graph classification and graph-graph regression tasks.
arXiv Detail & Related papers (2020-07-08T19:48:19Z)