Towards Graph Self-Supervised Learning with Contrastive Adjusted Zooming
- URL: http://arxiv.org/abs/2111.10698v1
- Date: Sat, 20 Nov 2021 22:45:53 GMT
- Title: Towards Graph Self-Supervised Learning with Contrastive Adjusted Zooming
- Authors: Yizhen Zheng, Ming Jin, Shirui Pan, Yuan-Fang Li, Hao Peng, Ming Li,
Zhao Li
- Abstract summary: We introduce a novel self-supervised graph representation learning algorithm via Graph Contrastive Adjusted Zooming.
This mechanism enables G-Zoom to explore and extract self-supervision signals from a graph at multiple scales.
We have conducted extensive experiments on real-world datasets, and the results demonstrate that our proposed model outperforms state-of-the-art methods consistently.
- Score: 48.99614465020678
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph representation learning (GRL) is critical for graph-structured data
analysis. However, most of the existing graph neural networks (GNNs) heavily
rely on labeling information, which is normally expensive to obtain in the real
world. Existing unsupervised GRL methods suffer from certain limitations, such
as the heavy reliance on monotone contrastiveness and limited scalability. To
overcome the aforementioned problems, in light of the recent advancements in
graph contrastive learning, we introduce a novel self-supervised graph
representation learning algorithm via Graph Contrastive Adjusted Zooming,
namely G-Zoom, to learn node representations by leveraging the proposed
adjusted zooming scheme. Specifically, this mechanism enables G-Zoom to explore
and extract self-supervision signals from a graph at multiple scales: micro
(i.e., node-level), meso (i.e., neighbourhood-level), and macro (i.e.,
subgraph-level). First, we generate two augmented views of the input graph
via two different graph augmentations. Then, we establish contrastiveness at
the three scales progressively, from the node level through the neighbourhood
level to the subgraph level, maximizing the agreement between graph
representations across scales. While the micro and macro perspectives each
extract valuable clues from a given graph, the neighbourhood-level
contrastiveness gives G-Zoom a customizable option: through our adjusted
zooming scheme, one can manually choose an optimal viewpoint that lies
between the micro and macro perspectives to better understand the graph data.
Additionally, to make our model scalable to large graphs, we employ a parallel
graph diffusion approach to decouple model training from the graph size. We
have conducted extensive experiments on real-world datasets, and the results
demonstrate that our proposed model outperforms state-of-the-art methods
consistently.
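To make the three-scale idea concrete, here is a minimal sketch of how agreement between two augmented views could be scored at the micro (node), meso (neighbourhood), and macro (subgraph/readout) levels. This is an illustrative reconstruction, not the paper's actual objective: the function names, the mean-aggregation neighbourhood summary, and the `zoom` knob standing in for the adjusted zooming scheme are all assumptions.

```python
import numpy as np

def row_normalize(A):
    """Row-normalize an adjacency matrix (with self-loops) for mean aggregation."""
    deg = A.sum(axis=1, keepdims=True)
    return A / np.maximum(deg, 1.0)

def cosine_agreement(X, Y):
    """Mean cosine similarity between matching rows of X and Y."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    return float((Xn * Yn).sum(axis=1).mean())

def multi_scale_agreement(H1, H2, A, zoom=0.5):
    """Combine agreement at micro / meso / macro scales.

    H1, H2: node embeddings from the two augmented views (n x d).
    A:      shared adjacency matrix (n x n), no self-loops.
    zoom:   hypothetical knob in [0, 1] standing in for the paper's
            adjusted zooming scheme; 0 leans micro, 1 leans macro.
    """
    n = A.shape[0]
    P = row_normalize(A + np.eye(n))            # neighbourhood-averaging operator
    micro = cosine_agreement(H1, H2)            # node vs. matching node
    meso = cosine_agreement(P @ H1, P @ H2)     # neighbourhood summary vs. summary
    g1 = H1.mean(axis=0, keepdims=True)         # graph-level readout of view 1
    g2 = H2.mean(axis=0, keepdims=True)
    macro = cosine_agreement(g1, g2)            # readout vs. readout
    # interpolate micro <-> macro via the zoom knob; meso always contributes
    return (1 - zoom) * micro + zoom * macro + meso
```

In a real training loop this score would be maximized (or its negation used as a loss) over minibatches of node pairs; the point of the sketch is only that each scale compares a different granularity of the same two views.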
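The abstract also mentions a parallel graph diffusion approach that decouples training cost from graph size. One standard way to achieve that property is push-based approximate personalized PageRank, which touches only nodes near each seed and processes seeds independently (hence parallelizably). The sketch below shows that classic technique as one plausible reading; the paper's actual diffusion may differ, and `neighbors`, `alpha`, and `eps` are illustrative names.

```python
from collections import deque

def approx_ppr(neighbors, seed, alpha=0.15, eps=1e-4):
    """Push-based approximate personalized PageRank for one seed node.

    neighbors: dict mapping node -> list of neighbor nodes.
    alpha:     restart (teleport) probability.
    eps:       residual tolerance; smaller means more accurate, more work.
    Returns a sparse dict of PPR scores concentrated around `seed`.
    """
    p = {}                       # accumulated PPR mass
    r = {seed: 1.0}              # residual mass still to be pushed
    queue = deque([seed])
    while queue:
        u = queue.popleft()
        ru = r.get(u, 0.0)
        deg = max(len(neighbors[u]), 1)
        if ru < eps * deg:       # residual too small: stop pushing from u
            continue
        p[u] = p.get(u, 0.0) + alpha * ru
        r[u] = 0.0
        share = (1.0 - alpha) * ru / deg
        for v in neighbors[u]:   # spread the rest of the mass to neighbors
            r[v] = r.get(v, 0.0) + share
            if r[v] >= eps * max(len(neighbors[v]), 1):
                queue.append(v)
    return p
```

Because each call depends only on the seed's local region, the per-node diffusions can be distributed across workers, which is one concrete mechanism for decoupling training from the full graph size.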
Related papers
- InstructG2I: Synthesizing Images from Multimodal Attributed Graphs [50.852150521561676]
We propose a graph context-conditioned diffusion model called InstructG2I.
InstructG2I first exploits the graph structure and multimodal information to conduct informative neighbor sampling.
A Graph-QFormer encoder adaptively encodes the graph nodes into an auxiliary set of graph prompts to guide the denoising process.
arXiv Detail & Related papers (2024-10-09T17:56:15Z)
- A Topology-aware Graph Coarsening Framework for Continual Graph Learning [8.136809136959302]
Continual learning on graphs tackles the problem of training a graph neural network (GNN) where graph data arrive in a streaming fashion.
Traditional continual learning strategies such as Experience Replay can be adapted to streaming graphs.
We propose TA$\mathbb{CO}$, a (t)opology-(a)ware graph (co)arsening and (co)ntinual learning framework.
arXiv Detail & Related papers (2024-01-05T22:22:13Z)
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- Graph Self-supervised Learning with Accurate Discrepancy Learning [64.69095775258164]
We propose a framework that aims to learn the exact discrepancy between the original and the perturbed graphs, coined Discrepancy-based Self-supervised LeArning (D-SLA).
We validate our method on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction tasks, on which our model largely outperforms relevant baselines.
arXiv Detail & Related papers (2022-02-07T08:04:59Z)
- Self-supervised Consensus Representation Learning for Attributed Graph [15.729417511103602]
We introduce a self-supervised learning mechanism into graph representation learning.
We propose a novel Self-supervised Consensus Representation Learning framework.
Our proposed SCRL method treats graph from two perspectives: topology graph and feature graph.
arXiv Detail & Related papers (2021-08-10T07:53:09Z)
- Multi-Level Graph Contrastive Learning [38.022118893733804]
We propose a Multi-Level Graph Contrastive Learning (MLGCL) framework for learning robust representation of graph data by contrasting space views of graphs.
The original graph is a first-order approximation structure and contains uncertainty or error, while the $k$NN graph generated by encoding features preserves high-order proximity.
Extensive experiments indicate MLGCL achieves promising results compared with the existing state-of-the-art graph representation learning methods on seven datasets.
arXiv Detail & Related papers (2021-07-06T14:24:43Z)
- A Robust and Generalized Framework for Adversarial Graph Embedding [73.37228022428663]
We propose a robust framework for adversarial graph embedding, named AGE.
AGE generates fake neighbor nodes as enhanced negative samples from the implicit distribution.
Based on this framework, we propose three models to handle three types of graph data.
arXiv Detail & Related papers (2021-05-22T07:05:48Z)
- Graph Contrastive Learning with Adaptive Augmentation [23.37786673825192]
We propose a novel graph contrastive representation learning method with adaptive augmentation.
Specifically, we design augmentation schemes based on node centrality measures to highlight important connective structures.
Our proposed method consistently outperforms existing state-of-the-art baselines and even surpasses some supervised counterparts.
arXiv Detail & Related papers (2020-10-27T15:12:21Z)
- Multilevel Graph Matching Networks for Deep Graph Similarity Learning [79.3213351477689]
We propose a multi-level graph matching network (MGMN) framework for computing the graph similarity between any pair of graph-structured objects.
To compensate for the lack of standard benchmark datasets, we have created and collected a set of datasets for both the graph-graph classification and graph-graph regression tasks.
Comprehensive experiments demonstrate that MGMN consistently outperforms state-of-the-art baseline models on both the graph-graph classification and graph-graph regression tasks.
arXiv Detail & Related papers (2020-07-08T19:48:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.