Generative Subgraph Contrast for Self-Supervised Graph Representation Learning
- URL: http://arxiv.org/abs/2207.11996v2
- Date: Tue, 26 Jul 2022 08:49:16 GMT
- Title: Generative Subgraph Contrast for Self-Supervised Graph Representation Learning
- Authors: Yuehui Han, Le Hui, Haobo Jiang, Jianjun Qian, Jin Xie
- Abstract summary: We propose a novel adaptive subgraph generation based contrastive learning framework for efficient and robust self-supervised graph representation learning.
It aims to generate contrastive samples by capturing the intrinsic structures of the graph and distinguish the samples based on the features and structures of subgraphs simultaneously.
- Score: 16.374143635724327
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrastive learning has shown great promise in the field of graph
representation learning. Most graph contrastive learning methods manually
construct positive/negative samples and rely on an inner-product-based
similarity metric to distinguish them for graph representation. However,
the handcrafted sample construction (e.g., the perturbation on the nodes or
edges of the graph) may not effectively capture the intrinsic local structures
of the graph. Moreover, an inner-product-based similarity metric cannot fully
exploit the local structures of the graph to characterize graph differences
well. To this end, in this paper we propose a novel adaptive subgraph
generation based contrastive learning framework for efficient and robust
self-supervised graph representation learning, in which the optimal transport
distance serves as the similarity metric between subgraphs. It aims to
generate contrastive samples by capturing the intrinsic structures of the graph
and distinguish the samples based on the features and structures of subgraphs
simultaneously. Specifically, for each center node, we first develop a network
that adaptively learns relation weights over the nodes of its neighborhood and
generates an interpolated subgraph. We then construct the
positive and negative pairs of subgraphs from the same and different nodes,
respectively. Finally, we employ two types of optimal transport distances
(i.e., Wasserstein distance and Gromov-Wasserstein distance) to construct the
structured contrastive loss. Extensive node classification experiments on
benchmark datasets verify the effectiveness of our graph contrastive learning
method.
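The two optimal transport costs named in the abstract can be sketched with the POT library (pip install pot). The function `subgraph_ot_distance`, the uniform node masses, the blending weight `alpha`, and the triplet-style loss at the end are illustrative assumptions on toy data, not the paper's implementation.

```python
# Minimal sketch: blend Wasserstein (feature) and Gromov-Wasserstein
# (structure) costs between two subgraphs, using POT. Toy data only.
import numpy as np
import ot  # Python Optimal Transport


def random_subgraph(n, d, rng):
    """Toy subgraph: node features plus a symmetric weighted adjacency."""
    x = rng.normal(size=(n, d))
    a = rng.random((n, n))
    a = (a + a.T) / 2.0
    np.fill_diagonal(a, 0.0)
    return x, a


def subgraph_ot_distance(x1, a1, x2, a2, alpha=0.5):
    """Blend a feature cost (Wasserstein) and a structure cost (Gromov-Wasserstein)."""
    p, q = ot.unif(x1.shape[0]), ot.unif(x2.shape[0])  # uniform node masses
    wd = ot.emd2(p, q, ot.dist(x1, x2))                # exact OT on feature costs
    gwd = ot.gromov.gromov_wasserstein2(a1, a2, p, q, loss_fun="square_loss")
    return alpha * wd + (1.0 - alpha) * gwd


rng = np.random.default_rng(0)
anchor = random_subgraph(5, 8, rng)    # subgraph of a center node (view 1)
positive = random_subgraph(5, 8, rng)  # subgraph of the same node (view 2)
negative = random_subgraph(6, 8, rng)  # subgraph of a different node

d_pos = subgraph_ot_distance(*anchor, *positive)
d_neg = subgraph_ot_distance(*anchor, *negative)
# Triplet-style stand-in for the paper's structured contrastive loss.
loss = max(0.0, 1.0 + d_pos - d_neg)
print(d_pos, d_neg, loss)
```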
Related papers
- Multi-Scale Subgraph Contrastive Learning [9.972544118719572]
We propose a multi-scale subgraph contrastive learning architecture which is able to characterize the fine-grained semantic information.
Specifically, we generate global and local views at different scales based on subgraph sampling, and construct multiple contrastive relationships according to their semantic associations.
arXiv Detail & Related papers (2024-03-05T07:17:18Z)
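The multi-scale view generation described in the entry above can be sketched with k-hop ego subgraphs of increasing radius (networkx assumed); the paper's global/local view construction and contrastive pairing go further than this.

```python
# Hedged sketch of multi-scale subgraph sampling: views are ego subgraphs
# of increasing radius around a center node.
import networkx as nx


def multi_scale_views(g, center, radii=(1, 2, 3)):
    """Return {radius: k-hop ego subgraph} views of one center node."""
    return {r: nx.ego_graph(g, center, radius=r) for r in radii}


g = nx.karate_club_graph()
for r, view in multi_scale_views(g, center=0).items():
    print(f"radius {r}: {view.number_of_nodes()} nodes, {view.number_of_edges()} edges")
```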
- Entropy Neural Estimation for Graph Contrastive Learning [9.032721248598088]
Contrastive learning on graphs aims at extracting distinguishable high-level representations of nodes.
We propose a simple yet effective subset sampling strategy to contrast pairwise representations between views of a dataset.
We conduct extensive experiments on seven graph benchmarks, and the proposed approach achieves competitive performance.
arXiv Detail & Related papers (2023-07-26T03:55:08Z)
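The subset-sampling contrast in the entry above can be illustrated with a generic InfoNCE objective over a random node subset across two views (PyTorch assumed); this is a standard stand-in, not the paper's entropy estimator.

```python
# Generic InfoNCE over a sampled node subset across two views of a graph.
# A stand-in illustration, not the paper's entropy estimator.
import torch
import torch.nn.functional as F


def subset_infonce(z1, z2, k=256, tau=0.5):
    """Contrast matching rows of two view embeddings on a random subset."""
    idx = torch.randperm(z1.size(0))[:k]
    a = F.normalize(z1[idx], dim=1)
    b = F.normalize(z2[idx], dim=1)
    logits = a @ b.t() / tau           # cosine similarities / temperature
    targets = torch.arange(a.size(0))  # positives lie on the diagonal
    return F.cross_entropy(logits, targets)


z1, z2 = torch.randn(1000, 64), torch.randn(1000, 64)  # two-view embeddings
print(float(subset_infonce(z1, z2)))
```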
- Bures-Wasserstein Means of Graphs [60.42414991820453]
We propose a novel framework for defining a graph mean via embeddings in the space of smooth graph signal distributions.
By finding a mean in this embedding space, we can recover a mean graph that preserves structural information.
We establish the existence and uniqueness of the novel graph mean, and provide an iterative algorithm for computing it.
arXiv Detail & Related papers (2023-05-31T11:04:53Z)
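The graph-mean framework above rests on the closed-form Bures-Wasserstein (2-Wasserstein) distance between zero-mean Gaussians, sketched below with scipy. How a graph is mapped to a covariance (the smooth-signal embedding) is the paper's contribution and is omitted here.

```python
# Closed-form Bures-Wasserstein distance between zero-mean Gaussians
# N(0, c1) and N(0, c2): d^2 = tr(c1) + tr(c2) - 2 tr((c1^1/2 c2 c1^1/2)^1/2).
import numpy as np
from scipy.linalg import sqrtm


def bures_wasserstein(c1, c2):
    """2-Wasserstein distance between zero-mean Gaussians with SPD covariances."""
    s1 = sqrtm(c1)
    cross = sqrtm(s1 @ c2 @ s1)        # may carry tiny imaginary noise
    d2 = np.trace(c1) + np.trace(c2) - 2.0 * np.trace(np.real(cross))
    return float(np.sqrt(max(d2, 0.0)))


rng = np.random.default_rng(1)
a = rng.normal(size=(4, 4)); c1 = a @ a.T + np.eye(4)  # SPD covariance 1
b = rng.normal(size=(4, 4)); c2 = b @ b.T + np.eye(4)  # SPD covariance 2
print(bures_wasserstein(c1, c2))
```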
- Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
arXiv Detail & Related papers (2023-02-06T16:26:29Z)
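A bank of candidate transformations, as in the entry above, can be mocked up with generic structural perturbations (edge and node dropping, networkx assumed); the paper's actual candidates are spectrally motivated, which this sketch does not attempt.

```python
# A tiny bank of candidate graph augmentations for contrastive views.
# Generic perturbations as stand-ins; the paper's bank is spectral.
import random
import networkx as nx


def drop_edges(g, p, seed=0):
    """Remove each edge independently with probability p."""
    rng = random.Random(seed)
    h = g.copy()
    h.remove_edges_from([e for e in list(h.edges()) if rng.random() < p])
    return h


def drop_nodes(g, p, seed=0):
    """Remove each node independently with probability p."""
    rng = random.Random(seed)
    h = g.copy()
    h.remove_nodes_from([n for n in list(h.nodes()) if rng.random() < p])
    return h


bank = [lambda g: drop_edges(g, 0.1), lambda g: drop_edges(g, 0.3),
        lambda g: drop_nodes(g, 0.1)]
g = nx.karate_club_graph()
views = [aug(g) for aug in bank]  # candidate views for the objective
print([v.number_of_edges() for v in views])
```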
- Similarity-aware Positive Instance Sampling for Graph Contrastive Pre-training [82.68805025636165]
We propose to select positive graph instances directly from existing graphs in the training set.
Our selection is based on certain domain-specific pair-wise similarity measurements.
In addition, we develop an adaptive node-level pre-training method that dynamically masks nodes so that they are distributed evenly in the graph.
arXiv Detail & Related papers (2022-06-23T20:12:51Z)
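Selecting positives from the training set, as above, reduces to a top-k lookup in a precomputed pairwise similarity matrix; the matrix below is made-up toy data, and the similarity measure itself would be domain-specific.

```python
# Top-k positive selection from a precomputed pairwise similarity matrix
# over training graphs. The matrix here is made-up toy data.
import numpy as np


def select_positives(sim, anchor, k=2):
    """Return indices of the k graphs most similar to `anchor` (excluding it)."""
    order = np.argsort(-sim[anchor])
    return [int(j) for j in order if j != anchor][:k]


sim = np.array([[1.0, 0.8, 0.2, 0.5],
                [0.8, 1.0, 0.3, 0.4],
                [0.2, 0.3, 1.0, 0.6],
                [0.5, 0.4, 0.6, 1.0]])
print(select_positives(sim, anchor=0))  # -> [1, 3]
```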
- CGMN: A Contrastive Graph Matching Network for Self-Supervised Graph Similarity Learning [65.1042892570989]
We propose a contrastive graph matching network (CGMN) for self-supervised graph similarity learning.
We employ two strategies, namely cross-view interaction and cross-graph interaction, for effective node representation learning.
We transform node representations into graph-level representations via pooling operations for graph similarity computation.
arXiv Detail & Related papers (2022-05-30T13:20:26Z)
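CGMN's final step, pooling node representations into graph-level vectors for similarity computation, can be sketched with a simple readout (PyTorch assumed); the cross-view and cross-graph interaction layers are omitted.

```python
# Simple readout: pool node embeddings (n x d) to one graph-level vector,
# then score graph similarity by cosine. Interaction layers omitted.
import torch
import torch.nn.functional as F


def graph_readout(h, mode="mean"):
    """Mean- or max-pool node representations into a graph vector."""
    return h.mean(dim=0) if mode == "mean" else h.max(dim=0).values


h1, h2 = torch.randn(12, 32), torch.randn(9, 32)  # node embeddings of two graphs
score = F.cosine_similarity(graph_readout(h1), graph_readout(h2), dim=0)
print(float(score))  # similarity in [-1, 1]
```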
- Joint Graph Learning and Matching for Semantic Feature Correspondence [69.71998282148762]
We propose a joint graph learning and matching network, named GLAM, to explore reliable graph structures for boosting graph matching.
The proposed method is evaluated on three popular visual matching benchmarks (Pascal VOC, Willow Object and SPair-71k).
It outperforms previous state-of-the-art graph matching methods by significant margins on all benchmarks.
arXiv Detail & Related papers (2021-09-01T08:24:02Z)
- COLOGNE: Coordinated Local Graph Neighborhood Sampling [1.6498361958317633]
Replacing discrete unordered objects such as graph nodes by real-valued vectors is at the heart of many approaches to learning from graph data.
We address the problem of learning discrete node embeddings such that the coordinates of the node vector representations are graph nodes.
This opens the door to designing interpretable machine learning algorithms for graphs as all attributes originally present in the nodes are preserved.
arXiv Detail & Related papers (2021-02-09T11:39:06Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)