Hierarchical Topology Isomorphism Expertise Embedded Graph Contrastive
Learning
- URL: http://arxiv.org/abs/2312.14222v2
- Date: Mon, 25 Dec 2023 07:10:25 GMT
- Title: Hierarchical Topology Isomorphism Expertise Embedded Graph Contrastive
Learning
- Authors: Jiangmeng Li, Yifan Jin, Hang Gao, Wenwen Qiang, Changwen Zheng,
Fuchun Sun
- Abstract summary: We propose a novel hierarchical topology isomorphism expertise embedded graph contrastive learning method.
We empirically demonstrate that the proposed method is universal to multiple state-of-the-art GCL models.
Our method beats the state-of-the-art method by 0.23% in the unsupervised representation learning setting.
- Score: 37.0788516033498
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph contrastive learning (GCL) aims to align the positive features while
differentiating the negative features in the latent space by minimizing a
pair-wise contrastive loss. As the embodiment of an outstanding discriminative
unsupervised graph representation learning approach, GCL achieves impressive
successes on various graph benchmarks. However, such an approach falls short of
recognizing the topology isomorphism of graphs, so graphs with relatively
homogeneous node features cannot be sufficiently discriminated. By revisiting
classic graph topology recognition works, we find that the corresponding
expertise intuitively complements GCL methods. To this end, we propose a novel
hierarchical topology isomorphism expertise embedded graph contrastive learning
method, which introduces knowledge distillation to empower GCL models to learn
hierarchical topology isomorphism expertise at both the graph tier and the
subgraph tier. Moreover, the proposed method is plug-and-play, and we
empirically demonstrate that it is universal to multiple state-of-the-art GCL
models. We further provide solid theoretical analyses proving that, compared
with conventional GCL methods, our method achieves a tighter upper bound on the
Bayes classification error. We conduct extensive experiments on real-world
benchmarks to exhibit the performance superiority of our method over candidate
GCL methods; e.g., in real-world graph representation learning experiments, the
proposed method beats the state-of-the-art method by 0.23% in the unsupervised
representation learning setting and by 0.43% in the transfer learning setting.
Our code is available at https://github.com/jyf123/HTML.
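To make the pair-wise contrastive objective described above concrete, the sketch below implements a minimal InfoNCE-style loss for a single anchor embedding in pure Python. The function names, the temperature value, and the toy embeddings are illustrative assumptions, not the paper's implementation; real GCL models compute this in batched tensor form over learned graph representations.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def info_nce(anchor, positive, negatives, tau=0.5):
    """Pair-wise contrastive (InfoNCE-style) loss for one anchor:
    minimizing it pulls the positive view toward the anchor in the
    latent space while pushing the negative views away."""
    pos = math.exp(cosine(anchor, positive) / tau)
    negs = sum(math.exp(cosine(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + negs))

# Toy example: an anchor whose positive view is well aligned incurs a
# lower loss than one whose positive view points toward a negative.
loss_aligned = info_nce([1.0, 0.0], [0.9, 0.1], [[0.0, 1.0]])
loss_misaligned = info_nce([1.0, 0.0], [0.0, 1.0], [[0.9, 0.1]])
```

Averaging this quantity over all anchor-positive pairs in a batch yields the pair-wise contrastive loss that GCL methods minimize.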
Related papers
- Tensor-Fused Multi-View Graph Contrastive Learning [12.412040359604163]
Graph contrastive learning (GCL) has emerged as a promising approach to enhance graph neural networks' (GNNs) ability to learn rich representations from unlabeled graph-structured data.
Current GCL models face challenges with computational demands and limited feature utilization.
We propose TensorMV-GCL, a novel framework that integrates extended persistent homology with GCL representations and facilitates multi-scale feature extraction.
arXiv Detail & Related papers (2024-10-20T01:40:12Z) - Rethinking Graph Masked Autoencoders through Alignment and Uniformity [26.86368034133612]
Self-supervised learning on graphs can be bifurcated into contrastive and generative methods.
The recent advent of the graph masked autoencoder (GraphMAE) has rekindled momentum behind generative methods.
arXiv Detail & Related papers (2024-02-11T15:21:08Z) - Architecture Matters: Uncovering Implicit Mechanisms in Graph
Contrastive Learning [34.566003077992384]
We present a systematic study of various graph contrastive learning (GCL) methods.
By uncovering how the implicit inductive bias of GNNs works in contrastive learning, we theoretically provide insights into the above intriguing properties of GCL.
Rather than directly porting existing NN methods to GCL, we advocate for more attention toward the unique architecture of graph learning.
arXiv Detail & Related papers (2023-11-05T15:54:17Z) - M2HGCL: Multi-Scale Meta-Path Integrated Heterogeneous Graph Contrastive
Learning [16.391439666603578]
We propose a new multi-scale meta-path integrated heterogeneous graph contrastive learning (M2HGCL) model.
Specifically, we expand the meta-paths and jointly aggregate the direct neighbor information, the initial meta-path neighbor information and the expanded meta-path neighbor information.
Through extensive experiments on three real-world datasets, we demonstrate that M2HGCL outperforms the current state-of-the-art baseline models.
arXiv Detail & Related papers (2023-09-03T06:39:56Z) - HomoGCL: Rethinking Homophily in Graph Contrastive Learning [64.85392028383164]
HomoGCL is a model-agnostic framework that expands the positive set using neighbor nodes with neighbor-specific significance.
We show that HomoGCL yields multiple state-of-the-art results across six public datasets.
arXiv Detail & Related papers (2023-06-16T04:06:52Z) - Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z) - Single-Pass Contrastive Learning Can Work for Both Homophilic and
Heterophilic Graph [60.28340453547902]
Graph contrastive learning (GCL) techniques typically require two forward passes for a single instance to construct the contrastive loss.
Existing GCL approaches fail to provide strong performance guarantees.
We implement the Single-Pass Graph Contrastive Learning method (SP-GCL).
Empirically, the features learned by the SP-GCL can match or outperform existing strong baselines with significantly less computational overhead.
arXiv Detail & Related papers (2022-11-20T07:18:56Z) - Revisiting Heterophily in Graph Convolution Networks by Learning
Representations Across Topological and Feature Spaces [20.775165967590173]
Graph convolution networks (GCNs) have been enormously successful in learning representations over several graph-based machine learning tasks.
We argue that by learning graph representations across two spaces, i.e., the topology space and the feature space, GCNs can address heterophily.
We experimentally demonstrate the performance of the proposed GCN framework on the semi-supervised node classification task.
arXiv Detail & Related papers (2022-11-01T16:21:10Z) - Unifying Graph Contrastive Learning with Flexible Contextual Scopes [57.86762576319638]
We present a self-supervised learning method termed Unifying Graph Contrastive Learning with Flexible Contextual Scopes (UGCL for short).
Our algorithm builds flexible contextual representations with contextual scopes by controlling the power of an adjacency matrix.
Based on representations from both local and contextual scopes, UGCL optimises a very simple contrastive loss function for graph representation learning.
arXiv Detail & Related papers (2022-10-17T07:16:17Z) - Uncovering the Structural Fairness in Graph Contrastive Learning [87.65091052291544]
Graph contrastive learning (GCL) has emerged as a promising self-supervised approach for learning node representations.
We show that representations obtained by GCL methods are already fairer with respect to degree bias than those learned by GCN.
We devise a novel graph augmentation method, called GRAph contrastive learning for DEgree bias (GRADE), which applies different strategies to low- and high-degree nodes.
arXiv Detail & Related papers (2022-10-06T15:58:25Z) - Geometry Contrastive Learning on Heterogeneous Graphs [50.58523799455101]
This paper proposes a novel self-supervised learning method, termed as Geometry Contrastive Learning (GCL)
GCL views a heterogeneous graph from the Euclidean and hyperbolic perspectives simultaneously, aiming to combine the ability to model rich semantics with the ability to model complex structures.
Extensive experiments on four benchmark data sets show that the proposed approach outperforms strong baselines.
arXiv Detail & Related papers (2022-06-25T03:54:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.