Dual-perspective Cross Contrastive Learning in Graph Transformers
- URL: http://arxiv.org/abs/2406.00403v1
- Date: Sat, 1 Jun 2024 11:11:49 GMT
- Title: Dual-perspective Cross Contrastive Learning in Graph Transformers
- Authors: Zelin Yao, Chuang Liu, Xueqi Ma, Mukun Chen, Jia Wu, Xiantao Cai, Bo Du, Wenbin Hu,
- Abstract summary: Graph contrastive learning (GCL) is a popular method for learning graph representations.
This paper proposes a framework termed dual-perspective cross graph contrastive learning (DC-GCL)
DC-GCL incorporates three modifications designed to enhance positive sample diversity and reliability.
- Score: 33.18813968554711
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph contrastive learning (GCL) is a popular method for learning graph representations by maximizing the consistency of features across augmented views. Traditional GCL methods utilize single-perspective (i.e., data- or model-perspective) augmentation to generate positive samples, limiting the diversity of positive samples. In addition, these positive samples may be unreliable due to uncontrollable augmentation strategies that potentially alter the semantic information. To address these challenges, this paper proposes an innovative framework termed dual-perspective cross graph contrastive learning (DC-GCL), which incorporates three modifications designed to enhance positive sample diversity and reliability: 1) We propose a dual-perspective augmentation strategy that provides the model with more diverse training data, enabling it to effectively learn feature consistency across different views. 2) From the data perspective, we slightly perturb the original graphs using controllable data augmentation, effectively preserving their semantic information. 3) From the model perspective, we enhance the encoder by utilizing more powerful graph transformers instead of graph neural networks. Based on the model's architecture, we propose three pruning-based strategies to slightly perturb the encoder, providing more reliable positive samples. These modifications collectively form the foundation of DC-GCL and provide more diverse and reliable training inputs, offering significant improvements over traditional GCL methods. Extensive experiments on various benchmarks demonstrate that DC-GCL consistently outperforms different baselines on various datasets and tasks.
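A minimal sketch of the dual-perspective idea in plain PyTorch (a hypothetical illustration, not the paper's actual architecture: the toy encoder, perturbation magnitudes, pruning rule, and temperature are all assumptions). One positive view slightly perturbs the input features (data perspective); the other forwards the unmodified graph through a lightly pruned copy of the encoder weights (model perspective); an InfoNCE-style loss then maximizes their agreement:

```python
import torch
import torch.nn.functional as F

class ToyGraphEncoder(torch.nn.Module):
    """Stand-in for the paper's graph transformer: one propagation step
    (A_hat @ X) followed by a linear map and mean pooling over nodes."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, hid_dim)

    def forward(self, a_hat, x):
        return F.relu(self.lin(a_hat @ x)).mean(dim=0)

def data_view(x, noise=0.01):
    """Data perspective: a small, controllable feature perturbation
    intended to leave the graph's semantics intact."""
    return x + noise * torch.randn_like(x)

def model_view(encoder, a_hat, x, prune_p=0.05):
    """Model perspective: forward through a lightly pruned copy of the
    encoder weights (a crude stand-in for the pruning strategies)."""
    mask = (torch.rand_like(encoder.lin.weight) > prune_p).float()
    w = encoder.lin.weight * mask                    # pruned weight copy
    h = F.relu(a_hat @ x @ w.t() + encoder.lin.bias)
    return h.mean(dim=0)

def info_nce(z1, z2, tau=0.5):
    """Cross-view contrastive loss; positives sit on the diagonal."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    return F.cross_entropy(logits, torch.arange(z1.size(0)))

# toy batch: 4 random graphs with 10 nodes and 8 features each
enc = ToyGraphEncoder(8, 16)
graphs = [(torch.softmax(torch.randn(10, 10), dim=1), torch.randn(10, 8))
          for _ in range(4)]
z1 = torch.stack([enc(a, data_view(x)) for a, x in graphs])
z2 = torch.stack([model_view(enc, a, x) for a, x in graphs])
info_nce(z1, z2).backward()
```

Keeping both perturbations small is the point: each view stays semantically close to the original graph, so the positives remain reliable while still differing from each other.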
Related papers
- Adversarial Curriculum Graph Contrastive Learning with Pair-wise Augmentation [35.875976206333185]
ACGCL uses pair-wise augmentation to generate graph-level positive and negative samples with controllable similarity.
Within the ACGCL framework, we devise a novel adversarial curriculum training methodology.
A comprehensive assessment of ACGCL is conducted through extensive experiments on six well-known benchmark datasets.
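A generic sketch of the controllable-similarity idea above (hypothetical, not ACGCL's actual procedure): edge dropping with a tunable drop rate acts as a similarity knob, and a curriculum schedule ramps the sample pairs from nearly identical (easy) to heavily perturbed (hard):

```python
import torch

def edge_drop(edge_index, drop_p):
    """Keep each edge with probability 1 - drop_p; a larger drop_p
    yields a sample less similar to the original graph."""
    keep = torch.rand(edge_index.size(1)) > drop_p
    return edge_index[:, keep]

def curriculum_pair(edge_index, epoch, max_epochs, max_drop=0.5):
    """Draw a pair of views whose difficulty grows with the epoch."""
    drop_p = max_drop * epoch / max_epochs
    return edge_drop(edge_index, drop_p), edge_drop(edge_index, drop_p)

# edges stored as a (2, E) index tensor, as in common GNN libraries
edge_index = torch.randint(0, 10, (2, 30))
easy = curriculum_pair(edge_index, epoch=1, max_epochs=100)
hard = curriculum_pair(edge_index, epoch=99, max_epochs=100)
```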
arXiv Detail & Related papers (2024-02-16T06:17:50Z)
- Unveiling Backbone Effects in CLIP: Exploring Representational Synergies and Variances [49.631908848868505]
Contrastive Language-Image Pretraining (CLIP) stands out as a prominent method for image representation learning.
We investigate the differences in CLIP performance among various neural architectures.
We propose a simple, yet effective approach to combine predictions from multiple backbones, leading to a notable performance boost of up to 6.34%.
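One plausible reading of such a combination rule is simple probability averaging (a hypothetical sketch; the paper's exact scheme may differ, and the logits below are stand-ins for real zero-shot scores from different backbones such as ViT-B/32 or RN50):

```python
import torch

def ensemble_probs(per_backbone_logits):
    """Average class probabilities over backbones; the input is a list
    of (batch, num_classes) logit tensors, one per backbone."""
    probs = [logits.softmax(dim=-1) for logits in per_backbone_logits]
    return torch.stack(probs).mean(dim=0)

# stand-in zero-shot logits from three hypothetical CLIP backbones
logits = [torch.randn(4, 10) for _ in range(3)]
predictions = ensemble_probs(logits).argmax(dim=-1)
```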
arXiv Detail & Related papers (2023-12-22T03:01:41Z)
- Adversarial Learning Data Augmentation for Graph Contrastive Learning in Recommendation [56.10351068286499]
We propose Learnable Data Augmentation for Graph Contrastive Learning (LDA-GCL)
Our methods include data augmentation learning and graph contrastive learning, which follow the InfoMin and InfoMax principles, respectively.
In implementation, our methods optimize the adversarial loss function to learn data augmentation and effective representations of users and items.
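Schematically (a standard reading of InfoMin/InfoMax, with $f$ the encoder, $T$ the learned augmenter, $\mathcal{G}$ a user-item graph, and $\hat{I}_{\mathrm{NCE}}$ the InfoNCE estimate of mutual information), the adversarial objective can be written as:

$$\min_{T}\;\max_{f}\;\hat{I}_{\mathrm{NCE}}\!\left(f(\mathcal{G}),\, f(T(\mathcal{G}))\right)$$

The inner maximization trains the encoder to agree across views (InfoMax), while the outer minimization trains the augmenter to strip away redundant shared information (InfoMin).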
arXiv Detail & Related papers (2023-02-05T06:55:51Z)
- MA-GCL: Model Augmentation Tricks for Graph Contrastive Learning [41.963242524220654]
We present three easy-to-implement model augmentation tricks for graph contrastive learning (GCL): asymmetric, random, and shuffling.
Experimental results show that MA-GCL can achieve state-of-the-art performance on node classification benchmarks.
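A sketch of the "random" trick under one plausible reading (the two views share all parameters but use randomly drawn propagation depths; details may differ from the paper):

```python
import torch
import torch.nn.functional as F

def propagate(a_hat, x, k):
    """Apply k propagation steps: A_hat^k @ X."""
    for _ in range(k):
        x = a_hat @ x
    return x

def random_depth_views(a_hat, x, lin, k_max=4):
    """Two views from one shared encoder, differing only in how many
    propagation operators are applied (a model-side augmentation)."""
    k1, k2 = torch.randint(1, k_max + 1, (2,)).tolist()
    z1 = F.relu(lin(propagate(a_hat, x, k1)))
    z2 = F.relu(lin(propagate(a_hat, x, k2)))
    return z1, z2

# toy graph: 10 nodes, 8 features, row-normalized random adjacency
lin = torch.nn.Linear(8, 16)
a_hat = torch.softmax(torch.randn(10, 10), dim=1)
z1, z2 = random_depth_views(a_hat, torch.randn(10, 8), lin)
```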
arXiv Detail & Related papers (2022-12-14T05:04:10Z)
- Attribute Graph Clustering via Learnable Augmentation [71.36827095487294]
Contrastive deep graph clustering (CDGC) utilizes contrastive learning to group nodes into different clusters.
We propose an Attribute Graph Clustering method via Learnable Augmentation (AGCLA), which introduces learnable augmentors for high-quality augmented samples.
arXiv Detail & Related papers (2022-12-07T10:19:39Z)
- Augmentations in Hypergraph Contrastive Learning: Fabricated and Generative [126.0985540285981]
We apply the contrastive learning approach from images/graphs (referred to as HyperGCL) to improve the generalizability of hypergraph neural networks.
We fabricate two schemes to augment hyperedges with higher-order relations encoded, and adopt three augmentation strategies from graph-structured data.
We propose a hypergraph generative model to generate augmented views, and then an end-to-end differentiable pipeline to jointly learn hypergraph augmentations and model parameters.
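A toy sketch of the generative route (hypothetical: a learned scorer produces a soft node-hyperedge incidence matrix, relaxed with Binary Concrete noise so gradients can flow end to end; the paper's generator is more elaborate):

```python
import torch

class ToyHyperedgeGenerator(torch.nn.Module):
    """Scores node membership for m generated hyperedges and samples a
    differentiable (soft) incidence matrix via the Binary Concrete trick."""
    def __init__(self, in_dim, m):
        super().__init__()
        self.score = torch.nn.Linear(in_dim, m)

    def forward(self, x, tau=0.5):
        logits = self.score(x)                        # (num_nodes, m)
        u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
        noise = torch.log(u) - torch.log1p(-u)        # logistic noise
        return torch.sigmoid((logits + noise) / tau)  # soft incidence

# 10 nodes with 8 features, 5 generated hyperedges
gen = ToyHyperedgeGenerator(8, 5)
incidence = gen(torch.randn(10, 8))  # entries near 1 = node in hyperedge
```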
arXiv Detail & Related papers (2022-10-07T20:12:20Z)
- Heterogeneous Graph Contrastive Multi-view Learning [11.489983916543805]
Graph contrastive learning (GCL) has been developed to learn discriminative node representations on graph datasets.
We propose a novel Heterogeneous Graph Contrastive Multi-view Learning (HGCML) model.
HGCML consistently outperforms state-of-the-art baselines on five real-world benchmark datasets.
arXiv Detail & Related papers (2022-10-01T10:53:48Z)
- Adversarial Cross-View Disentangled Graph Contrastive Learning [30.97720522293301]
We introduce ACDGCL, which follows the information bottleneck principle to learn minimal yet sufficient representations from graph data.
We empirically demonstrate that our proposed model outperforms state-of-the-art methods on the graph classification task across multiple benchmark datasets.
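In information-bottleneck terms (a schematic reading, with $Z$ the learned representation, $G$ the input graph, $Y$ the downstream label, and $\beta$ a trade-off weight), "minimal yet sufficient" corresponds to:

$$\max_{Z}\; I(Z; Y) \;-\; \beta\, I(Z; G)$$

where the first term enforces sufficiency for the task and the second discourages encoding input information the task does not need.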
arXiv Detail & Related papers (2022-09-16T03:48:39Z)
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- Diversified Multiscale Graph Learning with Graph Self-Correction [55.43696999424127]
We propose a diversified multiscale graph learning model equipped with two core ingredients: a graph self-correction (GSC) mechanism to generate informative embedded graphs, and a diversity boosting regularizer (DBR) to achieve a comprehensive characterization of the input graph.
Experiments on popular graph classification benchmarks show that the proposed GSC mechanism leads to significant improvements over state-of-the-art graph pooling methods.
arXiv Detail & Related papers (2021-03-17T16:22:24Z)