Homophily-Driven Sanitation View for Robust Graph Contrastive Learning
- URL: http://arxiv.org/abs/2307.12555v1
- Date: Mon, 24 Jul 2023 06:41:59 GMT
- Title: Homophily-Driven Sanitation View for Robust Graph Contrastive Learning
- Authors: Yulin Zhu, Xing Ai, Yevgeniy Vorobeychik, Kai Zhou
- Abstract summary: We investigate adversarial robustness of unsupervised Graph Contrastive Learning (GCL) against structural attacks.
We present a robust GCL framework that integrates a homophily-driven sanitation view, which can be learned jointly with contrastive learning.
We conduct extensive experiments to evaluate the performance of our proposed model, GCHS, against two state-of-the-art structural attacks on GCL.
- Score: 28.978770069310276
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We investigate adversarial robustness of unsupervised Graph Contrastive
Learning (GCL) against structural attacks. First, we provide a comprehensive
empirical and theoretical analysis of existing attacks, revealing how and why
they downgrade the performance of GCL. Inspired by our analytic results, we
present a robust GCL framework that integrates a homophily-driven sanitation
view, which can be learned jointly with contrastive learning. A key challenge
this poses, however, is the non-differentiable nature of the sanitation
objective. To address this challenge, we propose a series of techniques to
enable gradient-based end-to-end robust GCL. Moreover, we develop a fully
unsupervised hyperparameter tuning method which, unlike prior approaches, does
not require knowledge of node labels. We conduct extensive experiments to
evaluate the performance of our proposed model, GCHS (Graph Contrastive
Learning with Homophily-driven Sanitation View), against two
state-of-the-art structural attacks on GCL. Our results demonstrate that
GCHS consistently outperforms all state-of-the-art baselines in terms of
the quality of the generated node embeddings as well as performance on two
important downstream tasks.
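
The sanitation idea described in the abstract can be illustrated with a minimal, hypothetical sketch: score each edge by the cosine similarity of its endpoint features (a common homophily proxy) and keep only the most homophilous edges. The function name, the toy features, and the fixed keep-ratio are all illustrative assumptions; GCHS learns its sanitation view jointly with the contrastive objective rather than applying a hard threshold like this.

```python
import numpy as np

def sanitize_edges(features, edges, keep_ratio=0.9):
    """Hypothetical homophily-based edge sanitation: keep the
    top-`keep_ratio` fraction of edges ranked by the cosine
    similarity of their endpoint features."""
    # Normalize features so dot products equal cosine similarities.
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    normed = features / np.clip(norms, 1e-12, None)
    scores = np.array([normed[u] @ normed[v] for u, v in edges])
    # Keep the k highest-scoring (most homophilous) edges.
    k = max(1, int(len(edges) * keep_ratio))
    keep = np.argsort(scores)[-k:]
    return [edges[i] for i in sorted(keep)]

# Toy graph: nodes 0 and 1 share similar features, node 2 is dissimilar.
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
E = [(0, 1), (0, 2), (1, 2)]
print(sanitize_edges(X, E, keep_ratio=0.34))  # keeps only (0, 1)
```

A structural attack that injects heterophilous edges would see its added edges ranked lowest by this score, which is the intuition behind using homophily as the sanitation signal.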
Related papers
- Towards Robust Recommendation via Decision Boundary-aware Graph Contrastive Learning [25.514007761856632]
Graph contrastive learning (GCL) has received increasing attention in recommender systems due to its effectiveness in reducing bias caused by data sparsity.
We argue that these methods struggle to balance semantic invariance and view hardness across the dynamic training process.
We propose a novel GCL-based recommendation framework RGCL, which effectively maintains the semantic invariance of contrastive pairs and dynamically adapts as the model capability evolves.
arXiv Detail & Related papers (2024-07-14T13:03:35Z)
- Overcoming Pitfalls in Graph Contrastive Learning Evaluation: Toward Comprehensive Benchmarks [60.82579717007963]
We introduce an enhanced evaluation framework designed to more accurately gauge the effectiveness, consistency, and overall capability of Graph Contrastive Learning (GCL) methods.
arXiv Detail & Related papers (2024-02-24T01:47:56Z)
- Adversarial Curriculum Graph Contrastive Learning with Pair-wise Augmentation [35.875976206333185]
ACGCL capitalizes on the merits of pair-wise augmentation to engender graph-level positive and negative samples with controllable similarity.
Within the ACGCL framework, we have devised a novel adversarial curriculum training methodology.
A comprehensive assessment of ACGCL is conducted through extensive experiments on six well-known benchmark datasets.
arXiv Detail & Related papers (2024-02-16T06:17:50Z)
- HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray-box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z)
- Graph-level Protein Representation Learning by Structure Knowledge Refinement [50.775264276189695]
This paper focuses on learning representation on the whole graph level in an unsupervised manner.
We propose a novel framework called Structure Knowledge Refinement (SKR) which uses data structure to determine the probability of whether a pair is positive or negative.
arXiv Detail & Related papers (2024-01-05T09:05:33Z)
- Rethinking and Simplifying Bootstrapped Graph Latents [48.76934123429186]
Graph contrastive learning (GCL) has emerged as a representative paradigm in graph self-supervised learning.
We present SGCL, a simple yet effective GCL framework that utilizes the outputs from two consecutive iterations as positive pairs.
We show that SGCL can achieve competitive performance with fewer parameters, lower time and space costs, and significant convergence speedup.
arXiv Detail & Related papers (2023-12-05T09:49:50Z)
- On the Adversarial Robustness of Graph Contrastive Learning Methods [9.675856264585278]
We introduce a comprehensive robustness evaluation protocol tailored to assess the robustness of graph contrastive learning (GCL) models.
We subject these models to adaptive adversarial attacks targeting the graph structure, specifically in the evasion scenario.
With our work, we aim to offer insights into the robustness of GCL methods and hope to open avenues for potential future research directions.
arXiv Detail & Related papers (2023-11-29T17:59:18Z)
- Certifiably Robust Graph Contrastive Learning [43.029361784095016]
We develop the first certifiably robust framework in Graph Contrastive Learning (GCL).
We first propose unified criteria to evaluate and certify the robustness of GCL.
We then introduce a novel technique, RES (Randomized Edgedrop Smoothing), to ensure certifiable robustness for any GCL model.
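
The edge-drop primitive underlying smoothing approaches such as RES can be sketched as follows; this is only a hypothetical illustration of random edge dropping, and the certification analysis that makes the smoothing provably robust is not reproduced here.

```python
import random

def randomized_edgedrop(edges, drop_prob=0.2, seed=0):
    """Hypothetical sketch: independently drop each edge with
    probability `drop_prob`, producing one randomly sparsified
    sample of the input graph."""
    rng = random.Random(seed)
    return [e for e in edges if rng.random() >= drop_prob]

# One sparsified sample of a toy path graph.
E = [(0, 1), (1, 2), (2, 3)]
print(randomized_edgedrop(E, drop_prob=0.5, seed=1))
```

In a smoothing scheme, many such samples are drawn and the model's outputs are aggregated over them, so that a bounded number of adversarial edges changes the aggregate prediction only with small probability.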
arXiv Detail & Related papers (2023-10-05T05:00:11Z)
- Similarity Preserving Adversarial Graph Contrastive Learning [5.671825576834061]
We propose SP-AGCL, a similarity-preserving adversarial graph contrastive learning framework.
We show that SP-AGCL achieves competitive performance on several downstream tasks.
arXiv Detail & Related papers (2023-06-24T04:02:50Z)
- Single-Pass Contrastive Learning Can Work for Both Homophilic and Heterophilic Graph [60.28340453547902]
Graph contrastive learning (GCL) techniques typically require two forward passes for a single instance to construct the contrastive loss.
Existing GCL approaches fail to provide strong performance guarantees.
We implement the Single-Pass Graph Contrastive Learning method (SP-GCL).
Empirically, the features learned by the SP-GCL can match or outperform existing strong baselines with significantly less computational overhead.
arXiv Detail & Related papers (2022-11-20T07:18:56Z)
- Uncovering the Structural Fairness in Graph Contrastive Learning [87.65091052291544]
Graph contrastive learning (GCL) has emerged as a promising self-supervised approach for learning node representations.
We show that representations obtained by GCL methods are already less affected by degree bias than those learned by GCN.
We devise a novel graph augmentation method, called GRAph contrastive learning for DEgree bias (GRADE), which applies different strategies to low- and high-degree nodes.
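
The degree split at the heart of such degree-aware augmentation can be sketched as below; this is a hypothetical illustration of partitioning nodes by degree, and GRADE's actual per-group strategies are not reproduced here.

```python
from collections import Counter

def split_by_degree(edges, threshold=2):
    """Hypothetical sketch: tag each node as 'low' or 'high' degree
    so an augmentation can apply a different strategy to each group."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return {n: ("low" if deg[n] < threshold else "high") for n in deg}

# Toy star-plus-edge graph: node 3 has degree 1, the rest degree >= 2.
E = [(0, 1), (0, 2), (0, 3), (1, 2)]
print(split_by_degree(E, threshold=2))
```

A degree-aware scheme would then, for example, augment the 'low' group more conservatively than the 'high' group, which is the general motivation for treating the two populations differently.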
arXiv Detail & Related papers (2022-10-06T15:58:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.