Robust Graph Contrastive Learning with Information Restoration
- URL: http://arxiv.org/abs/2307.12555v3
- Date: Fri, 22 Aug 2025 07:13:15 GMT
- Title: Robust Graph Contrastive Learning with Information Restoration
- Authors: Yulin Zhu, Xing Ai, Yevgeniy Vorobeychik, Kai Zhou
- Abstract summary: We investigate the detrimental effects of graph structural attacks against the graph contrastive learning (GCL) framework. Motivated by this theoretical insight, we propose a robust graph contrastive learning framework with a learnable sanitation view.
- Score: 32.990253155612386
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The graph contrastive learning (GCL) framework has gained remarkable achievements in graph representation learning. However, similar to graph neural networks (GNNs), GCL models are susceptible to graph structural attacks. As an unsupervised method, GCL faces greater challenges in defending against adversarial attacks. Furthermore, there has been limited research on enhancing the robustness of GCL. To thoroughly explore the failure of GCL on the poisoned graphs, we investigate the detrimental effects of graph structural attacks against the GCL framework. We discover that, in addition to the conventional observation that graph structural attacks tend to connect dissimilar node pairs, these attacks also diminish the mutual information between the graph and its representations from an information-theoretical perspective, which is the cornerstone of the high-quality node embeddings for GCL. Motivated by this theoretical insight, we propose a robust graph contrastive learning framework with a learnable sanitation view that endeavors to sanitize the augmented graphs by restoring the diminished mutual information caused by the structural attacks. Additionally, we design a fully unsupervised tuning strategy to tune the hyperparameters without accessing the label information, which strictly coincides with the defender's knowledge. Extensive experiments demonstrate the effectiveness and efficiency of our proposed method compared to competitive baselines.
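The abstract's information-theoretic argument rests on the fact that standard GCL objectives are InfoNCE-style losses, which lower-bound the mutual information between the representations of two augmented views; structural attacks that diminish this mutual information therefore degrade the very quantity the loss maximizes. The sketch below is a generic NT-Xent-style InfoNCE loss in NumPy, not the paper's actual implementation; the function name, temperature default, and array shapes are illustrative assumptions.

```python
import numpy as np

def infonce_loss(z1, z2, tau=0.5):
    """Generic InfoNCE objective between node embeddings of two graph views.

    z1, z2: (n_nodes, dim) arrays; row i of each view is a positive pair.
    Minimizing this loss maximizes a lower bound on the mutual
    information between the two views' representations.
    """
    # L2-normalize rows so dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = (z1 @ z2.T) / tau  # (n, n) cross-view similarity matrix
    # Row-wise log-softmax; positives sit on the diagonal,
    # every other column in the row acts as a negative sample.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Under this reading, a poisoning attack that perturbs edges so that the two views' embeddings decorrelate pushes the positive-pair similarities down and this loss up, which is one way to see why a sanitation view that restores the lost mutual information helps.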
Related papers
- Dual-Kernel Graph Community Contrastive Learning [14.92920991249099]
Graph Contrastive Learning (GCL) has emerged as a powerful paradigm for training Graph Neural Networks (GNNs). We propose an efficient GCL framework that transforms the input graph into a compact network of interconnected node sets. Our method outperforms state-of-the-art GCL baselines in both effectiveness and scalability.
arXiv Detail & Related papers (2025-11-11T14:20:39Z) - Robust Graph Condensation via Classification Complexity Mitigation [61.22258715077984]
Graph condensation is an intrinsic-dimension-reducing process, synthesizing a condensed graph with lower classification complexity. We introduce three graph data manifold learning modules that guide the condensed graph to lie within a smooth, low-dimensional manifold. Experiments demonstrate the robustness of ModelName across diverse attack scenarios.
arXiv Detail & Related papers (2025-10-30T12:55:21Z) - Khan-GCL: Kolmogorov-Arnold Network Based Graph Contrastive Learning with Hard Negatives [3.440313042843115]
Khan-GCL is a novel framework that integrates the Kolmogorov-Arnold Network (KAN) into the GCL encoder architecture. We exploit the rich information embedded within KAN coefficient parameters to develop two novel critical feature identification techniques. These techniques enable the generation of semantically meaningful hard negative samples for each graph representation.
arXiv Detail & Related papers (2025-05-21T04:54:18Z) - Squeeze and Excitation: A Weighted Graph Contrastive Learning for Collaborative Filtering [1.3535213052193866]
Graph contrastive learning (GCL) aims to enhance the robustness of representation learning. The Weighted Graph Contrastive Learning framework (WeightedGCL) addresses the irrational allocation of feature attention. WeightedGCL achieves significant accuracy improvements compared to competitive baselines.
arXiv Detail & Related papers (2025-04-06T11:30:59Z) - Graph Structure Refinement with Energy-based Contrastive Learning [56.957793274727514]
We introduce an unsupervised method, based on joint generative and discriminative training, to learn graph structure and representation.
We propose an Energy-based Contrastive Learning (ECL) guided Graph Structure Refinement (GSR) framework, denoted as ECL-GSR.
ECL-GSR achieves faster training with fewer samples and lower memory usage than the leading baseline, highlighting its simplicity and efficiency in downstream tasks.
arXiv Detail & Related papers (2024-12-20T04:05:09Z) - Towards Robust Recommendation via Decision Boundary-aware Graph Contrastive Learning [25.514007761856632]
Graph contrastive learning (GCL) has received increasing attention in recommender systems due to its effectiveness in reducing bias caused by data sparsity.
We argue that these methods struggle to balance between semantic invariance and view hardness across the dynamic training process.
We propose a novel GCL-based recommendation framework RGCL, which effectively maintains the semantic invariance of contrastive pairs and dynamically adapts as the model capability evolves.
arXiv Detail & Related papers (2024-07-14T13:03:35Z) - Overcoming Pitfalls in Graph Contrastive Learning Evaluation: Toward Comprehensive Benchmarks [60.82579717007963]
We introduce an enhanced evaluation framework designed to more accurately gauge the effectiveness, consistency, and overall capability of Graph Contrastive Learning (GCL) methods.
arXiv Detail & Related papers (2024-02-24T01:47:56Z) - Adversarial Curriculum Graph Contrastive Learning with Pair-wise Augmentation [35.875976206333185]
ACGCL capitalizes on the merits of pair-wise augmentation to engender graph-level positive and negative samples with controllable similarity.
Within the ACGCL framework, we have devised a novel adversarial curriculum training methodology.
A comprehensive assessment of ACGCL is conducted through extensive experiments on six well-known benchmark datasets.
arXiv Detail & Related papers (2024-02-16T06:17:50Z) - HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z) - Graph-level Protein Representation Learning by Structure Knowledge Refinement [50.775264276189695]
This paper focuses on learning representation on the whole graph level in an unsupervised manner.
We propose a novel framework called Structure Knowledge Refinement (SKR) which uses data structure to determine the probability of whether a pair is positive or negative.
arXiv Detail & Related papers (2024-01-05T09:05:33Z) - Rethinking and Simplifying Bootstrapped Graph Latents [48.76934123429186]
Graph contrastive learning (GCL) has emerged as a representative paradigm in graph self-supervised learning.
We present SGCL, a simple yet effective GCL framework that utilizes the outputs from two consecutive iterations as positive pairs.
We show that SGCL can achieve competitive performance with fewer parameters, lower time and space costs, and significant convergence speedup.
arXiv Detail & Related papers (2023-12-05T09:49:50Z) - On the Adversarial Robustness of Graph Contrastive Learning Methods [9.675856264585278]
We introduce a comprehensive evaluation protocol tailored to assess the robustness of graph contrastive learning (GCL) models.
We subject these models to adaptive adversarial attacks targeting the graph structure, specifically in the evasion scenario.
With our work, we aim to offer insights into the robustness of GCL methods and hope to open avenues for potential future research directions.
arXiv Detail & Related papers (2023-11-29T17:59:18Z) - Certifiably Robust Graph Contrastive Learning [43.029361784095016]
We develop the first certifiably robust framework in Graph Contrastive Learning (GCL).
We first propose a unified criteria to evaluate and certify the robustness of GCL.
We then introduce a novel technique, RES (Randomized Edgedrop Smoothing), to ensure certifiable robustness for any GCL model.
arXiv Detail & Related papers (2023-10-05T05:00:11Z) - Similarity Preserving Adversarial Graph Contrastive Learning [5.671825576834061]
We propose a similarity-preserving adversarial graph contrastive learning framework.
In this paper, we show that SP-AGCL achieves a competitive performance on several downstream tasks.
arXiv Detail & Related papers (2023-06-24T04:02:50Z) - Single-Pass Contrastive Learning Can Work for Both Homophilic and Heterophilic Graph [60.28340453547902]
Graph contrastive learning (GCL) techniques typically require two forward passes for a single instance to construct the contrastive loss.
Existing GCL approaches fail to provide strong performance guarantees.
We implement the Single-Pass Graph Contrastive Learning method (SP-GCL).
Empirically, the features learned by the SP-GCL can match or outperform existing strong baselines with significantly less computational overhead.
arXiv Detail & Related papers (2022-11-20T07:18:56Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, Graph Injection Attack.
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - Uncovering the Structural Fairness in Graph Contrastive Learning [87.65091052291544]
Graph contrastive learning (GCL) has emerged as a promising self-supervised approach for learning node representations.
We show that representations obtained by GCL methods are already fairer to degree bias than those learned by GCN.
We devise a novel graph augmentation method, called GRAph contrastive learning for DEgree bias (GRADE), which applies different strategies to low- and high-degree nodes.
arXiv Detail & Related papers (2022-10-06T15:58:25Z) - Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.