Similarity Preserving Adversarial Graph Contrastive Learning
- URL: http://arxiv.org/abs/2306.13854v1
- Date: Sat, 24 Jun 2023 04:02:50 GMT
- Title: Similarity Preserving Adversarial Graph Contrastive Learning
- Authors: Yeonjun In, Kanghoon Yoon, Chanyoung Park
- Abstract summary: We propose a similarity-preserving adversarial graph contrastive learning framework.
In this paper, we show that SP-AGCL achieves a competitive performance on several downstream tasks.
- Score: 5.671825576834061
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent works demonstrate that GNN models are vulnerable to adversarial
attacks, which refer to imperceptible perturbation on the graph structure and
node features. Among various GNN models, graph contrastive learning (GCL) based
methods specifically suffer from adversarial attacks due to their inherent
design that highly depends on the self-supervision signals derived from the
original graph, which however already contains noise when the graph is
attacked. To achieve adversarial robustness against such attacks, existing
methods apply adversarial training (AT) to the GCL framework, treating the
attacked graph as an additional augmentation. However, we find
that existing adversarially trained GCL methods achieve robustness at the
expense of not being able to preserve the node feature similarity. In this
paper, we propose a similarity-preserving adversarial graph contrastive
learning (SP-AGCL) framework that contrasts the clean graph with two auxiliary
views of different properties (i.e., the node similarity-preserving view and
the adversarial view). Extensive experiments demonstrate that SP-AGCL achieves
a competitive performance on several downstream tasks, and shows its
effectiveness in various scenarios, e.g., a network with adversarial attacks,
noisy labels, and heterophilous neighbors. Our code is available at
https://github.com/yeonjun-in/torch-SP-AGCL.
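The core idea of the abstract, contrasting a clean view against two auxiliary views, can be illustrated with a minimal sketch. The code below is a hypothetical illustration, not the authors' implementation: it assumes node embeddings for each view are already computed, uses a standard InfoNCE objective where the same node across two views forms the positive pair, and mixes the two contrastive terms with a weight `alpha` (all names and the mixing scheme are assumptions for illustration).

```python
import numpy as np

def info_nce(z_anchor, z_view, tau=0.5):
    """InfoNCE loss: node i in the anchor view should match node i in the
    other view (positive pair) and repel every other node (negatives)."""
    # L2-normalize rows so dot products become cosine similarities
    a = z_anchor / np.linalg.norm(z_anchor, axis=1, keepdims=True)
    b = z_view / np.linalg.norm(z_view, axis=1, keepdims=True)
    logits = a @ b.T / tau                       # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # positives lie on the diagonal

def multi_view_contrastive_loss(z_clean, z_adv, z_sim, alpha=0.5):
    """Contrast the clean view against both auxiliary views
    (an adversarial view and a node-similarity-preserving view)."""
    return alpha * info_nce(z_clean, z_adv) + (1 - alpha) * info_nce(z_clean, z_sim)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    z_clean = rng.normal(size=(8, 16))
    # Stand-ins for the two auxiliary views: small perturbations of the clean embeddings
    z_adv = z_clean + 0.1 * rng.normal(size=(8, 16))
    z_sim = z_clean + 0.1 * rng.normal(size=(8, 16))
    print(float(multi_view_contrastive_loss(z_clean, z_adv, z_sim)))
```

In practice the three embedding matrices would come from a shared GNN encoder applied to the clean graph, the adversarially perturbed graph, and the similarity-preserving graph, respectively; the sketch only shows how the two contrastive terms combine.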
Related papers
- Sparse but Strong: Crafting Adversarially Robust Graph Lottery Tickets [3.325501850627077]
Graph Lottery Tickets (GLTs) can significantly reduce the inference latency and compute footprint compared to their dense counterparts.
Despite these benefits, their performance against adversarial structure perturbations remains to be fully explored.
We present an adversarially robust graph sparsification framework that prunes the adjacency matrix and the GNN weights.
arXiv Detail & Related papers (2023-12-11T17:52:46Z) - HC-Ref: Hierarchical Constrained Refinement for Robust Adversarial Training of GNNs [7.635985143883581]
Adversarial training, which has been shown to be one of the most effective defense mechanisms against adversarial attacks in computer vision, holds great promise for enhancing the robustness of GNNs.
We propose a hierarchical constraint refinement framework (HC-Ref) that enhances the anti-perturbation capabilities of GNNs and downstream classifiers separately.
arXiv Detail & Related papers (2023-12-08T07:32:56Z) - On the Adversarial Robustness of Graph Contrastive Learning Methods [9.675856264585278]
We introduce a comprehensive evaluation protocol tailored to assess the robustness of graph contrastive learning (GCL) models.
We subject these models to adaptive adversarial attacks targeting the graph structure, specifically in the evasion scenario.
With our work, we aim to offer insights into the robustness of GCL methods and hope to open avenues for potential future research directions.
arXiv Detail & Related papers (2023-11-29T17:59:18Z) - Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z) - Single-Pass Contrastive Learning Can Work for Both Homophilic and Heterophilic Graph [60.28340453547902]
Graph contrastive learning (GCL) techniques typically require two forward passes for a single instance to construct the contrastive loss.
Existing GCL approaches fail to provide strong performance guarantees.
We implement the Single-Pass Graph Contrastive Learning method (SP-GCL).
Empirically, the features learned by the SP-GCL can match or outperform existing strong baselines with significantly less computational overhead.
arXiv Detail & Related papers (2022-11-20T07:18:56Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely, the Graph Injection Attack (GIA).
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - Let Invariant Rationale Discovery Inspire Graph Contrastive Learning [98.10268114789775]
We argue that a high-performing augmentation should preserve the salient semantics of anchor graphs regarding instance-discrimination.
We propose a new framework, Rationale-aware Graph Contrastive Learning (RGCL)
RGCL uses a rationale generator to reveal salient features about graph instance-discrimination as the rationale, and then creates rationale-aware views for contrastive learning.
arXiv Detail & Related papers (2022-06-16T01:28:40Z) - Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z) - EGC2: Enhanced Graph Classification with Easy Graph Compression [3.599345724913102]
We propose EGC$^2$, an enhanced graph classification model with easy graph compression.
EGC$^2$ captures the relationship between features of different nodes by constructing feature graphs and improving the aggregation of node-level representations.
Experiments on seven benchmark datasets demonstrate that the proposed feature read-out and graph compression mechanisms enhance the robustness of various basic models.
arXiv Detail & Related papers (2021-07-16T07:17:29Z) - GraphAttacker: A General Multi-Task GraphAttack Framework [4.218118583619758]
Graph Neural Networks (GNNs) have been successfully exploited in graph analysis tasks in many real-world applications.
However, GNNs are vulnerable to adversarial samples generated by attackers, which achieve strong attack performance with almost imperceptible perturbations.
We propose GraphAttacker, a novel generic graph attack framework that can flexibly adjust the structures and the attack strategies according to the graph analysis tasks.
arXiv Detail & Related papers (2021-01-18T03:06:41Z) - Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
arXiv Detail & Related papers (2020-05-20T17:07:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides (including all generated summaries) and is not responsible for any consequences arising from its use.