Unsupervised Graph Poisoning Attack via Contrastive Loss Back-propagation
- URL: http://arxiv.org/abs/2201.07986v2
- Date: Sat, 22 Jan 2022 09:38:15 GMT
- Title: Unsupervised Graph Poisoning Attack via Contrastive Loss Back-propagation
- Authors: Sixiao Zhang, Hongxu Chen, Xiangguo Sun, Yicong Li, Guandong Xu
- Abstract summary: We propose a novel unsupervised gradient-based adversarial attack that does not rely on labels for graph contrastive learning.
Our attack outperforms unsupervised baseline attacks and has comparable performance with supervised attacks in multiple downstream tasks.
- Score: 18.671374133506838
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph contrastive learning is the state-of-the-art unsupervised graph
representation learning framework and has shown performance comparable to
supervised approaches. However, whether graph contrastive learning is robust to
adversarial attacks remains an open problem, because most existing graph
adversarial attacks are supervised: they rely heavily on labels and can therefore
only evaluate graph contrastive learning in specific scenarios. For unsupervised
graph representation methods such as graph contrastive learning, labels are hard
to acquire in real-world settings, which makes traditional supervised attack
methods difficult to apply when testing their robustness. In this paper, we
propose a novel unsupervised gradient-based adversarial attack on graph
contrastive learning that does not rely on labels. We compute the gradients of
the adjacency matrices of the two views and flip edges by gradient ascent to
maximize the contrastive loss. In this way, we fully exploit the multiple views
generated by graph contrastive learning models and pick the most informative
edges without any label information, which lets the attack adapt to a wider range
of downstream tasks. Extensive experiments show that our attack outperforms
unsupervised baseline attacks and performs comparably to supervised attacks on
multiple downstream tasks, including node classification and link prediction. We
further show that our attack transfers to other graph representation models as
well.
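A minimal sketch of this attack in PyTorch may help make it concrete. The encoder interface, the InfoNCE-style loss, and the `make_views` hook are illustrative assumptions, not the authors' released code; real contrastive models often generate views non-differentiably, in which case the gradient would be taken through a relaxed adjacency as here.

```python
import torch
import torch.nn.functional as F

def infonce_loss(z1, z2, tau=0.5):
    """InfoNCE-style contrastive loss between two sets of node embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau                               # pairwise similarities
    labels = torch.arange(z1.size(0), device=sim.device)  # positives on the diagonal
    return F.cross_entropy(sim, labels)

def poison_edges(encoder, features, adj, make_views, n_flips):
    """Flip the n_flips edges whose gradients most increase the contrastive loss."""
    adj = adj.clone().float().requires_grad_(True)
    view1, view2 = make_views(adj)                 # the model's two augmented views
    loss = infonce_loss(encoder(features, view1), encoder(features, view2))
    grad = torch.autograd.grad(loss, adj)[0]
    # Gradient ascent on the loss: an absent edge is worth adding when its
    # gradient is positive; an existing edge is worth removing when negative.
    score = grad * (1 - 2 * adj.detach())
    score = torch.triu(score, diagonal=1)          # undirected: score each pair once
    idx = score.flatten().topk(n_flips).indices
    n = adj.size(0)
    rows = torch.div(idx, n, rounding_mode="floor")
    cols = idx % n
    poisoned = adj.detach().clone()
    poisoned[rows, cols] = 1 - poisoned[rows, cols]
    poisoned[cols, rows] = poisoned[rows, cols]    # keep the matrix symmetric
    return poisoned
```

The term `(1 - 2 * adj)` encodes the gradient-ascent rule: an absent edge is a candidate for insertion when its gradient is positive, and an existing edge is a candidate for deletion when its gradient is negative.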
Related papers
- Uncovering Capabilities of Model Pruning in Graph Contrastive Learning [0.0]
We reformulate graph contrastive learning as contrasting different model versions rather than augmented views.
We extensively validate our method on various graph classification benchmarks via unsupervised and transfer learning.
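As a hedged illustration of the model-version idea (the helper names are hypothetical, and `infonce_loss` is the one defined in the sketch above): contrast the intact encoder against a pruned copy of itself instead of contrasting two augmented views.

```python
import copy
import torch
import torch.nn.utils.prune as prune

def model_version_loss(encoder, features, adj, amount=0.3):
    """Contrast the full encoder against a pruned copy of itself,
    in place of contrasting two augmented graph views."""
    pruned = copy.deepcopy(encoder)
    for module in pruned.modules():
        if isinstance(module, torch.nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=amount)
    return infonce_loss(encoder(features, adj), pruned(features, adj))
```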
arXiv Detail & Related papers (2024-10-27T07:09:31Z)
- GPS: Graph Contrastive Learning via Multi-scale Augmented Views from Adversarial Pooling [23.450755275125577]
Self-supervised graph representation learning has recently shown considerable promise in a range of fields, including bioinformatics and social networks.
We present a novel approach named Graph Pooling ContraSt (GPS) to address these issues.
Motivated by the fact that graph pooling can adaptively coarsen the graph with the removal of redundancy, we rethink graph pooling and leverage it to automatically generate multi-scale positive views.
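A rough sketch of pooling-generated views, assuming generic top-k node scoring rather than the paper's specific adversarial pooling:

```python
import torch

def topk_pool_view(features, adj, scores, ratio=0.5):
    """Coarsen the graph by keeping the top-scoring nodes; the induced
    subgraph serves as one multi-scale positive view."""
    k = max(1, int(ratio * adj.size(0)))
    idx = scores.topk(k).indices
    return features[idx], adj[idx][:, idx]
```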
arXiv Detail & Related papers (2024-01-29T10:00:53Z)
- Deceptive Fairness Attacks on Graphs via Meta Learning [102.53029537886314]
We study deceptive fairness attacks on graphs to answer the question: how can a poisoning attack on a graph learning model deceptively exacerbate bias?
We propose a meta learning-based framework named FATE to attack various fairness definitions and graph learning models.
We conduct extensive experimental evaluations on real-world datasets in the task of semi-supervised node classification.
arXiv Detail & Related papers (2023-10-24T09:10:14Z)
- ARIEL: Adversarial Graph Contrastive Learning [51.14695794459399]
ARIEL consistently outperforms the current graph contrastive learning methods for both node-level and graph-level classification tasks.
ARIEL is more robust in the face of adversarial attacks.
arXiv Detail & Related papers (2022-08-15T01:24:42Z)
- Adversarial Graph Contrastive Learning with Information Regularization [51.14695794459399]
Contrastive learning is an effective method in graph representation learning.
Data augmentation on graphs, however, is far less intuitive, and it is much harder to generate high-quality contrastive samples.
We propose a simple but effective method, Adversarial Graph Contrastive Learning (ARIEL).
It consistently outperforms the current graph contrastive learning methods in the node classification task over various real-world datasets.
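One common recipe for such an adversarial augmentation, sketched here as a PGD-style inner loop on node features (whether ARIEL perturbs features, structure, or both is specified in the paper, not here; `infonce_loss` is from the first sketch):

```python
import torch

def adversarial_view(encoder, features, adj, eps=0.01, steps=5):
    """Build a harder positive view by nudging node features in the direction
    that increases the contrastive loss (PGD-style inner maximization)."""
    pert = features.clone().requires_grad_(True)
    for _ in range(steps):
        loss = infonce_loss(encoder(pert, adj), encoder(features, adj))
        grad = torch.autograd.grad(loss, pert)[0]
        with torch.no_grad():
            pert += eps * grad.sign()
    return pert.detach()
```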
arXiv Detail & Related papers (2022-02-14T05:54:48Z)
- Graph Self-supervised Learning with Accurate Discrepancy Learning [64.69095775258164]
We propose a framework that aims to learn the exact discrepancy between the original and the perturbed graphs, coined Discrepancy-based Self-supervised LeArning (D-SLA).
We validate our method on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction tasks, on which our model largely outperforms relevant baselines.
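One plausible reading of "learning the exact discrepancy", with hypothetical names rather than the paper's objective verbatim: make the embedding distance between the original and a perturbed graph track the known number of edits.

```python
import torch
import torch.nn.functional as F

def discrepancy_loss(z_orig, z_pert, n_edits, alpha=1.0):
    """Tie the embedding distance between original and perturbed graphs to
    the known amount of perturbation, not just a same/different signal."""
    dist = (z_orig - z_pert).norm(dim=1)
    return F.mse_loss(dist, alpha * n_edits.float())
```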
arXiv Detail & Related papers (2022-02-07T08:04:59Z)
- Unbiased Graph Embedding with Biased Graph Observations [52.82841737832561]
We propose a principled new way for obtaining unbiased representations by learning from an underlying bias-free graph.
Based on this new perspective, we propose two complementary methods for uncovering such an underlying graph.
arXiv Detail & Related papers (2021-10-26T18:44:37Z)
- Adversarial Attack Framework on Graph Embedding Models with Limited Knowledge [126.32842151537217]
Existing works usually perform the attack in a white-box fashion.
We instead aim to attack various kinds of graph embedding models in a black-box fashion.
We prove that GF-Attack can perform an effective attack without knowing the number of layers of graph embedding models.
arXiv Detail & Related papers (2021-05-26T09:18:58Z)
- Certified Robustness of Graph Classification against Topology Attack with Randomized Smoothing [22.16111584447466]
Graph-based machine learning models are vulnerable to adversarial perturbations due to the non-i.i.d. nature of graph data.
We build a smoothed graph classification model with a certified robustness guarantee.
We also evaluate the effectiveness of our approach under graph convolutional network (GCN) based multi-class graph classification model.
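The smoothing step itself is straightforward to sketch as a generic randomized-smoothing vote over random edge flips; the certified guarantee in the paper is derived from the margin of this vote, which is omitted here.

```python
import torch

def smoothed_predict(classifier, features, adj, num_classes, n=100, p=0.05):
    """Majority vote over randomly edge-flipped copies of the graph; the
    certified radius follows from the margin of this vote."""
    votes = torch.zeros(num_classes, dtype=torch.long)
    for _ in range(n):
        flips = (torch.rand_like(adj.float()) < p).float()
        flips = torch.triu(flips, diagonal=1)
        flips = flips + flips.t()                 # symmetric perturbation
        noisy = (adj.float() + flips) % 2         # XOR: flip the sampled edges
        votes[classifier(features, noisy).argmax()] += 1
    return votes.argmax()
```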
arXiv Detail & Related papers (2020-09-12T22:18:54Z)