Unsupervised Graph Poisoning Attack via Contrastive Loss Back-propagation
- URL: http://arxiv.org/abs/2201.07986v2
- Date: Sat, 22 Jan 2022 09:38:15 GMT
- Title: Unsupervised Graph Poisoning Attack via Contrastive Loss Back-propagation
- Authors: Sixiao Zhang, Hongxu Chen, Xiangguo Sun, Yicong Li, Guandong Xu
- Abstract summary: We propose a novel unsupervised gradient-based adversarial attack that does not rely on labels for graph contrastive learning.
Our attack outperforms unsupervised baseline attacks and achieves performance comparable to supervised attacks on multiple downstream tasks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph contrastive learning is the state-of-the-art unsupervised graph
representation learning framework and has shown performance comparable to
supervised approaches. However, whether graph contrastive learning is robust to
adversarial attacks remains an open problem, because most existing graph
adversarial attacks are supervised models: they rely heavily on labels and can
therefore evaluate graph contrastive learning only in specific scenarios. For
unsupervised graph representation methods such as graph contrastive learning,
labels are difficult to acquire in real-world scenarios, which makes
traditional supervised graph attack methods hard to apply when testing their
robustness. In this paper, we propose a novel unsupervised gradient-based
adversarial attack on graph contrastive learning that does not rely on labels.
We compute the gradients of the adjacency matrices of the two views and flip
edges by gradient ascent to maximize the contrastive loss. In this way, we can
fully exploit the multiple views generated by graph contrastive learning models
and pick the most informative edges without knowing any labels, which allows
the attack to adapt to a wider range of downstream tasks. Extensive experiments
show that our attack outperforms unsupervised baseline attacks and performs
comparably to supervised attacks on multiple downstream tasks, including node
classification and link prediction. We further show that our attack transfers
to other graph representation models as well.
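
The core loop the abstract describes, computing gradients of the adjacency matrix through the contrastive loss and greedily flipping the highest-scoring edges, can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: `encoder` (a pre-trained GCL encoder taking node features and a dense adjacency matrix), `view_fn` (a differentiable stochastic augmentation producing one view of the graph), and the NT-Xent-style loss below are all assumed stand-ins for the paper's setup.

```python
import torch
import torch.nn.functional as F


def contrastive_loss(z1, z2, tau=0.5):
    """NT-Xent-style loss; row i of each view forms a positive pair."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau  # scaled cosine similarities
    target = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim, target)


def poison(encoder, x, adj, view_fn, budget=10):
    """Greedily flip `budget` edges whose gradients most increase the loss."""
    adj = adj.clone().float()
    for _ in range(budget):
        a = adj.detach().requires_grad_(True)
        # Two stochastic views of the (partially poisoned) graph, as in GCL training.
        loss = contrastive_loss(encoder(x, view_fn(a)), encoder(x, view_fn(a)))
        grad = torch.autograd.grad(loss, a)[0]
        # Gradient ascent on the loss: adding an edge (0 -> 1) helps when the
        # gradient is positive; removing one (1 -> 0) helps when it is negative.
        score = grad * (1.0 - 2.0 * adj)
        idx = torch.argmax(score.triu(1))  # best single flip, upper triangle
        i, j = divmod(idx.item(), adj.size(1))
        adj[i, j] = adj[j, i] = 1.0 - adj[i, j]  # symmetric flip (undirected graph)
    return adj
```

The one-flip-per-step greedy loop is a common approximation in gradient-based graph attacks; the gradient is recomputed after every flip so each choice reflects the already-poisoned adjacency matrix.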