Indirect Adversarial Attacks via Poisoning Neighbors for Graph Convolutional Networks
- URL: http://arxiv.org/abs/2002.08012v1
- Date: Wed, 19 Feb 2020 05:44:09 GMT
- Title: Indirect Adversarial Attacks via Poisoning Neighbors for Graph Convolutional Networks
- Authors: Tsubasa Takahashi
- Abstract summary: By abusing graph convolutions, a node's classification result can be influenced by poisoning its neighbors.
We generate strong adversarial perturbations that are effective not only on one-hop neighbors but also on nodes farther from the target.
Our proposed method achieves a 99% attack success rate within two hops of the target on two datasets.
- Score: 0.76146285961466
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph convolutional neural networks, which learn aggregations over
neighbor nodes, have achieved great performance in node classification tasks.
However, recent studies reported that such graph convolutional node classifiers
can be deceived by adversarial perturbations on graphs. By abusing graph
convolutions, an attacker can influence a node's classification result by
poisoning its neighbors. Given an attributed graph and a node classifier, how
can we evaluate robustness against such indirect adversarial attacks? Can we
generate strong adversarial perturbations that are effective not only on
one-hop neighbors but also on nodes farther from the target? In this paper, we
demonstrate that the node classifier can be deceived with high confidence by
poisoning just a single node, even one located two or more hops away from the
target. To achieve this attack, we propose a new approach that searches for
small perturbations on just a single node far from the target. In our
experiments, the proposed method achieves a 99% attack success rate within two
hops of the target on two datasets. We also demonstrate that m-layer graph
convolutional neural networks can be deceived by our indirect attack launched
from within their m-hop neighborhood. The proposed attack can serve as a
benchmark for future defenses that aim to develop adversarially robust graph
convolutional neural networks.
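As an illustration of the m-hop exposure described above, here is a minimal toy sketch (our own example, not the paper's code, datasets, or attack algorithm): in a two-layer GCN, each graph convolution widens the target's receptive field by one hop, so perturbing a node two hops away shifts the target's logits, while a node three hops away has no influence at all. The path graph, random weights, and features below are illustrative assumptions.

```python
import numpy as np

def normalize_adj(A):
    """Symmetrically normalized adjacency with self-loops:
    D^{-1/2} (A + I) D^{-1/2}, as in Kipf & Welling's GCN."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def gcn_logits(A_norm, X, W1, W2):
    """Two-layer GCN: A_norm @ relu(A_norm @ X @ W1) @ W2.
    Two propagation steps -> a two-hop receptive field per node."""
    H = np.maximum(A_norm @ X @ W1, 0.0)
    return A_norm @ H @ W2

rng = np.random.default_rng(0)
# Path graph 0-1-2-3: node 2 is two hops, node 3 three hops from target 0.
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
A_norm = normalize_adj(A)
X = rng.normal(size=(4, 5))
W1, W2 = rng.normal(size=(5, 8)), rng.normal(size=(8, 2))
target = 0

clean = gcn_logits(A_norm, X, W1, W2)[target]

X2 = X.copy(); X2[2] += 10.0      # poison the two-hop neighbor
X3 = X.copy(); X3[3] += 10.0      # poison the three-hop neighbor

two_hop = gcn_logits(A_norm, X2, W1, W2)[target]    # logits shift
three_hop = gcn_logits(A_norm, X3, W1, W2)[target]  # identical to `clean`
print(clean, two_hop, three_hop)
```

Poisoning node 3 leaves the target untouched only because the network has two layers; this is the sense in which an m-layer GCN is exposed to indirect attacks from anywhere in its m-hop neighborhood.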
Related papers
- Link Stealing Attacks Against Inductive Graph Neural Networks [60.931106032824275]
A graph neural network (GNN) is a type of neural network that is specifically designed to process graph-structured data.
Previous work has shown that transductive GNNs are vulnerable to a series of privacy attacks.
This paper conducts a comprehensive privacy analysis of inductive GNNs through the lens of link stealing attacks.
arXiv Detail & Related papers (2024-05-09T14:03:52Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator; the underlying Gumbel-Softmax trick is sketched after this entry.
Experiments demonstrate the promising efficacy of the method in various tasks, including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
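The NodeFormer summary above names a kernelized Gumbel-Softmax operator; below is a sketch of just the standard Gumbel-Softmax trick it builds on (our illustration, not NodeFormer's kernelized variant): adding Gumbel noise to logits and applying a temperature-scaled softmax gives a differentiable relaxation of sampling a discrete choice, such as which edge to use for message passing.

```python
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=None):
    """Differentiable relaxation of categorical sampling:
    softmax((logits + Gumbel(0,1) noise) / tau). As tau -> 0 the sample
    approaches a hard one-hot choice; larger tau gives softer weights."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(1e-10, 1.0, size=logits.shape)
    g = -np.log(-np.log(u))              # Gumbel(0, 1) noise
    z = (logits + g) / tau
    z = np.exp(z - z.max())              # numerically stable softmax
    return z / z.sum()

# Example: soft selection among three candidate neighbors of a node.
print(gumbel_softmax(np.array([2.0, 0.5, -1.0]), tau=0.5,
                     rng=np.random.default_rng(0)))
```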
- GUAP: Graph Universal Attack Through Adversarial Patching [12.484396767037925]
Graph neural networks (GNNs) are a class of effective deep learning models for node classification tasks.
In this work, we consider an easier attack that is harder to notice: adversarially patching the graph with new nodes and edges.
We develop an algorithm, named GUAP, that achieves a high attack success rate while preserving prediction accuracy.
arXiv Detail & Related papers (2023-01-04T18:02:29Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely the Graph Injection Attack (GIA).
We propose CHAGNN, a general defense framework against GIA based on cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- Sparse Vicious Attacks on Graph Neural Networks [3.246307337376473]
This work focuses on a specific white-box attack on GNN-based link prediction models.
We propose SAVAGE, a novel framework and method for mounting this type of link prediction attack.
Experiments conducted on real-world and synthetic datasets demonstrate that adversarial attacks implemented through SAVAGE indeed achieve a high attack success rate.
arXiv Detail & Related papers (2022-09-20T12:51:24Z)
- Bandits for Structure Perturbation-based Black-box Attacks to Graph Neural Networks with Theoretical Guarantees [60.61846004535707]
Graph neural networks (GNNs) have achieved state-of-the-art performance in many graph-based tasks.
An attacker can mislead GNN models by slightly perturbing the graph structure.
In this paper, we consider black-box attacks on GNNs via structure perturbation and provide theoretical guarantees.
arXiv Detail & Related papers (2022-05-07T04:17:25Z)
- Query-based Adversarial Attacks on Graph with Fake Nodes [32.67989796394633]
We propose a novel adversarial attack by introducing a set of fake nodes to the original graph.
Specifically, we query the victim model for each victim node to acquire its most adversarial feature; a generic sketch of such a query loop follows this entry.
Our attack is performed in a practical and unnoticeable manner.
arXiv Detail & Related papers (2021-09-27T14:19:17Z)
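A hypothetical sketch of such a query loop (our illustration only; the `model` interface, the random candidate sampling, and the fake-node wiring are assumptions, not the paper's algorithm): inject one fake node connected to the victim and keep whichever queried feature vector most lowers the victim's confidence in its current label.

```python
import numpy as np

def most_adversarial_feature(model, X, A, victim, n_queries=50, seed=0):
    """Black-box search for a fake node's feature vector.
    `model(X, A)` is assumed to return an (n_nodes, n_classes) array of
    class probabilities; each call costs one query."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]                                # index of the new fake node
    A_aug = np.pad(A, ((0, 1), (0, 1)))           # adjacency grown by one node
    A_aug[victim, n] = A_aug[n, victim] = 1.0     # wire the fake node to victim
    base = np.vstack([X, np.zeros(X.shape[1])])   # fake node starts all-zero
    label = model(base, A_aug)[victim].argmax()   # victim's current prediction

    best_feat, best_score = base[n].copy(), 1.0
    for _ in range(n_queries):
        cand = base.copy()
        cand[n] = rng.normal(size=X.shape[1])     # random candidate feature
        score = model(cand, A_aug)[victim, label]
        if score < best_score:                    # lower confidence is better
            best_feat, best_score = cand[n].copy(), score
    return best_feat, best_score
```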
- A Hard Label Black-box Adversarial Attack Against Graph Neural Networks [25.081630882605985]
We conduct a systematic study on adversarial attacks against GNNs for graph classification via perturbing the graph structure.
We formulate our attack as an optimization problem whose objective is to minimize the number of edges perturbed in a graph while maintaining a high attack success rate; a schematic formulation follows this entry.
Our experimental results on three real-world datasets demonstrate that our attack can effectively attack representative GNNs for graph classification with fewer queries and perturbations.
arXiv Detail & Related papers (2021-08-21T14:01:34Z)
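A schematic of that optimization problem (our paraphrase under common notation, not the paper's exact formulation): given a graph classifier f, adjacency A with features X, and predicted label y, find a perturbed adjacency A' that changes the prediction while editing as few edges as possible:

```latex
\min_{A' \in \{0,1\}^{n \times n}} \; \lVert A' - A \rVert_{0}
\quad \text{s.t.} \quad f(A', X) \neq y
```

Here \lVert A' - A \rVert_{0} counts the number of flipped edge entries; in the hard-label setting the attacker only observes the predicted label, so the constraint must be checked by querying the model.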
- Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814]
Graph Neural Networks (GNNs) have boosted performance on many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead a GNN's predictions by modifying graphs.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
arXiv Detail & Related papers (2021-08-07T07:44:33Z)
- Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose Pro-GNN, a general framework that can jointly learn a graph structure and a robust graph neural network model; a schematic of such a joint objective follows this entry.
arXiv Detail & Related papers (2020-05-20T17:07:05Z)
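A schematic of this kind of joint structure-learning objective (our illustration under common assumptions, not necessarily Pro-GNN's exact loss): learn a new adjacency S that stays close to the observed A while encouraging properties clean graphs tend to have, and train the GNN parameters on S:

```latex
\min_{S, \theta} \; \mathcal{L}\big(f_{\theta}(S, X), y\big)
\;+\; \lambda \lVert S - A \rVert_F^2
\;+\; \alpha \lVert S \rVert_1
\;+\; \beta \lVert S \rVert_*
```

where the \ell_1 norm promotes sparsity, the nuclear norm \lVert S \rVert_* promotes low rank, and \mathcal{L} trains the GNN f_{\theta} on the learned structure.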