Efficient, Direct, and Restricted Black-Box Graph Evasion Attacks to
Any-Layer Graph Neural Networks via Influence Function
- URL: http://arxiv.org/abs/2009.00203v3
- Date: Sat, 16 Dec 2023 01:41:11 GMT
- Title: Efficient, Direct, and Restricted Black-Box Graph Evasion Attacks to
Any-Layer Graph Neural Networks via Influence Function
- Authors: Binghui Wang, Tianxiang Zhou, Minhua Lin, Pan Zhou, Ang Li, Meng Pang,
Hai Li, Yiran Chen
- Abstract summary: Graph neural network (GNN), the mainstream method for learning on graph data, is vulnerable to graph evasion attacks.
Existing work has at least one of the following drawbacks: 1) limited to directly attacking two-layer GNNs; 2) inefficient; and 3) impractical, as they need to know all or part of the GNN model parameters.
We propose an influence-based \emph{efficient, direct, and restricted black-box} evasion attack to \emph{any-layer} GNNs.
- Score: 62.89388227354517
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Graph neural network (GNN), the mainstream method for learning on
graph data, is vulnerable to graph evasion attacks, where an attacker can fool
trained GNN models by slightly perturbing the graph structure. Existing work
has at least one of the following drawbacks: 1) limited to directly attacking
two-layer GNNs; 2) inefficient; and 3) impractical, as they need to know all or
part of the GNN model parameters.
We address the above drawbacks and propose an influence-based
\emph{efficient, direct, and restricted black-box} evasion attack to
\emph{any-layer} GNNs. Specifically, we first introduce two influence
functions, i.e., feature-label influence and label influence, that are defined
on GNNs and label propagation (LP), respectively. Then we observe that GNNs and
LP are strongly connected in terms of our defined influences. Based on this, we
can then reformulate the evasion attack to GNNs as calculating label influence
on LP, which is \emph{inherently} applicable to any-layer GNNs and requires no
knowledge of the internal GNN model. Finally, we propose an
efficient algorithm to calculate label influence. Experimental results on
various graph datasets show that, compared to state-of-the-art white-box
attacks, our attack can achieve comparable attack performance, but has a 5-50x
speedup when attacking two-layer GNNs. Moreover, our attack is also effective
at attacking multi-layer GNNs\footnote{Source code and the full version are
available at: \url{https://github.com/ventr1c/InfAttack}}.
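To make the LP reformulation concrete, below is a minimal sketch (Python with NumPy) of the high-level idea: score a candidate edge flip around a target node by how much it shifts the labels that propagate to that node, without touching any GNN parameters. The row-normalized k-step propagation rule, the function names propagate_labels and label_influence, and the true-class-mass score are illustrative assumptions, not the paper's actual influence definitions or algorithm; the authors' implementation is at the linked repository.

import numpy as np

def propagate_labels(adj, labels_onehot, k):
    """k-step row-normalized label propagation (a common LP variant)."""
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                      # avoid division by zero for isolated nodes
    P = adj / deg                            # row-stochastic transition matrix
    Y = labels_onehot.astype(float)
    for _ in range(k):
        Y = P @ Y
    return Y

def label_influence(adj, labels_onehot, target, edge, k=2):
    """Illustrative score: how much flipping one edge reduces the target
    node's propagated mass on its true class (larger = more damaging)."""
    u, v = edge
    true_class = labels_onehot[target].argmax()
    before = propagate_labels(adj, labels_onehot, k)[target, true_class]
    adj_flipped = adj.copy()
    adj_flipped[u, v] = adj_flipped[v, u] = 1.0 - adj_flipped[u, v]
    after = propagate_labels(adj_flipped, labels_onehot, k)[target, true_class]
    return before - after

if __name__ == "__main__":
    # Toy usage on a random undirected graph: rank edge flips incident to a target node.
    rng = np.random.default_rng(0)
    n, c, target = 8, 2, 0
    upper = np.triu((rng.random((n, n)) < 0.3).astype(float), 1)
    adj = upper + upper.T
    labels = np.eye(c)[rng.integers(0, c, size=n)]
    candidates = [(target, v) for v in range(1, n)]
    scores = {e: label_influence(adj, labels, target, e) for e in candidates}
    best = max(scores, key=scores.get)
    print(f"flip {best} has the largest estimated label influence: {scores[best]:.3f}")

In the paper's setting, influence scores of this flavor are what let the attacker pick structure perturbations for the target node while treating the GNN itself as a black box; the exact definitions and the efficient computation are given in the paper.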
Related papers
- Link Stealing Attacks Against Inductive Graph Neural Networks [60.931106032824275]
A graph neural network (GNN) is a type of neural network that is specifically designed to process graph-structured data.
Previous work has shown that transductive GNNs are vulnerable to a series of privacy attacks.
This paper conducts a comprehensive privacy analysis of inductive GNNs through the lens of link stealing attacks.
arXiv Detail & Related papers (2024-05-09T14:03:52Z)
- Bandits for Structure Perturbation-based Black-box Attacks to Graph Neural Networks with Theoretical Guarantees [60.61846004535707]
Graph neural networks (GNNs) have achieved state-of-the-art performance in many graph-based tasks.
An attacker can mislead GNN models by slightly perturbing the graph structure.
In this paper, we consider black-box attacks to GNNs via structure perturbation and provide theoretical guarantees.
arXiv Detail & Related papers (2022-05-07T04:17:25Z)
- Robustness of Graph Neural Networks at Scale [63.45769413975601]
We study how to attack and defend Graph Neural Networks (GNNs) at scale.
We propose two sparsity-aware first-order optimization attacks that maintain an efficient representation.
We show that common surrogate losses are not well-suited for global attacks on GNNs.
arXiv Detail & Related papers (2021-10-26T21:31:17Z)
- Adapting Membership Inference Attacks to GNN for Graph Classification: Approaches and Implications [32.631077336656936]
Membership Inference Attack (MIA) against Graph Neural Networks (GNNs) raises severe privacy concerns.
We take the first step in MIA against GNNs for graph-level classification.
We present and implement two types of attacks, i.e., training-based attacks and threshold-based attacks, under different adversarial capabilities.
arXiv Detail & Related papers (2021-10-17T08:41:21Z)
- A Hard Label Black-box Adversarial Attack Against Graph Neural Networks [25.081630882605985]
We conduct a systematic study on adversarial attacks against GNNs for graph classification via perturbing the graph structure.
We formulate our attack as an optimization problem whose objective is to minimize the number of perturbed edges in a graph while maintaining a high attack success rate.
Our experimental results on three real-world datasets demonstrate that our attack can effectively attack representative GNNs for graph classification with fewer queries and perturbations.
arXiv Detail & Related papers (2021-08-21T14:01:34Z)
- Adversarial Attack on Large Scale Graph [58.741365277995044]
Recent studies have shown that graph neural networks (GNNs) are vulnerable to perturbations due to a lack of robustness.
Currently, most works on attacking GNNs mainly use gradient information to guide the attack and achieve outstanding performance.
We argue that the main reason is that they have to use the whole graph for the attack, resulting in increasing time and space complexity as the data scale grows.
We present a practical metric named Degree Assortativity Change (DAC) to measure the impacts of adversarial attacks on graph data.
arXiv Detail & Related papers (2020-09-08T02:17:55Z)
- Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
arXiv Detail & Related papers (2020-05-20T17:07:05Z)