Simple and Efficient Partial Graph Adversarial Attack: A New Perspective
- URL: http://arxiv.org/abs/2308.07834v1
- Date: Tue, 15 Aug 2023 15:23:36 GMT
- Title: Simple and Efficient Partial Graph Adversarial Attack: A New Perspective
- Authors: Guanghui Zhu, Mengyu Chen, Chunfeng Yuan, and Yihua Huang
- Abstract summary: Existing global attack methods treat all nodes in the graph as their attack targets.
We propose a new method named partial graph attack (PGA), which selects only vulnerable nodes as attack targets.
PGA achieves significant improvements in both attack effect and attack efficiency over existing global attack methods.
- Score: 16.083311332179633
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As the study of graph neural networks becomes more intensive and
comprehensive, their robustness and security have received great research
interest. Existing global attack methods treat all nodes in the graph as
their attack targets. Although these methods have achieved excellent results,
there is still considerable room for improvement. The key problem is that
current approaches rigidly follow the definition of global attacks: they
ignore the fact that different nodes have different robustness and are not
equally resilient to attacks. From a global attacker's view, the attack
budget should be allocated wisely rather than wasted on highly robust nodes.
To this end, we propose a new method named partial graph attack (PGA), which
selects only vulnerable nodes as attack targets. First, to select the
vulnerable targets, we propose a hierarchical target selection policy that
allows attackers to focus on easy-to-attack nodes. Then, we propose a
cost-effective anchor-picking policy to pick the most promising anchors for
adding or removing edges, and a more aggressive iterative greedy-based attack
method to perform efficient attacks. Extensive experimental results
demonstrate that PGA achieves significant improvements in both attack effect
and attack efficiency over existing global attack methods.
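The hierarchical target selection policy is the core idea: spend the budget only where nodes are weak. The paper's exact policy is not reproduced here; below is a minimal sketch of one plausible first stage, assuming a surrogate model's logits are available and that a small classification margin marks an easy-to-attack node. The function name and the margin criterion are illustrative, not the authors' implementation.

```python
import numpy as np

def select_vulnerable_targets(logits, labels, k):
    """Rank nodes by classification margin (correct-class logit minus the
    best competing logit) and return the k lowest-margin nodes, i.e. the
    nodes a surrogate model is least confident about."""
    n = logits.shape[0]
    correct = logits[np.arange(n), labels]
    competing = logits.copy()
    competing[np.arange(n), labels] = -np.inf
    margin = correct - competing.max(axis=1)
    # Small (or negative) margins mark easy-to-attack nodes.
    return np.argsort(margin)[:k]

# Toy usage: 100 nodes, 4 classes, attack budget focused on 10 targets.
rng = np.random.default_rng(0)
logits = rng.normal(size=(100, 4))
labels = rng.integers(0, 4, size=100)
print(select_vulnerable_targets(logits, labels, k=10))
```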
Related papers
- AGSOA: Graph Neural Network Targeted Attack Based on Average Gradient and Structure Optimization [16.681157857248436]
Graph Neural Networks (GNNs) are vulnerable to adversarial attacks that cause performance degradation by adding small perturbations to the graph.
This paper proposes an attack on GNNs, called AGSOA, which consists of an average-gradient calculation module and a structure optimization module.
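As a rough illustration of the average-gradient idea (a sketch under assumptions, not the AGSOA algorithm): keep a running average of the attack-loss gradient with respect to the adjacency matrix across iterations, then flip the edge whose averaged gradient promises the largest loss increase. The `grad_fn` oracle below stands in for a surrogate GNN's gradient.

```python
import numpy as np

def average_gradient_attack(adj, grad_fn, budget=5):
    """Toy momentum-style attack: maintain a running mean of the attack-loss
    gradient w.r.t. the adjacency matrix, and at each step flip the edge
    whose averaged gradient most increases the loss."""
    adj = adj.copy()
    avg_grad = np.zeros_like(adj, dtype=float)
    for step in range(1, budget + 1):
        avg_grad += (grad_fn(adj) - avg_grad) / step  # running mean of gradients
        # A positive gradient favors adding the edge, a negative one removing it.
        score = np.where(adj == 0, avg_grad, -avg_grad)
        np.fill_diagonal(score, -np.inf)
        i, j = np.unravel_index(np.argmax(score), score.shape)
        adj[i, j] = adj[j, i] = 1 - adj[i, j]
    return adj

# Toy usage with a random gradient oracle standing in for a surrogate GNN.
rng = np.random.default_rng(1)
adj = (rng.random((8, 8)) < 0.3).astype(int)
adj = np.triu(adj, 1); adj = adj + adj.T
perturbed = average_gradient_attack(adj, lambda a: rng.normal(size=a.shape))
```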
arXiv Detail & Related papers (2024-06-19T05:29:20Z)
- Minimum Topology Attacks for Graph Neural Networks [70.17791814425148]
The robustness of Graph Neural Networks (GNNs) to adversarial topology attacks has received significant attention.
We propose a new type of topology attack, named minimum-budget topology attack, aiming to adaptively find the minimum perturbation sufficient for a successful attack on each node.
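The adaptive-budget idea can be pictured as a per-node loop that stops at the first successful misclassification, so each node consumes only the perturbations it needs. The sketch below conveys only that stop-at-success idea; the candidate ordering, the `is_misclassified` oracle, and the function name are hypothetical, and the paper's method searches adaptively rather than greedily.

```python
def minimum_budget_attack(node, flip_candidates, is_misclassified, max_budget=20):
    """Toy adaptive-budget loop: apply candidate edge flips one at a time and
    stop as soon as the target node is misclassified, so each node consumes
    only as much budget as it actually needs."""
    applied = []
    for flip in flip_candidates[:max_budget]:
        applied.append(flip)
        if is_misclassified(node, applied):
            return applied            # successful perturbation found
    return None                       # node too robust within max_budget

# Toy usage: pretend the victim flips its prediction after three edge flips.
candidates = [("add", 0, v) for v in range(1, 12)]
print(minimum_budget_attack(0, candidates, lambda n, flips: len(flips) >= 3))
```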
arXiv Detail & Related papers (2024-03-05T07:29:12Z)
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
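One way to read "differentiable graph attack" is to relax discrete edge flips into continuous flip probabilities and optimize them all at once by gradient descent. The PyTorch sketch below shows that generic relaxation with a placeholder loss standing in for the surrogate-model objective; it is an assumption about the general technique, not DGA's actual formulation.

```python
import torch

# Continuous relaxation: one flip logit per node pair, optimized jointly.
n = 8
adj = (torch.rand(n, n) < 0.3).float().triu(1)
adj = adj + adj.T
theta = torch.zeros(n, n, requires_grad=True)      # edge-flip logits

def attack_loss(a):
    # Placeholder objective; a real attack would maximize the surrogate
    # GNN's misclassification loss on the relaxed adjacency here.
    return -a.sum()

opt = torch.optim.Adam([theta], lr=0.1)
for _ in range(100):
    p = torch.sigmoid(theta)
    relaxed = adj + (1.0 - 2.0 * adj) * p          # soft flip: 0->p, 1->1-p
    loss = attack_loss(relaxed)
    opt.zero_grad(); loss.backward(); opt.step()

# Discretize: commit the k most confident flips as the final perturbation.
k = 4
flip_scores = torch.sigmoid(theta.detach()).triu(1)
top = torch.topk(flip_scores.flatten(), k).indices
rows, cols = top // n, top % n
```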
arXiv Detail & Related papers (2023-08-29T20:14:42Z)
- Single Node Injection Label Specificity Attack on Graph Neural Networks via Reinforcement Learning [8.666702832094874]
We present G²-SNIA, a gradient-free, generalizable adversary that injects a single malicious node to manipulate a target node in the black-box evasion setting.
By directly querying the victim model, G²-SNIA learns patterns from exploration to achieve diverse attack goals with extremely limited attack budgets.
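The reinforcement-learning details are beyond a summary, but the query-based setting can be sketched as a search loop that only ever interacts with the victim through its predictions. Everything below (the random search, the reward, the function names) is an illustrative stand-in for the paper's learned agent.

```python
import numpy as np

def single_node_injection(query_victim, n_feats, candidate_targets, queries=50):
    """Toy black-box search: propose (features, wiring) choices for one
    injected node, query the victim model for a reward, and keep the best
    proposal. A crude stand-in for G²-SNIA's learned exploration."""
    rng = np.random.default_rng(0)
    best, best_reward = None, -np.inf
    for _ in range(queries):
        feats = rng.random(n_feats)                       # injected node features
        wire_to = candidate_targets[rng.integers(len(candidate_targets))]
        reward = query_victim(feats, wire_to)             # one victim query
        if reward > best_reward:
            best, best_reward = (feats, wire_to), reward
    return best

# Toy usage: the "victim" rewards wiring to node 3 and large feature mass.
best = single_node_injection(lambda f, w: f.sum() + (w == 3), 5, list(range(10)))
```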
arXiv Detail & Related papers (2023-05-04T15:10:41Z)
- GUAP: Graph Universal Attack Through Adversarial Patching [12.484396767037925]
Graph neural networks (GNNs) are a class of effective deep learning models for node classification tasks.
In this work, we consider an easier attack that is harder to notice: adversarially patching the graph with new nodes and edges.
We develop an algorithm named GUAP that achieves a high attack success rate while preserving the prediction accuracy.
arXiv Detail & Related papers (2023-01-04T18:02:29Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely the Graph Injection Attack (GIA).
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- Sparse Vicious Attacks on Graph Neural Networks [3.246307337376473]
This work focuses on a specific white-box attack on GNN-based link prediction models.
We propose SAVAGE, a novel framework and method for mounting this type of link prediction attack.
Experiments conducted on real-world and synthetic datasets demonstrate that adversarial attacks implemented through SAVAGE achieve a high attack success rate.
arXiv Detail & Related papers (2022-09-20T12:51:24Z)
- Robustness of Graph Neural Networks at Scale [63.45769413975601]
We study how to attack and defend Graph Neural Networks (GNNs) at scale.
We propose two sparsity-aware first-order optimization attacks that maintain an efficient representation.
We show that common surrogate losses are not well-suited for global attacks on GNNs.
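The efficient-representation point can be illustrated with a perturbation stored as a set of flipped edges rather than a dense adjacency matrix, so memory scales with the attack budget instead of the number of node pairs. This container is only illustrative; the paper's actual attacks and their sampling scheme are more involved.

```python
# Sparsity-aware perturbation: store only the flipped edges, never a dense
# n-by-n matrix, so memory scales with the budget rather than with n^2.
class SparsePerturbation:
    def __init__(self):
        self.flips = set()             # undirected flips, stored as (i, j), i < j

    def flip(self, i, j):
        """Toggle edge (i, j): add it if absent from the flip set, else undo it."""
        self.flips ^= {(min(i, j), max(i, j))}

    def apply(self, edges):
        """Return the perturbed edge set: original edges XOR flipped pairs."""
        return {tuple(sorted(e)) for e in edges} ^ self.flips

# Toy usage on a 4-node path graph: add (0, 2), remove (1, 2).
p = SparsePerturbation()
p.flip(0, 2); p.flip(1, 2)
print(p.apply([(0, 1), (1, 2), (2, 3)]))   # {(0, 1), (0, 2), (2, 3)}
```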
arXiv Detail & Related papers (2021-10-26T21:31:17Z)
- Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by perturbing only a few pixels.
Recent efforts combine this sparsity constraint with an additional l_infty bound on perturbation magnitudes.
We propose a homotopy algorithm to jointly tackle the sparsity and perturbation-bound constraints in one unified framework.
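A toy rendering of the homotopy idea, assuming a 1-D input and a gradient oracle: alternate gradient steps with hard-thresholding while gradually tightening the sparsity level, so the perturbation ends up both sparse and bounded. The schedule and thresholding rule here are illustrative, not the paper's algorithm.

```python
import numpy as np

def homotopy_sparse_attack(x, grad_fn, k_final=10, k_start=100, stages=5, eps=0.05):
    """Toy homotopy loop over a 1-D input: take a signed gradient step, clip
    to an l_infty ball, then hard-threshold to the k largest coordinates
    while k is gradually tightened from k_start down to k_final."""
    delta = np.zeros_like(x)
    for k in np.linspace(k_start, k_final, stages).astype(int):
        delta = delta + eps * np.sign(grad_fn(x + delta))   # ascent step on the loss
        delta = np.clip(delta, -eps, eps)                   # l_infty bound
        keep = np.argsort(np.abs(delta))[-k:]               # k largest magnitudes
        mask = np.zeros_like(delta)
        mask[keep] = 1.0
        delta = delta * mask                                # enforce sparsity level k
    return delta

# Toy usage: 784 "pixels", random gradient oracle standing in for a DNN loss.
rng = np.random.default_rng(0)
delta = homotopy_sparse_attack(rng.random(784), lambda z: rng.normal(size=z.shape))
print(np.count_nonzero(delta))   # at most k_final
```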
arXiv Detail & Related papers (2021-06-10T20:11:36Z)
- Adversarial Attack on Large Scale Graph [58.741365277995044]
Recent studies have shown that graph neural networks (GNNs) are vulnerable to perturbations due to a lack of robustness.
Currently, most works on attacking GNNs use gradient information to guide the attack and achieve outstanding performance.
We argue that the main shortcoming is that they must use the whole graph for attacks, so time and space complexity grow with the data scale.
We present a practical metric named Degree Assortativity Change (DAC) to measure the impacts of adversarial attacks on graph data.
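The DAC metric compares degree assortativity before and after the attack. Below is a minimal sketch with networkx, assuming DAC is read as the absolute shift in the degree assortativity coefficient; the paper's exact definition and normalization may differ.

```python
import networkx as nx

def degree_assortativity_change(g_clean, g_attacked):
    """One reading of DAC: the absolute shift in the degree assortativity
    coefficient (the degree correlation of connected node pairs) caused by
    an attack. Large shifts suggest conspicuous structural damage."""
    r_before = nx.degree_assortativity_coefficient(g_clean)
    r_after = nx.degree_assortativity_coefficient(g_attacked)
    return abs(r_after - r_before)

# Toy usage: perturb one edge of a scale-free graph and measure the change.
g = nx.barabasi_albert_graph(100, 2, seed=0)
g_adv = g.copy()
if g_adv.has_edge(0, 1):
    g_adv.remove_edge(0, 1)
else:
    g_adv.add_edge(0, 1)
print(degree_assortativity_change(g, g_adv))
```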
arXiv Detail & Related papers (2020-09-08T02:17:55Z)
- Scalable Attack on Graph Data by Injecting Vicious Nodes [44.56647129718062]
Graph convolution networks (GCNs) are vulnerable to carefully designed attacks, which aim to cause misclassification of a specific node on the graph with unnoticeable perturbations.
We develop a more scalable framework named the Approximate Fast Gradient Sign Method (AFGSM), which considers a more practical attack scenario.
Our proposed attack method can significantly reduce the classification accuracy of GCNs and is much faster than existing methods without jeopardizing the attack performance.
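The FGSM flavor of the method is easy to picture: move an injected node's features one signed gradient step at a time. The sketch below shows only that generic step with a placeholder gradient; AFGSM's actual contribution is an approximation of this gradient that scales to large graphs, which is not reproduced here.

```python
import numpy as np

def fgsm_feature_step(feats, grad, eps=0.1):
    """Generic FGSM-style update for an injected node's feature vector: step
    each feature by eps along the sign of the attack-loss gradient, then clip
    back to the valid feature range."""
    return np.clip(feats + eps * np.sign(grad), 0.0, 1.0)

# Toy usage with a random gradient standing in for the surrogate's gradient.
rng = np.random.default_rng(0)
feats = rng.random(16)
feats_adv = fgsm_feature_step(feats, rng.normal(size=16))
```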
arXiv Detail & Related papers (2020-04-22T02:11:13Z)