GraphAttacker: A General Multi-Task GraphAttack Framework
- URL: http://arxiv.org/abs/2101.06855v1
- Date: Mon, 18 Jan 2021 03:06:41 GMT
- Title: GraphAttacker: A General Multi-Task GraphAttack Framework
- Authors: Jinyin Chen, Dunjie Zhang, Zhaoyan Ming and Kejie Huang
- Abstract summary: Graph Neural Networks (GNNs) have been successfully exploited in graph analysis tasks in many real-world applications.
However, GNNs are vulnerable to adversarial samples generated by attackers, which achieve strong attack performance with almost imperceptible perturbations.
We propose GraphAttacker, a novel generic graph attack framework that can flexibly adjust the structures and the attack strategies according to the graph analysis task.
- Score: 4.218118583619758
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have been successfully exploited in graph
analysis tasks in many real-world applications. However, GNNs have been shown
to have potential security issues caused by adversarial samples, which achieve
strong attack performance with almost imperceptible perturbations. What limits
the wide application of these attacks is their specificity to a certain graph
analysis task, such as node classification or link prediction. We thus propose
GraphAttacker, a novel generic graph attack framework that can flexibly adjust
the structures and the attack strategies according to the graph analysis task.
Based on the Generative Adversarial Network (GAN), GraphAttacker generates
adversarial samples through alternate training of three key components: the
Multi-strategy Attack Generator (MAG), the Similarity Discriminator (SD), and
the Attack Discriminator (AD). Furthermore, to keep attacks within the
perturbation budget, we propose a novel Similarity Modification Rate (SMR) to
quantify the similarity between nodes and thus constrain the attack budget.
Extensive experiments show that GraphAttacker achieves state-of-the-art attack
performance on node classification, graph classification, and link prediction.
We also analyze the unique characteristics of each task and their specific
responses within the unified attack framework. We will release GraphAttacker
as an open-source simulation platform for future attack research.
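To make the alternate training concrete, here is a minimal sketch of the loop described above. It is an illustration, not the authors' released code: the MAG is reduced to a plain MLP over adjacency rows, the Attack Discriminator (AD) loss on the downstream task is only indicated in a comment, and smr() is a simple per-entry modification rate standing in for the paper's SMR metric.

```python
import torch
import torch.nn as nn

class MAG(nn.Module):
    """Toy Multi-strategy Attack Generator: maps the clean adjacency
    to edge probabilities for a perturbed graph."""
    def __init__(self, n_nodes, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_nodes, hidden), nn.ReLU(),
            nn.Linear(hidden, n_nodes), nn.Sigmoid())

    def forward(self, adj):
        return self.net(adj)  # soft adjacency rows in [0, 1]

def smr(adj_clean, adj_adv):
    # stand-in for the Similarity Modification Rate: mean per-entry change
    return (adj_clean - adj_adv).abs().mean()

n = 32
adj = (torch.rand(n, n) < 0.1).float()
gen = MAG(n)
sd = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, 1))  # SD
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(sd.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    # (1) train the Similarity Discriminator to separate clean rows from
    #     generated rows, so generated graphs stay close to the original
    fake = gen(adj).detach()
    loss_d = bce(sd(adj), torch.ones(n, 1)) + bce(sd(fake), torch.zeros(n, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # (2) train the generator to fool SD while respecting the SMR budget;
    #     the real framework adds an Attack Discriminator (AD) loss on the
    #     downstream task (node/graph classification, link prediction)
    fake = gen(adj)
    loss_g = bce(sd(fake), torch.ones(n, 1)) + 10.0 * smr(adj, fake)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```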
Related papers
- Attacks on Node Attributes in Graph Neural Networks [32.40598187698689]
This research investigates the vulnerability of graph models through feature-based adversarial attacks.
Our findings indicate that decision-time attacks using Projected Gradient Descent (PGD) are more potent than poisoning attacks that employ Mean Node Embeddings and Graph Contrastive Learning strategies.
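For context, a decision-time (evasion) PGD attack perturbs node features at inference by repeatedly ascending the victim's loss and projecting back into an L-infinity ball. A minimal sketch, assuming a victim with a model(x, adj) signature (the ToyGCN and all names here are hypothetical):

```python
import torch
import torch.nn.functional as F

def pgd_feature_attack(model, x, adj, labels, eps=0.1, alpha=0.02, steps=20):
    """Iterated gradient ascent on node features, projected into an eps-ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv, adj), labels)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into eps-ball
    return x_adv.detach()

class ToyGCN(torch.nn.Module):
    """One-hop propagation model with the assumed model(x, adj) signature."""
    def __init__(self, d_in, n_classes):
        super().__init__()
        self.lin = torch.nn.Linear(d_in, n_classes)
    def forward(self, x, adj):
        return adj @ self.lin(x)

n, d = 30, 8
adj = torch.maximum((torch.rand(n, n) < 0.2).float(), torch.eye(n))
adj = torch.maximum(adj, adj.t())                     # undirected graph
x, labels = torch.randn(n, d), torch.randint(0, 3, (n,))
x_adv = pgd_feature_attack(ToyGCN(d, 3), x, adj, labels)
```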
arXiv Detail & Related papers (2024-02-19T17:52:29Z)
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
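The "differentiable" part of such attacks is commonly achieved by relaxing discrete edge flips into continuous probabilities that gradients can flow through. A hedged sketch of that idea, not DGA's exact formulation (the victim here is a frozen toy linear stand-in):

```python
import torch
import torch.nn.functional as F

def perturbed_adjacency(adj, scores):
    """Blend discrete edge flips into a differentiable adjacency."""
    p = torch.sigmoid(scores)              # flip probability per edge slot
    p = (p + p.t()) / 2                    # symmetrize for undirected graphs
    return adj * (1 - p) + (1 - adj) * p   # soft edge flips

n = 16
adj = (torch.rand(n, n) < 0.2).float()
adj = torch.maximum(adj, adj.t())
victim = torch.nn.Linear(n, 2)             # frozen toy stand-in for the GNN
for w in victim.parameters():
    w.requires_grad_(False)
labels = torch.zeros(n, dtype=torch.long)
scores = torch.zeros(n, n, requires_grad=True)
opt = torch.optim.Adam([scores], lr=0.1)

for _ in range(100):
    a_adv = perturbed_adjacency(adj, scores)
    # maximize the victim's loss while penalizing the perturbation budget
    loss = -F.cross_entropy(victim(a_adv), labels) \
           + 1e-3 * (a_adv - adj).abs().sum()
    opt.zero_grad(); loss.backward(); opt.step()
```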
arXiv Detail & Related papers (2023-08-29T20:14:42Z)
- EDoG: Adversarial Edge Detection For Graph Neural Networks [17.969573886307906]
Graph Neural Networks (GNNs) have been widely applied to different tasks such as bioinformatics, drug design, and social networks.
Recent studies have shown that GNNs are vulnerable to adversarial attacks which aim to mislead the node or subgraph classification prediction by adding subtle perturbations.
We propose EDoG, a general adversarial edge detection pipeline based on graph generation that does not require knowledge of the attack strategies.
arXiv Detail & Related papers (2022-12-27T20:42:36Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, the Graph Injection Attack (GIA).
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- Revisiting Adversarial Attacks on Graph Neural Networks for Graph Classification [38.339503144719984]
We present a novel and general framework to generate adversarial examples via manipulating graph structure and node features.
Specifically, we make use of Graph Class Mapping and its variant to produce node-level importance corresponding to the graph classification task.
Experiments attacking four state-of-the-art graph classification models on six real-world benchmarks verify the flexibility and effectiveness of our framework.
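As one plausible reading of such node-level importance (a CAM-style projection; the paper's Graph Class Mapping may differ), nodes can be scored by how strongly their embeddings align with the readout weights of the target class:

```python
import torch

def node_importance(node_embeddings, readout_weights, target_class):
    """Score each node by projecting its embedding onto the readout weights
    of the target class; high-scoring nodes are prime perturbation targets."""
    return node_embeddings @ readout_weights[target_class]

h = torch.randn(10, 16)  # node embeddings from some GNN encoder
w = torch.randn(3, 16)   # weights of a linear graph-level readout (3 classes)
print(node_importance(h, w, target_class=1))
```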
arXiv Detail & Related papers (2022-08-13T13:41:44Z)
- A Hard Label Black-box Adversarial Attack Against Graph Neural Networks [25.081630882605985]
We conduct a systematic study on adversarial attacks against GNNs for graph classification via perturbing the graph structure.
We formulate our attack as an optimization problem, whose objective is to minimize the number of edges to be perturbed in a graph while maintaining the high attack success rate.
Our experimental results on three real-world datasets demonstrate that our attack can effectively attack representative GNNs for graph classification with fewer queries and perturbations.
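In generic form (our notation, not necessarily the paper's), such a hard-label attack can be written as an edge-budget minimization:

```latex
% Generic edge-budget attack formulation (our notation):
% A  = clean adjacency, A' = perturbed adjacency,
% f  = victim graph classifier, G' = graph with adjacency A', y = true label
\begin{aligned}
\min_{A'} \quad & \lVert A' - A \rVert_{0}
  && \text{(number of perturbed edges)} \\
\text{s.t.} \quad & f(G') \neq y,
  \quad A' \in \{0,1\}^{n \times n}
  && \text{(attack succeeds, graph stays binary)}
\end{aligned}
```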
arXiv Detail & Related papers (2021-08-21T14:01:34Z)
- BinarizedAttack: Structural Poisoning Attacks to Graph-based Anomaly Detection [20.666171188140503]
Graph-based Anomaly Detection (GAD) is becoming prevalent due to the powerful representation abilities of graphs.
These GAD tools expose a new attack surface, ironically due to their unique advantage of being able to exploit the relations among data.
In this paper, we exploit this vulnerability by designing a new type of targeted structural poisoning attack against OddBall, a representative regression-based GAD system.
arXiv Detail & Related papers (2021-06-18T08:20:23Z)
- Adversarial Attack on Large Scale Graph [58.741365277995044]
Recent studies have shown that graph neural networks (GNNs) are vulnerable to adversarial perturbations.
Currently, most works on attacking GNNs mainly use gradient information to guide the attack and achieve outstanding performance.
We argue that such attacks scale poorly because they have to operate on the whole graph, so their time and space complexity grow with the data scale.
We present a practical metric named Degree Assortativity Change (DAC) to measure the impacts of adversarial attacks on graph data.
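One plausible way to compute such a metric, assuming DAC is the shift in the degree-assortativity coefficient between the clean and attacked graphs (the paper's exact definition may differ), is:

```python
import networkx as nx

def degree_assortativity_change(g_clean, g_attacked):
    """Absolute shift in degree assortativity caused by the attack."""
    r_clean = nx.degree_assortativity_coefficient(g_clean)
    r_attacked = nx.degree_assortativity_coefficient(g_attacked)
    return abs(r_attacked - r_clean)

g = nx.barabasi_albert_graph(200, 3, seed=0)
g_adv = g.copy()
g_adv.add_edges_from((0, i) for i in range(150, 160))  # toy perturbation
print(degree_assortativity_change(g, g_adv))
```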
arXiv Detail & Related papers (2020-09-08T02:17:55Z)
- Reinforcement Learning-based Black-Box Evasion Attacks to Link Prediction in Dynamic Graphs [87.5882042724041]
Link prediction in dynamic graphs (LPDG) is an important research problem that has diverse applications.
We study the vulnerability of LPDG methods and propose the first practical black-box evasion attack.
arXiv Detail & Related papers (2020-09-01T01:04:49Z)
- Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
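In the spirit of that joint learning (a sketch under our own simplifications, not Pro-GNN's exact objective with its low-rank and feature-smoothness terms), one can alternate between fitting the model and refining a learnable adjacency that stays sparse and close to the observed graph:

```python
import torch
import torch.nn.functional as F

n, d = 20, 8
adj = (torch.rand(n, n) < 0.2).float()
adj = torch.maximum(adj, adj.t())       # observed (possibly poisoned) graph
x = torch.randn(n, d)
y = torch.randint(0, 2, (n,))

S = adj.clone().requires_grad_(True)    # learnable graph structure
model = torch.nn.Linear(d, 2)           # toy one-layer stand-in for the GNN
opt_w = torch.optim.Adam(model.parameters(), lr=1e-2)
opt_s = torch.optim.Adam([S], lr=1e-2)

for _ in range(100):
    # step 1: fit the model on the currently learned graph
    loss_w = F.cross_entropy(S @ model(x), y)
    opt_w.zero_grad(); loss_w.backward(); opt_w.step()
    # step 2: refine the graph -- fit the task while staying sparse and
    # close to the observed adjacency
    loss_s = F.cross_entropy(S @ model(x), y) \
             + 1e-3 * S.abs().sum() + 1e-2 * ((S - adj) ** 2).sum()
    opt_s.zero_grad(); loss_s.backward(); opt_s.step()
```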
arXiv Detail & Related papers (2020-05-20T17:07:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.