Adversarial Attacks on Graph Classification via Bayesian Optimisation
- URL: http://arxiv.org/abs/2111.02842v1
- Date: Thu, 4 Nov 2021 13:01:20 GMT
- Title: Adversarial Attacks on Graph Classification via Bayesian Optimisation
- Authors: Xingchen Wan, Henry Kenlay, Binxin Ru, Arno Blaas, Michael A. Osborne, Xiaowen Dong
- Abstract summary: We present a novel Bayesian optimisation-based attack method for graph classification models.
Our method is black-box, query-efficient and parsimonious with respect to the perturbation applied.
We empirically validate the effectiveness and flexibility of the proposed method on a wide range of graph classification tasks.
- Score: 25.781404695921122
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks, a popular class of models effective in a wide range of
graph-based learning tasks, have been shown to be vulnerable to adversarial
attacks. While the majority of the literature focuses on such vulnerability in
node-level classification tasks, little effort has been dedicated to analysing
adversarial attacks on graph-level classification, an important problem with
numerous real-life applications such as biochemistry and social network
analysis. The few existing methods often require unrealistic setups, such as
access to internal information of the victim models, or an impractically large
number of queries. We present a novel Bayesian optimisation-based attack method
for graph classification models. Our method is black-box, query-efficient and
parsimonious with respect to the perturbation applied. We empirically validate
the effectiveness and flexibility of the proposed method on a wide range of
graph classification tasks involving varying graph properties, constraints and
modes of attack. Finally, we analyse common interpretable patterns behind the
adversarial samples produced, which may shed further light on the adversarial
robustness of graph classification models.
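To make the black-box setup concrete, here is a minimal, hypothetical sketch of a Bayesian-optimisation attack loop over edge flips. The Gaussian-process surrogate on binary edge-flip vectors, the UCB acquisition, and the `query_loss` oracle are illustrative stand-ins under simple assumptions, not the authors' exact surrogate or acquisition design.

```python
# Hedged sketch: black-box, query-efficient attack via Bayesian optimisation
# over edge flips. `query_loss` is the attacker's only access to the victim.
import numpy as np
from itertools import combinations
from sklearn.gaussian_process import GaussianProcessRegressor

def bo_attack(adj, query_loss, budget=1, n_queries=50, n_candidates=256, seed=0):
    """adj: (n, n) binary adjacency matrix; query_loss: adjacency -> victim
    loss on the true label (higher is better for the attacker); budget:
    number of simultaneous edge flips (perturbation parsimony)."""
    rng = np.random.default_rng(seed)
    pairs = list(combinations(range(adj.shape[0]), 2))

    def flip(choice):                          # apply a set of edge flips
        a = adj.copy()
        for k in choice:
            i, j = pairs[k]
            a[i, j] = a[j, i] = 1 - a[i, j]
        return a

    def featurise(choice):                     # binary edge-flip vector
        x = np.zeros(len(pairs))
        x[list(choice)] = 1.0
        return x

    X, y, tried = [], [], []
    for t in range(n_queries):
        cands = [tuple(sorted(rng.choice(len(pairs), budget, replace=False)))
                 for _ in range(n_candidates)]
        if t < 5:                              # warm-up with random queries
            choice = cands[0]
        else:                                  # surrogate + UCB acquisition
            gp = GaussianProcessRegressor(alpha=1e-6, normalize_y=True)
            gp.fit(np.array(X), np.array(y))
            mu, sd = gp.predict(np.array([featurise(c) for c in cands]),
                                return_std=True)
            choice = cands[int(np.argmax(mu + 1.96 * sd))]
        X.append(featurise(choice))
        y.append(query_loss(flip(choice)))
        tried.append(choice)
    best = tried[int(np.argmax(y))]            # most damaging perturbation
    return flip(best), max(y)
```

An attack succeeds once `query_loss` (e.g. the negative margin of the true class) crosses the misclassification threshold; the budget parameter keeps the perturbation parsimonious.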
Related papers
- Top K Enhanced Reinforcement Learning Attacks on Heterogeneous Graph Node Classification [1.4943280454145231]
Graph Neural Networks (GNNs) have attracted substantial interest due to their exceptional performance on graph-based data.
However, their robustness against adversarial attacks, particularly on heterogeneous graphs, remains underexplored.
This paper proposes HeteroKRLAttack, a targeted evasion black-box attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-08-04T08:44:00Z)
- Graph Transductive Defense: a Two-Stage Defense for Graph Membership Inference Attacks [50.19590901147213]
Graph neural networks (GNNs) have become instrumental in diverse real-world applications, offering powerful graph learning capabilities.
GNNs are vulnerable to adversarial attacks, including membership inference attacks (MIA).
This paper proposes an effective two-stage defense, Graph Transductive Defense (GTD), tailored to graph transductive learning characteristics.
arXiv Detail & Related papers (2024-06-12T06:36:37Z)
- Deceptive Fairness Attacks on Graphs via Meta Learning [102.53029537886314]
We study deceptive fairness attacks on graphs to answer the question: how can we poison a graph learning model so as to deceptively exacerbate its bias?
We propose a meta learning-based framework named FATE to attack various fairness definitions and graph learning models.
We conduct extensive experimental evaluations on real-world datasets in the task of semi-supervised node classification.
arXiv Detail & Related papers (2023-10-24T09:10:14Z)
- GraphCloak: Safeguarding Task-specific Knowledge within Graph-structured Data from Unauthorized Exploitation [61.80017550099027]
Graph Neural Networks (GNNs) are increasingly prevalent in a variety of fields.
Growing concerns have emerged regarding the unauthorized utilization of personal data.
Recent studies have shown that imperceptible poisoning attacks are an effective method of protecting image data from such misuse.
This paper introduces GraphCloak to safeguard against the unauthorized usage of graph data.
arXiv Detail & Related papers (2023-10-11T00:50:55Z)
- Revisiting Adversarial Attacks on Graph Neural Networks for Graph Classification [38.339503144719984]
We present a novel and general framework to generate adversarial examples via manipulating graph structure and node features.
Specifically, we use Graph Class Mapping and its variant to produce node-level importance scores for the graph classification task.
Experiments attacking four state-of-the-art graph classification models on six real-world benchmarks verify the flexibility and effectiveness of our framework.
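As a rough illustration of class-mapping-style importance (under a mean-pooling, linear-readout assumption; not necessarily the exact Graph Class Mapping used in the paper), the class logit decomposes into per-node contributions:

```python
# Hypothetical CAM-style node importance for graph classification, assuming
# a mean-pooled readout followed by a linear classifier.
import numpy as np

def node_importance(H, w_c):
    """H: (n, d) node embeddings before pooling; w_c: (d,) class-c weights.
    logit_c = mean(H, 0) @ w_c + b_c = sum_v (H[v] @ w_c) / n + b_c, so
    node v contributes (H[v] @ w_c) / n to the class-c score."""
    return (H @ w_c) / H.shape[0]
```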
arXiv Detail & Related papers (2022-08-13T13:41:44Z)
- Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space, in contrast to existing techniques that embed each node as a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
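A minimal sketch of the distributional-embedding idea, assuming a diagonal-Gaussian latent per node and the reparameterisation trick; the single linear "message-passing" layer and all dimensions are placeholders, not the authors' architecture:

```python
# Hedged sketch: a stochastic node encoder that outputs a Gaussian per node
# instead of a point embedding.
import torch
import torch.nn as nn

class StochasticNodeEncoder(nn.Module):
    def __init__(self, in_dim, hid_dim, lat_dim):
        super().__init__()
        self.backbone = nn.Linear(in_dim, hid_dim)  # stand-in for a GNN layer
        self.mu = nn.Linear(hid_dim, lat_dim)       # per-node mean
        self.log_var = nn.Linear(hid_dim, lat_dim)  # per-node log-variance

    def forward(self, x, adj):
        h = torch.relu(self.backbone(adj @ x))      # simple neighbourhood mix
        mu, log_var = self.mu(h), self.log_var(h)
        eps = torch.randn_like(mu)                  # reparameterisation trick
        z = mu + eps * torch.exp(0.5 * log_var)     # one sample per node
        return z, mu, log_var
```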
arXiv Detail & Related papers (2021-12-15T01:45:32Z)
- Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these activation profiles can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z)
- Reinforcement Learning For Data Poisoning on Graph Neural Networks [0.5156484100374058]
Adversarial Machine Learning has emerged as a substantial subfield of Computer Science.
We study the novel problem of data poisoning (training-time) attacks on neural networks for graph classification, using reinforcement learning agents.
arXiv Detail & Related papers (2021-02-12T22:34:53Z)
- Query-free Black-box Adversarial Attacks on Graphs [37.88689315688314]
We propose a query-free black-box adversarial attack on graphs, in which the attacker has no knowledge of the target model and no query access to the model.
We prove that the impact of the flipped links on the target model can be quantified by spectral changes, and thus approximated using eigenvalue perturbation theory.
Owing to its simplicity and scalability, the proposed attack is not only applicable to various graph-based models, but can also be easily extended to settings where different levels of knowledge are accessible.
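The spectral-approximation idea can be illustrated with standard first-order eigenvalue perturbation theory. This numpy sketch ranks candidate edge flips by their estimated effect on a chosen eigenvalue using a single eigendecomposition; it illustrates the general principle, not the paper's full attack:

```python
# For symmetric A with unit eigenpair (lam_k, u_k), A' = A + dA gives
# lam_k' ~= lam_k + u_k^T dA u_k. Flipping edge (i, j) sets
# dA[i, j] = dA[j, i] = delta with delta = 1 - 2*A[i, j], so the estimated
# shift is 2 * delta * u_k[i] * u_k[j].
import numpy as np

def rank_flips_by_spectral_impact(A, k=-1):
    """Rank edge flips by |estimated shift| of the k-th eigenvalue of A
    (default: the leading one), without re-eigendecomposing per flip."""
    _, U = np.linalg.eigh(A)       # eigenvalues ascending, unit eigenvectors
    u = U[:, k]
    n = A.shape[0]
    flips = [(i, j) for i in range(n) for j in range(i + 1, n)]
    shift = lambda i, j: 2.0 * (1.0 - 2.0 * A[i, j]) * u[i] * u[j]
    return sorted(flips, key=lambda e: abs(shift(*e)), reverse=True)
```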
arXiv Detail & Related papers (2020-12-12T08:52:56Z)
- Node Copying for Protection Against Graph Neural Network Topology Attacks [24.81359861632328]
Corruptions of the graph topology, in particular, can severely degrade the performance of graph-based learning algorithms.
We propose an algorithm that uses node copying to mitigate the degradation in classification caused by adversarial attacks.
arXiv Detail & Related papers (2020-07-09T18:09:55Z)
- Adversarial Attack on Community Detection by Hiding Individuals [68.76889102470203]
We focus on the black-box setting and aim to hide targeted individuals from deep graph community detection models.
We propose an iterative learning framework that takes turns to update two modules: one working as the constrained graph generator and the other as the surrogate community detection model.
arXiv Detail & Related papers (2020-01-22T09:50:04Z)