Node Copying for Protection Against Graph Neural Network Topology Attacks
- URL: http://arxiv.org/abs/2007.06704v1
- Date: Thu, 9 Jul 2020 18:09:55 GMT
- Title: Node Copying for Protection Against Graph Neural Network Topology Attacks
- Authors: Florence Regol, Soumyasundar Pal and Mark Coates
- Abstract summary: In particular, corruptions of the graph topology can degrade the performance of graph based learning algorithms severely.
We propose an algorithm that uses node copying to mitigate the degradation in classification that is caused by adversarial attacks.
- Score: 24.81359861632328
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attacks can affect the performance of existing deep
learning models. With the increased interest in graph-based machine learning
techniques, there have been investigations suggesting that these models are
also vulnerable to attacks. In particular, corruptions of the graph topology
can severely degrade the performance of graph-based learning algorithms,
because the prediction capability of these algorithms relies mostly on the
similarity structure imposed by the graph connectivity. Therefore, detecting
the location of the corruption and correcting the induced errors becomes
crucial. Some recent work tackles the detection problem; however, these
methods do not address the effect of the attack on the downstream learning
task. In this work, we propose an algorithm that uses node copying to mitigate
the degradation in classification caused by adversarial attacks. The proposed
methodology is applied only after the model for the downstream task is
trained, and the added computation cost scales well for large graphs.
Experimental results show the effectiveness of our approach on several
real-world datasets.
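The abstract describes node copying only at a high level: a post-training correction in which predictions from the already-trained model are aggregated over auxiliary graphs where each node's (possibly corrupted) connectivity is replaced by that of another, similar node. The sketch below is a minimal, hypothetical illustration of that general idea, not the authors' exact sampling scheme; the function names, the use of predicted labels to pick copy candidates, and the simple prediction averaging are all assumptions.

```python
import numpy as np

def node_copying_predict(adj, features, predict_fn, num_samples=10, rng=None):
    """Hypothetical sketch of a node-copying style defense.

    adj        : (N, N) binary adjacency matrix (possibly attacked)
    features   : (N, D) node feature matrix
    predict_fn : wrapper around the trained GNN, (adj, features) -> (N, C) class probabilities
    """
    rng = np.random.default_rng() if rng is None else rng
    n = adj.shape[0]

    # Predictions on the (possibly corrupted) graph give tentative class labels.
    base_probs = predict_fn(adj, features)
    labels = base_probs.argmax(axis=1)

    avg_probs = np.zeros_like(base_probs)
    for _ in range(num_samples):
        sampled_adj = adj.copy()
        for i in range(n):
            # Candidate "copy" nodes: those sharing node i's predicted class
            # (a stand-in for whatever similarity criterion the paper uses).
            candidates = np.flatnonzero(labels == labels[i])
            j = rng.choice(candidates)
            # Replace node i's connectivity with that of the copied node j.
            sampled_adj[i, :] = adj[j, :]
            sampled_adj[:, i] = adj[:, j]
            sampled_adj[i, i] = 0
        # Inference only; the downstream model is never retrained.
        avg_probs += predict_fn(sampled_adj, features)

    return avg_probs / num_samples
```

Because only repeated forward passes over sampled graphs are needed, the extra cost is a constant number of inference runs, which is consistent with the abstract's claim that the added computation scales well for large graphs.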
Related papers
- Perseus: Leveraging Common Data Patterns with Curriculum Learning for More Robust Graph Neural Networks [4.196444883507288]
Graph Neural Networks (GNNs) excel at handling graph data but remain vulnerable to adversarial attacks.
We propose Perseus, a novel adversarial defense method based on curriculum learning.
Experiments show that Perseus achieves superior performance and is significantly more robust to adversarial attacks.
arXiv Detail & Related papers (2024-10-16T10:08:02Z)
- Top K Enhanced Reinforcement Learning Attacks on Heterogeneous Graph Node Classification [1.4943280454145231]
Graph Neural Networks (GNNs) have attracted substantial interest due to their exceptional performance on graph-based data.
Their robustness, especially on heterogeneous graphs, remains underexplored, particularly against adversarial attacks.
This paper proposes HeteroKRLAttack, a targeted evasion black-box attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-08-04T08:44:00Z)
- GraphCloak: Safeguarding Task-specific Knowledge within Graph-structured Data from Unauthorized Exploitation [61.80017550099027]
Graph Neural Networks (GNNs) are increasingly prevalent in a variety of fields.
Growing concerns have emerged regarding the unauthorized utilization of personal data.
Recent studies have shown that imperceptible poisoning attacks are an effective method of protecting image data from such misuse.
This paper introduces GraphCloak to safeguard against the unauthorized usage of graph data.
arXiv Detail & Related papers (2023-10-11T00:50:55Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- Exploring High-Order Structure for Robust Graph Structure Learning [33.62223306095631]
Graph Neural Networks (GNNs) are vulnerable to adversarial attacks, i.e., an imperceptible structure perturbation can fool GNNs into making wrong predictions.
In this paper, we analyze the adversarial attack on graphs from the perspective of feature smoothness.
We propose a novel algorithm that incorporates the high-order structural information into the graph structure learning.
arXiv Detail & Related papers (2022-03-22T07:03:08Z)
- Adversarial Attacks on Graph Classification via Bayesian Optimisation [25.781404695921122]
We present a novel optimisation-based attack method for graph classification models.
Our method is black-box, query-efficient and parsimonious with respect to the perturbation applied.
We empirically validate the effectiveness and flexibility of the proposed method on a wide range of graph classification tasks.
arXiv Detail & Related papers (2021-11-04T13:01:20Z)
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract the private training graph data by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
arXiv Detail & Related papers (2021-06-05T07:07:52Z)
- Reinforcement Learning-based Black-Box Evasion Attacks to Link Prediction in Dynamic Graphs [87.5882042724041]
Link prediction in dynamic graphs (LPDG) is an important research problem that has diverse applications.
We study the vulnerability of LPDG methods and propose the first practical black-box evasion attack.
arXiv Detail & Related papers (2020-09-01T01:04:49Z)
- Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
arXiv Detail & Related papers (2020-05-20T17:07:05Z)
- Adversarial Attacks on Graph Neural Networks via Meta Learning [4.139895092509202]
We investigate training-time attacks on graph neural networks for node classification that perturb the discrete graph structure.
Our core principle is to use meta-gradients to solve the bilevel problem underlying training-time attacks.
arXiv Detail & Related papers (2019-02-22T09:20:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.