Structack: Structure-based Adversarial Attacks on Graph Neural Networks
- URL: http://arxiv.org/abs/2107.11327v1
- Date: Fri, 23 Jul 2021 16:17:10 GMT
- Title: Structack: Structure-based Adversarial Attacks on Graph Neural Networks
- Authors: Hussain Hussain, Tomislav Duricic, Elisabeth Lex, Denis Helic, Markus
Strohmaier, Roman Kern
- Abstract summary: We study adversarial attacks that are uninformed, where an attacker only has access to the graph structure, but no information about node attributes.
We show that structure-based uninformed attacks can approach the performance of informed attacks, while being computationally more efficient.
We present a new attack strategy on GNNs that we refer to as Structack. Structack can successfully manipulate the performance of GNNs with very limited information while operating under tight computational constraints.
- Score: 1.795391652194214
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent work has shown that graph neural networks (GNNs) are vulnerable to
adversarial attacks on graph data. Common attack approaches are typically
informed, i.e. they have access to information about node attributes such as
labels and feature vectors. In this work, we study adversarial attacks that are
uninformed, where an attacker only has access to the graph structure, but no
information about node attributes. Here the attacker aims to exploit the structural
knowledge and assumptions that GNN models make about graph data. In
particular, the literature has shown that structural node centrality and similarity
have a strong influence on learning with GNNs. Therefore, we study the impact
of centrality and similarity on adversarial attacks on GNNs. We demonstrate
that attackers can exploit this information to decrease the performance of GNNs
by focusing on injecting links between nodes of low similarity and,
surprisingly, low centrality. We show that structure-based uninformed attacks
can approach the performance of informed attacks, while being computationally
more efficient. With our paper, we present a new attack strategy on GNNs that
we refer to as Structack. Structack can successfully manipulate the performance
of GNNs with very limited information while operating under tight computational
constraints. Our work contributes towards building more robust machine learning
approaches on graphs.
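The attack idea described in the abstract (injecting edges between node pairs that have both low centrality and low structural similarity, using only the graph structure) can be illustrated with a short sketch. The following is a minimal illustration rather than the authors' implementation: degree centrality, Jaccard neighborhood similarity, the candidate pool size, and the fixed edge budget are all illustrative assumptions standing in for whichever measures and selection procedure Structack actually uses.

```python
# Minimal, assumption-laden sketch of a Structack-style uninformed attack:
# connect node pairs with LOW centrality and LOW structural similarity,
# using only the graph structure. Degree centrality and Jaccard neighborhood
# similarity are illustrative stand-ins, not necessarily the measures used
# in the paper; the candidate pool size (4 * budget) is likewise arbitrary.
import networkx as nx


def structack_like_edges(G: nx.Graph, budget: int):
    """Return up to `budget` non-adjacent node pairs to inject as edges."""
    centrality = nx.degree_centrality(G)  # purely structural information
    # Restrict candidates to the least central nodes.
    low_central = sorted(G.nodes, key=centrality.get)[: 4 * budget]

    def jaccard(u, v):
        nu, nv = set(G[u]), set(G[v])
        union = nu | nv
        return len(nu & nv) / len(union) if union else 0.0

    candidates = [
        (u, v)
        for i, u in enumerate(low_central)
        for v in low_central[i + 1:]
        if not G.has_edge(u, v)
    ]
    # Among low-centrality pairs, prefer the least similar ones.
    candidates.sort(key=lambda pair: jaccard(*pair))
    return candidates[:budget]


# Example: perturb a small benchmark graph within a budget of 5 edges.
G = nx.karate_club_graph()
perturbed = G.copy()
perturbed.add_edges_from(structack_like_edges(G, budget=5))
print(perturbed.number_of_edges() - G.number_of_edges())  # at most 5
```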
Related papers
- Link Stealing Attacks Against Inductive Graph Neural Networks [60.931106032824275]
A graph neural network (GNN) is a type of neural network that is specifically designed to process graph-structured data.
Previous work has shown that transductive GNNs are vulnerable to a series of privacy attacks.
This paper conducts a comprehensive privacy analysis of inductive GNNs through the lens of link stealing attacks.
arXiv Detail & Related papers (2024-05-09T14:03:52Z)
- Adapting Membership Inference Attacks to GNN for Graph Classification: Approaches and Implications [32.631077336656936]
Membership Inference Attack (MIA) against Graph Neural Networks (GNNs) raises severe privacy concerns.
We take the first step in MIA against GNNs for graph-level classification.
We present and implement two types of attacks, i.e., training-based attacks and threshold-based attacks, under different adversarial capabilities.
arXiv Detail & Related papers (2021-10-17T08:41:21Z)
- Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814]
Graph Neural Networks (GNNs) have boosted the performance for many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead the GNNs' prediction by modifying graphs.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
arXiv Detail & Related papers (2021-08-07T07:44:33Z)
- Node-Level Membership Inference Attacks Against Graph Neural Networks [29.442045622210532]
A new family of machine learning (ML) models, namely graph neural networks (GNNs), has been introduced.
Previous studies have shown that machine learning models are vulnerable to privacy attacks.
This paper performs the first comprehensive analysis of node-level membership inference attacks against GNNs.
arXiv Detail & Related papers (2021-02-10T13:51:54Z)
- Membership Inference Attack on Graph Neural Networks [1.6457778420360536]
We focus on how trained GNN models could leak information about the member nodes that they were trained on.
We choose the simplest possible attack model that utilizes the posteriors of the trained model.
The surprising and worrying fact is that the attack is successful even if the target model generalizes well.
arXiv Detail & Related papers (2021-01-17T02:12:35Z)
- Adversarial Attack on Large Scale Graph [58.741365277995044]
Recent studies have shown that graph neural networks (GNNs) are vulnerable to perturbations due to a lack of robustness.
Currently, most works on attacking GNNs mainly use gradient information to guide the attack and achieve outstanding performance.
We argue that the main reason these attacks do not scale is that they have to use the whole graph, so their time and space complexity grow with the data scale.
We present a practical metric named Degree Assortativity Change (DAC) to measure the impact of adversarial attacks on graph data (see the sketch after this list).
arXiv Detail & Related papers (2020-09-08T02:17:55Z)
- Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
arXiv Detail & Related papers (2020-05-20T17:07:05Z)
- Stealing Links from Graph Neural Networks [72.85344230133248]
Recently, neural networks have been extended to graph data; such models are known as graph neural networks (GNNs).
Due to their superior performance, GNNs have many applications, such as healthcare analytics, recommender systems, and fraud detection.
We propose the first attacks to steal a graph from the outputs of a GNN model that is trained on the graph.
arXiv Detail & Related papers (2020-05-05T13:22:35Z)
- Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies [73.39668293190019]
Deep neural networks can be easily fooled by small perturbations on the input, known as adversarial attacks.
Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability.
In this survey, we categorize existing attacks and defenses, and review the corresponding state-of-the-art methods.
arXiv Detail & Related papers (2020-03-02T04:32:38Z)
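For the Degree Assortativity Change (DAC) metric mentioned in the "Adversarial Attack on Large Scale Graph" entry above, a plausible reading is the change in degree assortativity between the clean and the perturbed graph. The sketch below follows that reading; the exact definition in the cited paper may differ (for instance, it may use an absolute value or a normalization), so treat this as an assumption.

```python
# Hedged sketch of a Degree Assortativity Change (DAC)-style measurement:
# the difference in degree assortativity between the perturbed and the clean
# graph. This is an interpretation based on the metric's name, not a
# reproduction of the cited paper's exact definition.
import networkx as nx


def degree_assortativity_change(clean: nx.Graph, perturbed: nx.Graph) -> float:
    return (nx.degree_assortativity_coefficient(perturbed)
            - nx.degree_assortativity_coefficient(clean))


clean = nx.karate_club_graph()
attacked = clean.copy()
attacked.add_edge(0, 9)  # one hypothetical adversarial edge
print(degree_assortativity_change(clean, attacked))
```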