GNNGuard: Defending Graph Neural Networks against Adversarial Attacks
- URL: http://arxiv.org/abs/2006.08149v3
- Date: Wed, 28 Oct 2020 04:54:36 GMT
- Title: GNNGuard: Defending Graph Neural Networks against Adversarial Attacks
- Authors: Xiang Zhang, Marinka Zitnik
- Abstract summary: We develop GNNGuard, an algorithm to defend against a variety of training-time attacks that perturb the discrete graph structure.
GNNGuard learns how to best assign higher weights to edges connecting similar nodes while pruning edges between unrelated nodes.
Experiments show that GNNGuard outperforms existing defense approaches by 15.3% on average.
- Score: 16.941548115261433
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning methods for graphs achieve remarkable performance across a
variety of domains. However, recent findings indicate that small, unnoticeable
perturbations of graph structure can catastrophically reduce performance of
even the strongest and most popular Graph Neural Networks (GNNs). Here, we
develop GNNGuard, a general algorithm to defend against a variety of
training-time attacks that perturb the discrete graph structure. GNNGuard can
be straightforwardly incorporated into any GNN. Its core principle is to
detect and quantify the relationship between the graph structure and node
features, if one exists, and then exploit that relationship to mitigate
negative effects of the attack. GNNGuard learns how to best assign higher
weights to edges connecting similar nodes while pruning edges between unrelated
nodes. The revised edges allow for robust propagation of neural messages in the
underlying GNN. GNNGuard introduces two novel components, neighbor
importance estimation and layer-wise graph memory, and we show empirically
that both components are necessary for a successful defense. Across five GNNs,
three defense methods, and five datasets, including a challenging human disease
graph, experiments show that GNNGuard outperforms existing defense approaches
by 15.3% on average. Remarkably, GNNGuard can effectively restore
state-of-the-art performance of GNNs in the face of various adversarial
attacks, including targeted and non-targeted attacks, and can defend against
attacks on heterophily graphs.
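The defense described above reduces to a simple per-layer operation: score each edge by the similarity of its endpoints' features, prune low-similarity edges, and smooth the resulting weights across layers. Below is a minimal NumPy sketch of that idea; the function name, the cosine-similarity choice, and the `prune_thresh`/`beta` values are illustrative assumptions, not the paper's published implementation.
```python
import numpy as np

def guarded_edge_weights(adj, feats, prune_thresh=0.1, memory=None, beta=0.9):
    """Minimal sketch of the defense described above: weight each edge by the
    cosine similarity of its endpoints' features, prune dissimilar pairs, and
    blend with the previous layer's weights (the "graph memory"). Threshold
    and beta are illustrative, not the paper's published values."""
    unit = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    sim = unit @ unit.T                       # pairwise cosine similarity
    weights = adj * np.clip(sim, 0.0, None)   # keep only non-negative similarity
    weights[weights < prune_thresh] = 0.0     # prune edges between unrelated nodes
    row_sum = weights.sum(axis=1, keepdims=True)
    weights = np.divide(weights, row_sum,
                        out=np.zeros_like(weights), where=row_sum > 0)
    if memory is not None:                    # layer-wise graph memory
        weights = beta * memory + (1.0 - beta) * weights
    return weights
```
The dense similarity matrix is used only for brevity; on a real graph one would score just the existing edges.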
Related papers
- Link Stealing Attacks Against Inductive Graph Neural Networks [60.931106032824275]
A graph neural network (GNN) is a type of neural network that is specifically designed to process graph-structured data.
Previous work has shown that transductive GNNs are vulnerable to a series of privacy attacks.
This paper conducts a comprehensive privacy analysis of inductive GNNs through the lens of link stealing attacks.
arXiv Detail & Related papers (2024-05-09T14:03:52Z)
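For context on the attack family, link stealing exploits the fact that a GNN's posteriors tend to be more similar for connected node pairs. A hedged sketch of that classic signal (illustrating the family, not this paper's inductive-setting attacks):
```python
import numpy as np

def link_stealing_scores(posteriors, candidate_pairs):
    """Classic link-stealing signal: a GNN's posteriors tend to be closer for
    connected nodes, so negative posterior distance can be thresholded to
    infer edges. Illustrates the attack family the paper analyzes, not its
    specific inductive-setting attacks."""
    return np.array([
        -np.linalg.norm(posteriors[u] - posteriors[v])  # higher = likelier edge
        for u, v in candidate_pairs
    ])
```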
- Graph Agent Network: Empowering Nodes with Inference Capabilities for Adversarial Resilience [50.460555688927826]
We propose the Graph Agent Network (GAgN) to address the vulnerabilities of graph neural networks (GNNs).
GAgN is a graph-structured agent network in which each node is designed as a 1-hop-view agent.
Agents' limited view prevents malicious messages from propagating globally in GAgN, thereby resisting global-optimization-based secondary attacks.
arXiv Detail & Related papers (2023-06-12T07:27:31Z)
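The resilience argument hinges on the 1-hop view: an agent's update touches only its immediate neighborhood, so a perturbation elsewhere cannot reach it in one round. A toy sketch with illustrative names, not the paper's agent design:
```python
import numpy as np

def agent_update(node, adj, feats):
    """Toy 1-hop-view agent: the node's next state depends only on itself and
    its immediate neighbors, so a malicious message elsewhere in the graph
    cannot reach it in a single round. The mean-aggregation rule is an
    illustrative choice."""
    neighbors = np.flatnonzero(adj[node])
    if neighbors.size == 0:
        return feats[node]
    return 0.5 * feats[node] + 0.5 * feats[neighbors].mean(axis=0)
```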
- GARNET: Reduced-Rank Topology Learning for Robust and Scalable Graph Neural Networks [15.448462928073635]
Graph neural networks (GNNs) have been increasingly deployed in various applications that involve learning on non-Euclidean data.
Recent studies show that GNNs are vulnerable to graph adversarial attacks.
We propose GARNET, a scalable spectral method to boost the adversarial robustness of GNN models.
arXiv Detail & Related papers (2022-01-30T06:32:44Z)
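The reduced-rank idea can be illustrated with a truncated SVD of the adjacency matrix, which suppresses the high-frequency structure where adversarial edges tend to concentrate. This dense sketch is for clarity only; the paper's method is a scalable spectral approach:
```python
import numpy as np

def reduced_rank_adjacency(adj, rank=16):
    """Illustrative reduced-rank topology cleaning: keep only the top singular
    components of the adjacency matrix. The dense SVD and the rank value are
    assumptions for readability, not the paper's algorithm."""
    u, s, vt = np.linalg.svd(adj, full_matrices=False)
    low_rank = (u[:, :rank] * s[:rank]) @ vt[:rank]
    return np.clip(low_rank, 0.0, 1.0)  # keep entries in a valid weight range
```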
- Robustness of Graph Neural Networks at Scale [63.45769413975601]
We study how to attack and defend Graph Neural Networks (GNNs) at scale.
We propose two sparsity-aware first-order optimization attacks that maintain an efficient representation.
We show that common surrogate losses are not well-suited for global attacks on GNNs.
arXiv Detail & Related papers (2021-10-26T21:31:17Z)
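A sparsity-aware first-order attack avoids materializing a dense n x n perturbation matrix by scoring only small sampled blocks of candidate edge flips. The sketch below assumes a `grad_fn` callback and follows the block-sampling spirit of the paper, not its exact algorithm:
```python
import numpy as np

def sparse_edge_attack(grad_fn, n_nodes, budget, block_size=1000, steps=20, seed=0):
    """Illustrative sparsity-aware first-order attack: accumulate gradient
    scores for random blocks of candidate edge flips, then spend the flip
    budget on the highest-scoring candidates. `grad_fn(edges)` is an assumed
    callback returning one loss-gradient value per candidate edge."""
    rng = np.random.default_rng(seed)
    scores = {}
    for _ in range(steps):
        rows = rng.integers(0, n_nodes, size=block_size)
        cols = rng.integers(0, n_nodes, size=block_size)
        # Undirected, no self-loops: canonicalize each pair as (min, max).
        cand = [(min(i, j), max(i, j)) for i, j in zip(rows, cols) if i != j]
        for edge, g in zip(cand, grad_fn(cand)):
            scores[edge] = scores.get(edge, 0.0) + g
    return sorted(scores, key=scores.get, reverse=True)[:budget]
```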
- A Hard Label Black-box Adversarial Attack Against Graph Neural Networks [25.081630882605985]
We conduct a systematic study on adversarial attacks against GNNs for graph classification via perturbing the graph structure.
We formulate our attack as an optimization problem whose objective is to minimize the number of edges to be perturbed in a graph while maintaining a high attack success rate.
Our experimental results on three real-world datasets demonstrate that our attack can effectively attack representative GNNs for graph classification with fewer queries and perturbations.
arXiv Detail & Related papers (2021-08-21T14:01:34Z)
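The stated objective, fewest perturbed edges subject to a successful attack under hard-label feedback, can be illustrated with a greedy pruning loop; `predict` and `flip_edges` are assumed interfaces, and the paper's optimization is more sophisticated than this sketch:
```python
def minimize_perturbation(predict, flip_edges, perturbed_edges, true_label):
    """Greedy reading of the stated objective: starting from a successful
    structural perturbation, drop edge flips one by one as long as the model,
    queried only for its top-1 label, still misclassifies. `predict` and
    `flip_edges` are assumed callbacks, not the paper's API."""
    kept = list(perturbed_edges)
    for edge in list(kept):
        trial = [e for e in kept if e != edge]
        if predict(flip_edges(trial)) != true_label:  # attack still succeeds
            kept = trial  # the flip was unnecessary; discard it
    return kept
```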
- Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks [43.60973654460398]
Graph Neural Networks (GNNs) are generalizations of neural networks to graph-structured data.
GNNs are vulnerable to adversarial attacks, i.e., a small perturbation to the structure can lead to a non-trivial performance degradation.
We propose Uncertainty Matching GNN (UM-GNN), which aims to improve the robustness of GNN models.
arXiv Detail & Related papers (2020-09-30T05:29:42Z)
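Although the summary gives no details, an uncertainty-matching objective can be sketched as pulling a structure-free surrogate toward the GNN's predictions, trusting low-uncertainty nodes more; the weighting below is an assumption, not the paper's exact loss:
```python
import torch
import torch.nn.functional as F

def uncertainty_matching_loss(gnn_logits, surrogate_logits, uncertainty):
    """Illustrative uncertainty-matching term: align a structure-free
    surrogate with the GNN's predictions, weighting low-uncertainty (more
    trustworthy) nodes more heavily. The exp(-uncertainty) weighting is an
    assumed choice for this sketch."""
    weights = torch.exp(-uncertainty)  # (n,) per-node confidence weights
    kl = F.kl_div(F.log_softmax(surrogate_logits, dim=-1),
                  F.softmax(gnn_logits, dim=-1),
                  reduction="none").sum(dim=-1)  # per-node KL divergence
    return (weights * kl).mean()
```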
- Adversarial Attack on Large Scale Graph [58.741365277995044]
Recent studies have shown that graph neural networks (GNNs) are vulnerable to adversarial perturbations due to a lack of robustness.
Currently, most works on attacking GNNs use gradient information to guide the attack and achieve outstanding performance.
We argue that the main obstacle to scaling such attacks is that they must operate on the whole graph, so their time and space complexity grow with the data scale.
We present a practical metric named Degree Assortativity Change (DAC) to measure the impacts of adversarial attacks on graph data.
arXiv Detail & Related papers (2020-09-08T02:17:55Z)
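A direct reading of the DAC definition as stated, comparing degree assortativity before and after an attack (the exact sign convention and normalization may differ from the paper):
```python
import networkx as nx

def degree_assortativity_change(clean_graph, attacked_graph):
    """Direct reading of the DAC metric described above: how much an attack
    shifts the graph's degree assortativity. Sign and normalization are
    assumptions; the paper's exact formula may differ."""
    before = nx.degree_assortativity_coefficient(clean_graph)
    after = nx.degree_assortativity_coefficient(attacked_graph)
    return abs(after - before)
```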
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior backdoor attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
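A subgraph trigger can be illustrated by splicing a small graph, with its own topology and node features, onto victim nodes. The wiring below is one illustrative choice, not GTA's learned construction:
```python
import networkx as nx

def inject_trigger(graph, trigger, victim_nodes):
    """Illustrative subgraph trigger injection: splice a small trigger graph
    (topology plus node features stored as attributes) onto victim nodes.
    Assumes integer node labels; attaching one edge per victim is an
    illustrative choice, not the paper's construction."""
    poisoned = graph.copy()
    offset = max(poisoned.nodes) + 1
    shifted = nx.relabel_nodes(trigger, {n: n + offset for n in trigger.nodes})
    poisoned.add_nodes_from(shifted.nodes(data=True))  # carries feature attributes
    poisoned.add_edges_from(shifted.edges)
    for victim, t_node in zip(victim_nodes, shifted.nodes):
        poisoned.add_edge(victim, t_node)  # attach the trigger to the victim
    return poisoned
```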
- Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
arXiv Detail & Related papers (2020-05-20T17:07:05Z)
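Jointly learning the structure and the model typically means alternating optimization: one step updates a learnable adjacency under sparsity and low-rank penalties plus the task loss, the next trains the GNN on the cleaned graph. A single structure-update step, sketched in PyTorch with illustrative penalty weights and an assumed `gnn(feats, S)` interface:
```python
import torch

def prognn_structure_step(S, feats, labels, gnn, loss_fn,
                          alpha=1e-3, beta=1e-3, lr=1e-2):
    """One illustrative structure-update step in the spirit of Pro-GNN:
    nudge a learnable adjacency S toward the task loss while an L1 term
    encourages sparsity and a nuclear-norm term encourages low rank.
    Penalty weights and the single-step form are assumptions."""
    S = S.detach().requires_grad_(True)
    task = loss_fn(gnn(feats, S), labels)
    reg = alpha * S.abs().sum() + beta * torch.linalg.matrix_norm(S, ord="nuc")
    (task + reg).backward()
    with torch.no_grad():
        S_new = (S - lr * S.grad).clamp(0.0, 1.0)  # project to valid edge weights
        S_new = (S_new + S_new.T) / 2              # keep the adjacency symmetric
    return S_new
```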
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.