GUARD: Graph Universal Adversarial Defense
- URL: http://arxiv.org/abs/2204.09803v4
- Date: Sat, 12 Aug 2023 10:03:40 GMT
- Title: GUARD: Graph Universal Adversarial Defense
- Authors: Jintang Li, Jie Liao, Ruofan Wu, Liang Chen, Zibin Zheng, Jiawang Dan,
Changhua Meng, Weiqiang Wang
- Abstract summary: We present a simple yet effective method, named Graph Universal Adversarial Defense (GUARD).
GUARD protects each individual node from attacks with a universal defensive patch, which is generated once and can be applied to any node in a graph.
GUARD significantly improves robustness for several established GCNs against multiple adversarial attacks and outperforms state-of-the-art defense methods by large margins.
- Score: 54.81496179947696
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph convolutional networks (GCNs) have been shown to be vulnerable to small
adversarial perturbations, which becomes a severe threat and largely limits
their applications in security-critical scenarios. To mitigate such a threat,
considerable research efforts have been devoted to increasing the robustness of
GCNs against adversarial attacks. However, current defense approaches are
typically designed to protect GCNs against untargeted adversarial attacks and
focus on overall performance, making it challenging to protect important local
nodes from more powerful targeted adversarial attacks. Additionally, a
trade-off between robustness and performance is often made in existing
research. Such limitations highlight the need for developing an effective and
efficient approach that can defend local nodes against targeted attacks,
without compromising the overall performance of GCNs. In this work, we present
a simple yet effective method, named Graph Universal Adversarial Defense
(GUARD). Unlike previous works, GUARD protects each individual node from
attacks with a universal defensive patch, which is generated once and can be
applied to any node (node-agnostic) in a graph. GUARD is fast, straightforward
to implement without any change to the network architecture or any additional
parameters, and is broadly applicable to any GCN. Extensive experiments on
four benchmark datasets demonstrate that GUARD significantly improves
robustness for several established GCNs against multiple adversarial attacks
and outperforms state-of-the-art defense methods by large margins.
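A minimal sketch of the node-agnostic idea described above, under stated assumptions: the universal patch is taken to be a fixed set of high-degree nodes (degree standing in as a proxy for influence), and the defense detaches a target node from them before inference. The function names, the degree heuristic, and the patch size k are illustrative, not the paper's exact derivation.

```python
import numpy as np
import scipy.sparse as sp

def build_universal_patch(adj: sp.csr_matrix, k: int = 50) -> np.ndarray:
    # Computed ONCE per graph: pick the k nodes with the largest influence,
    # approximated here by degree (illustrative; the paper uses its own score).
    degrees = np.asarray(adj.sum(axis=1)).ravel()
    return np.argsort(-degrees)[:k]          # node ids forming the patch

def apply_patch(adj: sp.csr_matrix, target: int, patch: np.ndarray) -> sp.csr_matrix:
    # Node-agnostic defense: detach the target from every patch node and run
    # the unchanged GCN on the purified adjacency.
    purified = adj.tolil(copy=True)
    purified[target, patch] = 0
    purified[patch, target] = 0
    return purified.tocsr()
```

Since the patch is generated once, protecting any given node at inference costs only the O(k) edge removals, consistent with the abstract's claims of speed and of leaving the GCN's architecture and parameters untouched.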
Related papers
- Graph Agent Network: Empowering Nodes with Inference Capabilities for Adversarial Resilience [50.460555688927826]
We propose the Graph Agent Network (GAgN) to address the vulnerabilities of graph neural networks (GNNs).
GAgN is a graph-structured agent network in which each node is designed as a 1-hop-view agent.
Agents' limited view prevents malicious messages from propagating globally in GAgN, thereby resisting global-optimization-based secondary attacks.
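A toy sketch of the 1-hop-view constraint, assuming each agent simply averages its own and its direct neighbors' states; the update rule and data layout are illustrative, not GAgN's actual agent design.

```python
import torch

def one_hop_agent_update(h: torch.Tensor, neighbors: dict[int, list[int]]) -> torch.Tensor:
    # Each node-agent sees only its direct neighbors' states, so any message,
    # malicious or not, moves at most one hop per round; `neighbors` is
    # assumed to map every node id to its adjacency list.
    out = torch.zeros_like(h)
    for v, nbrs in neighbors.items():
        view = h[[v] + nbrs]          # the agent's entire 1-hop view
        out[v] = view.mean(dim=0)     # local decision from local evidence
    return out
```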
arXiv Detail & Related papers (2023-06-12T07:27:31Z)
- Single Node Injection Label Specificity Attack on Graph Neural Networks via Reinforcement Learning [8.666702832094874]
We present a gradient-free generalizable adversary that injects a single malicious node to manipulate a target node in the black-box evasion setting.
By directly querying the victim model, G$^2$-SNIA learns patterns from exploration to achieve diverse attack goals with extremely limited attack budgets.
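A toy version of that query-only interaction, replacing the paper's learned RL policy with naive random search; `query_model` (a hypothetical black-box oracle returning class probabilities for the target after injection), the binary feature space, and the budget are assumptions for illustration.

```python
import numpy as np

def single_node_injection(query_model, target, goal_label, num_feats,
                          budget=20, seed=0):
    # Black-box evasion sketch: inject one node wired to the target, propose
    # candidate features, and keep whichever query best advances the goal.
    rng = np.random.default_rng(seed)
    best_feat, best_score = None, -np.inf
    for _ in range(budget):                   # extremely limited query budget
        cand = rng.integers(0, 2, num_feats)  # candidate binary features
        score = query_model(cand, edges=[target])[goal_label]
        if score > best_score:
            best_feat, best_score = cand, score
    return best_feat, best_score
```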
arXiv Detail & Related papers (2023-05-04T15:10:41Z)
- Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA).
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
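For context, a plain L-infinity PGD loop is sketched below; per the summary, G-PGA's contribution is a surrogate-guided step that removes the need for the random restarts and step-size search this baseline would otherwise lean on. All names and the epsilon/alpha values are illustrative.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=40):
    # Baseline PGD: ascend the classification loss, then project back into
    # the eps-ball around the clean input and the valid pixel range.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()   # ascend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)       # project to eps-ball
        x_adv = x_adv.clamp(0, 1)                      # stay a valid image
    return x_adv.detach()
```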
arXiv Detail & Related papers (2022-12-30T18:45:23Z)
- Robustness of Graph Neural Networks at Scale [63.45769413975601]
We study how to attack and defend Graph Neural Networks (GNNs) at scale.
We propose two sparsity-aware first-order optimization attacks that maintain an efficient representation of the adversarial perturbation.
We show that common surrogate losses are not well-suited for global attacks on GNNs.
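The sparsity-aware idea can be caricatured as block coordinate descent over a small random set of candidate edge flips, so the perturbation never materializes as a dense n-by-n matrix; `attack_loss` is an assumed differentiable surrogate and the update rule below is deliberately simplified.

```python
import torch

def sparse_edge_attack(attack_loss, num_nodes, budget,
                       block_size=10_000, rounds=10, lr=0.1):
    # Keep only a random block of candidate edges with continuous weights w;
    # gradients are taken w.r.t. w alone, so memory stays O(block_size)
    # rather than O(n^2).
    cand = torch.randint(num_nodes, (2, block_size))   # candidate edges (i, j)
    w = torch.zeros(block_size, requires_grad=True)
    for _ in range(rounds):
        loss = attack_loss(cand, w)                    # GNN loss under weights w
        g, = torch.autograd.grad(loss, w)
        with torch.no_grad():
            w += lr * g                                # ascend the attack loss
            keep = torch.topk(w, budget).indices
            pruned = torch.zeros_like(w)
            pruned[keep] = w[keep]                     # enforce sparsity budget
            w.copy_(pruned)
    return cand[:, torch.topk(w.detach(), budget).indices]  # chosen edge flips
```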
arXiv Detail & Related papers (2021-10-26T21:31:17Z)
- Spatio-Temporal Sparsification for General Robust Graph Convolution Networks [16.579675313683627]
Graph Neural Networks (GNNs) have attracted increasing attention due to their successful applications to various graph-structured data.
Recent studies have shown that adversarial attacks threaten the functionality of GNNs.
We propose to defend against adversarial attacks on GNNs by applying Spatio-Temporal sparsification (ST-Sparse) to the GNN's hidden node representations.
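One plausible reading of that operator, sketched under assumptions: keep only the top fraction of each node's hidden activations and zero the rest, so adversarial energy spread thinly over many dimensions is discarded. The ratio and the exact keep-rule are illustrative, not the paper's definition.

```python
import torch

def st_sparse(h: torch.Tensor, ratio: float = 0.2) -> torch.Tensor:
    # Per-node top-k sparsification of hidden features: retain the few
    # largest-magnitude activations and zero everything else.
    k = max(1, int(ratio * h.size(-1)))
    topk = torch.topk(h.abs(), k, dim=-1).indices
    mask = torch.zeros_like(h).scatter_(-1, topk, 1.0)
    return h * mask
```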
arXiv Detail & Related papers (2021-03-23T02:03:11Z)
- GNNGuard: Defending Graph Neural Networks against Adversarial Attacks [16.941548115261433]
We develop GNNGuard, an algorithm to defend against a variety of training-time attacks that perturb the discrete graph structure.
GNNGuard learns to assign higher weights to edges connecting similar nodes while pruning edges between unrelated nodes.
Experiments show that GNNGuard outperforms existing defense approaches by 15.3% on average.
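That edge-weighting idea translates directly into a short routine: score each edge by the feature similarity of its endpoints, use the score as the message weight, and prune below a threshold. Cosine similarity and the 0.1 cutoff are illustrative stand-ins for the paper's learned weighting.

```python
import torch
import torch.nn.functional as F

def guard_edge_weights(h: torch.Tensor, edge_index: torch.Tensor,
                       prune_below: float = 0.1) -> torch.Tensor:
    # h: node features (N, d); edge_index: 2 x E tensor of (src, dst) pairs.
    src, dst = edge_index
    sim = F.cosine_similarity(h[src], h[dst], dim=-1)  # in [-1, 1]
    w = sim.clamp(min=0.0)                             # dissimilar pairs -> 0
    w[w < prune_below] = 0.0                           # prune weak edges
    return w                                           # per-edge message weights
```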
arXiv Detail & Related papers (2020-06-15T06:07:46Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
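A minimal sketch of the label-free, input-space adversary such training can rely on: perturbations are found by maximizing feature distortion rather than a classification loss, so no labels are needed. The feature extractor, radii, and step counts are assumptions, not the paper's exact setup.

```python
import torch

def self_supervised_perturb(feature_extractor, x, eps=8/255, alpha=2/255, steps=10):
    # Label-free attack: push the input's deep features as far as possible
    # from their clean values, staying inside an eps-ball of the input.
    x_adv = x.clone().detach()
    clean_feat = feature_extractor(x).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        dist = (feature_extractor(x_adv) - clean_feat).norm()
        g, = torch.autograd.grad(dist, x_adv)
        x_adv = x_adv.detach() + alpha * g.sign()   # maximize feature distortion
        x_adv = x + (x_adv - x).clamp(-eps, eps)    # project to eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```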
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
- AN-GCN: An Anonymous Graph Convolutional Network Defense Against Edge-Perturbing Attack [53.06334363586119]
Recent studies have revealed the vulnerability of graph convolutional networks (GCNs) to edge-perturbing attacks.
We first generalize the formulation of edge-perturbing attacks and rigorously prove the vulnerability of GCNs to such attacks in node classification tasks.
Following this, an anonymous graph convolutional network, named AN-GCN, is proposed to counter edge-perturbing attacks.
arXiv Detail & Related papers (2020-05-06T08:15:24Z)