Projective Ranking-based GNN Evasion Attacks
- URL: http://arxiv.org/abs/2202.12993v1
- Date: Fri, 25 Feb 2022 21:52:09 GMT
- Title: Projective Ranking-based GNN Evasion Attacks
- Authors: He Zhang, Xingliang Yuan, Chuan Zhou, Shirui Pan
- Abstract summary: Graph neural networks (GNNs) offer promising learning methods for graph-related tasks.
GNNs are at risk of adversarial attacks.
- Score: 52.85890533994233
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) offer promising learning methods for
graph-related tasks. However, GNNs are at risk of adversarial attacks. Two
primary limitations of current evasion attack methods are highlighted: (1)
GradArgmax ignores the "long-term" benefit of a perturbation and suffers from
zero gradients and invalid benefit estimates in certain situations. (2) In
reinforcement learning-based attack methods, the learned attack strategies may
not transfer when the attack budget changes. To
this end, we first formulate the perturbation space and propose an evaluation
framework and the projective ranking method. We aim to learn a powerful attack
strategy and then adapt it as little as possible to generate adversarial samples
under dynamic budget settings. In our method, based on mutual information, we
rank and assess the attack benefits of each perturbation for an effective
attack strategy. By projecting the strategy, our method dramatically minimizes
the cost of learning a new attack strategy when the attack budget changes. In
the comparative assessment with GradArgmax and RL-S2V, the results show our
method achieves high attack performance and effective transferability. The
visualization of our method also reveals various attack patterns in the
generation of adversarial samples.
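The ranking-and-reuse idea above can be made concrete with a small sketch. The following Python fragment is a hypothetical illustration only: the candidate edge flips and the toy scoring rule are placeholders for the paper's learned, mutual-information-based benefit estimate; the point is simply that the ranking is computed once and then projected onto whatever budget is in effect.

# Minimal illustrative sketch of the ranking-and-reuse pattern described above.
# The candidate edge flips and the toy scoring rule are hypothetical stand-ins;
# the paper's actual benefit estimate is learned and based on mutual information.
from typing import Iterable, List, Tuple

Edge = Tuple[int, int]

def rank_perturbations(candidates: Iterable[Edge], score_fn) -> List[Tuple[Edge, float]]:
    """Score every candidate perturbation once and sort by estimated attack benefit."""
    scored = [(edge, score_fn(edge)) for edge in candidates]
    scored.sort(key=lambda item: item[1], reverse=True)
    return scored

def select_under_budget(ranked: List[Tuple[Edge, float]], budget: int) -> List[Edge]:
    """Project the fixed ranking onto a given budget by taking the top-k perturbations.
    When the budget changes, only this cheap selection step is repeated; the ranking
    itself is reused, which is what keeps the strategy transferable across budgets."""
    return [edge for edge, _ in ranked[:budget]]

if __name__ == "__main__":
    # Toy example: candidate edge flips on a 5-node graph with a placeholder score.
    candidates = [(i, j) for i in range(5) for j in range(i + 1, 5)]
    toy_score = lambda e: (e[0] + 1) / (e[1] + 1)  # stand-in for the learned benefit
    ranked = rank_perturbations(candidates, toy_score)
    for budget in (1, 3, 5):  # dynamic budget settings
        print(budget, select_under_budget(ranked, budget))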
Related papers
- Adversarial Attacks on Online Learning to Rank with Stochastic Click
Models [34.725468803108754]
We propose the first study of adversarial attacks on online learning to rank.
The goal of the adversary is to misguide the online learning to rank algorithm into placing the target item at the top of the ranking list a number of times that is linear in the time horizon $T$, while incurring only a sublinear attack cost.
arXiv Detail & Related papers (2023-05-30T17:05:49Z) - Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA).
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
arXiv Detail & Related papers (2022-12-30T18:45:23Z) - LAS-AT: Adversarial Training with Learnable Attack Strategy [82.88724890186094]
"Learnable attack strategy", dubbed LAS-AT, learns to automatically produce attack strategies to improve the model robustness.
Our framework is composed of a target network that uses AEs for training to improve robustness and a strategy network that produces attack strategies to control the AE generation.
arXiv Detail & Related papers (2022-03-13T10:21:26Z) - Learning to Learn Transferable Attack [77.67399621530052]
Transfer adversarial attack is a non-trivial black-box adversarial attack that aims to craft adversarial perturbations on the surrogate model and then apply such perturbations to the victim model.
We propose a Learning to Learn Transferable Attack (LLTA) method, which makes the adversarial perturbations more generalized via learning from both data and model augmentation.
Empirical results on a widely used dataset demonstrate the effectiveness of our attack method, with a 12.85% higher transfer attack success rate than state-of-the-art methods.
arXiv Detail & Related papers (2021-12-10T07:24:21Z) - Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial
Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z) - Disturbing Reinforcement Learning Agents with Corrupted Rewards [62.997667081978825]
We analyze the effects of different attack strategies based on reward perturbations on reinforcement learning algorithms.
We show that smoothly crafted adversarial rewards can mislead the learner, and that with low exploration probability values the learned policy is more robust to corrupt rewards.
arXiv Detail & Related papers (2021-02-12T15:53:48Z) - Robust Federated Learning with Attack-Adaptive Aggregation [45.60981228410952]
Federated learning is vulnerable to various attacks, such as model poisoning and backdoor attacks.
We propose an attack-adaptive aggregation strategy to defend against various attacks for robust learning.
arXiv Detail & Related papers (2021-02-10T04:23:23Z) - Adversarial example generation with AdaBelief Optimizer and Crop
Invariance [8.404340557720436]
Adversarial attacks can be an important method to evaluate and select robust models in safety-critical applications.
We propose AdaBelief Iterative Fast Gradient Method (ABI-FGM) and Crop-Invariant attack Method (CIM) to improve the transferability of adversarial examples.
Our method has higher success rates than state-of-the-art gradient-based attack methods.
arXiv Detail & Related papers (2021-02-07T06:00:36Z) - Progressive Defense Against Adversarial Attacks for Deep Learning as a
Service in Internet of Things [9.753864027359521]
Deep Neural Networks (DNNs) can be easily misled by adding relatively small adversarial perturbations to the input.
We present a defense strategy called progressive defense against adversarial attacks (PDAAA) for efficiently and effectively filtering out adversarial pixel mutations.
The results show it outperforms the state of the art while reducing the cost of model training by 50% on average.
arXiv Detail & Related papers (2020-10-15T06:40:53Z) - Stealthy and Efficient Adversarial Attacks against Deep Reinforcement
Learning [30.46580767540506]
We introduce two novel adversarial attack techniques to stealthily and efficiently attack deep reinforcement learning agents.
The first technique is the critical point attack: the adversary builds a model to predict future environmental states and the agent's actions, assesses the damage of each possible attack strategy, and selects the optimal one (see the sketch after this list).
The second technique is the antagonist attack: the adversary automatically learns a domain-agnostic model to discover the critical moments for attacking the agent in an episode.
arXiv Detail & Related papers (2020-05-14T16:06:38Z)