The Art of Manipulation: Threat of Multi-Step Manipulative Attacks in
Security Games
- URL: http://arxiv.org/abs/2202.13424v2
- Date: Tue, 1 Mar 2022 04:14:44 GMT
- Title: The Art of Manipulation: Threat of Multi-Step Manipulative Attacks in
Security Games
- Authors: Thanh H. Nguyen and Arunesh Sinha
- Abstract summary: This paper studies the problem of multi-step manipulative attacks in Stackelberg security games.
A clever attacker attempts to orchestrate its attacks over multiple time steps to mislead the defender's learning of the attacker's behavior.
This attack manipulation eventually influences the defender's patrol strategy towards the attacker's benefit.
- Score: 8.87104231451079
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper studies the problem of multi-step manipulative attacks in
Stackelberg security games, in which a clever attacker attempts to orchestrate
its attacks over multiple time steps to mislead the defender's learning of the
attacker's behavior. This attack manipulation eventually influences the
defender's patrol strategy towards the attacker's benefit. Previous work along
this line of research only focuses on one-shot games in which the defender
learns the attacker's behavior and then designs a corresponding strategy only
once. Our work, on the other hand, investigates the long-term impact of the
attacker's manipulation in which current attack and defense choices of players
determine the future learning and patrol planning of the defender. This paper
has three key contributions. First, we introduce a new multi-step manipulative
attack game model that captures the impact of sequential manipulative attacks
carried out by the attacker over the entire time horizon. Second, we propose a
new algorithm to compute an optimal manipulative attack plan for the attacker,
which tackles the challenge of multiple connected optimization components
involved in the computation across multiple time steps. Finally, we present
extensive experimental results on the impact of such misleading attacks,
showing a significant benefit for the attacker and loss for the defender.
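The manipulation dynamic described in the abstract can be illustrated with a toy sketch. Everything here is an illustrative assumption, not the paper's actual model: three targets, fixed attacker values, and a defender who "learns" by patrolling targets in proportion to observed attack frequencies. The attacker sacrifices early payoff on a low-value target to skew the defender's estimate, then exploits the resulting under-coverage of the high-value target:

```python
# Toy repeated security game: the defender patrols targets in proportion to
# how often each has been attacked; the attacker manipulates that estimate.
# Target values and the frequency-based learning rule are illustrative
# assumptions, not the formulation used in the paper.

TARGETS = [0, 1, 2]
ATTACKER_VALUE = {0: 10.0, 1: 6.0, 2: 2.0}  # reward if the target is unprotected

def defender_coverage(observed_attacks):
    """Empirical attack frequencies (add-one smoothing) used as patrol coverage."""
    counts = [1] * len(TARGETS)
    for t in observed_attacks:
        counts[t] += 1
    total = sum(counts)
    return [c / total for c in counts]

def attacker_payoff(target, coverage):
    # Expected payoff: full value if the patrol misses, zero if it covers.
    return (1 - coverage[target]) * ATTACKER_VALUE[target]

history, total_payoff = [], 0.0
for step in range(10):
    cov = defender_coverage(history)
    if step < 5:
        # Manipulation phase: hit the low-value target to draw patrols there.
        choice = 2
    else:
        # Exploitation phase: best-respond to the skewed coverage.
        choice = max(TARGETS, key=lambda t: attacker_payoff(t, cov))
    total_payoff += attacker_payoff(choice, cov)
    history.append(choice)

print(history)  # five manipulation rounds on target 2, then exploitation of target 0
print(total_payoff)
```

In this sketch the defender's coverage of target 0 drops during the manipulation phase, so the later best responses collect nearly the full value of the high-value target, mirroring the long-term attacker benefit the paper quantifies.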
Related papers
- Towards Efficient Transferable Preemptive Adversarial Defense [13.252842556505174]
Deep learning technology has become untrustworthy because of its sensitivity to perturbations.
We have devised a strategy for "attacking" the message before it is attacked.
Running only three steps, our Fast Preemption framework outperforms benchmark training-time, test-time, and preemptive adversarial defenses.
arXiv Detail & Related papers (2024-07-22T10:23:44Z) - Multi-Trigger Backdoor Attacks: More Triggers, More Threats [71.08081471803915]
We investigate the practical threat of backdoor attacks under the setting of multi-trigger attacks.
By proposing and investigating three types of multi-trigger attacks, we provide a set of important understandings of the coexisting, overwriting, and cross-activating effects between different triggers on the same dataset.
We create a multi-trigger backdoor poisoning dataset to help future evaluation of backdoor attacks and defenses.
arXiv Detail & Related papers (2024-01-27T04:49:37Z) - On the Difficulty of Defending Contrastive Learning against Backdoor
Attacks [58.824074124014224]
We show how contrastive backdoor attacks operate through distinctive mechanisms.
Our findings highlight the need for defenses tailored to the specificities of contrastive backdoor attacks.
arXiv Detail & Related papers (2023-12-14T15:54:52Z) - Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor
Attacks in Federated Learning [102.05872020792603]
We propose an attack that anticipates and accounts for the entire federated learning pipeline, including behaviors of other clients.
We show that this new attack is effective in realistic scenarios where the attacker only contributes to a small fraction of randomly sampled rounds.
arXiv Detail & Related papers (2022-10-17T17:59:38Z) - A Game-Theoretic Approach for AI-based Botnet Attack Defence [5.020067709306813]
A new generation of botnets leverages Artificial Intelligence (AI) techniques to conceal the identity of botmasters and the attack intention in order to avoid detection.
No existing assessment tool can evaluate the effectiveness of current defense strategies against this kind of AI-based botnet attack.
We propose a sequential game theory model capable of analysing the potential strategies botnet attackers and defenders could use to reach a Nash Equilibrium (NE)
arXiv Detail & Related papers (2021-12-04T02:53:40Z) - Unrestricted Adversarial Attacks on ImageNet Competition [70.8952435964555]
Unrestricted adversarial attack is a popular and practical direction but has not been studied thoroughly.
We organize this competition with the purpose of exploring more effective unrestricted adversarial attack algorithms.
arXiv Detail & Related papers (2021-10-17T04:27:15Z) - Widen The Backdoor To Let More Attackers In [24.540853975732922]
We investigate the scenario of a multi-agent backdoor attack, where multiple non-colluding attackers craft and insert triggered samples in a shared dataset.
We discover a clear backfiring phenomenon: increasing the number of attackers shrinks each attacker's attack success rate (ASR).
We then exploit this phenomenon to minimize the collective ASR of attackers and maximize defender's robustness accuracy.
arXiv Detail & Related papers (2021-10-09T13:53:57Z) - Game Theory for Adversarial Attacks and Defenses [0.0]
Adversarial attacks can generate adversarial inputs by applying small but intentionally worst-case perturbations to samples from the dataset.
Some adversarial defense techniques have been developed to improve the security and robustness of models and prevent them from being attacked.
arXiv Detail & Related papers (2021-10-08T07:38:33Z) - What Doesn't Kill You Makes You Robust(er): Adversarial Training against
Poisons and Backdoors [57.040948169155925]
We extend the adversarial training framework to defend against (training-time) poisoning and backdoor attacks.
Our method desensitizes networks to the effects of poisoning by creating poisons during training and injecting them into training batches.
We show that this defense withstands adaptive attacks, generalizes to diverse threat models, and incurs a better performance trade-off than previous defenses.
arXiv Detail & Related papers (2021-02-26T17:54:36Z) - Deflecting Adversarial Attacks [94.85315681223702]
We present a new approach towards ending this cycle where we "deflect" adversarial attacks by causing the attacker to produce an input that resembles the attack's target class.
We first propose a stronger defense based on Capsule Networks that combines three detection mechanisms to achieve state-of-the-art detection performance.
arXiv Detail & Related papers (2020-02-18T06:59:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.