The Art of Manipulation: Threat of Multi-Step Manipulative Attacks in
Security Games
- URL: http://arxiv.org/abs/2202.13424v2
- Date: Tue, 1 Mar 2022 04:14:44 GMT
- Title: The Art of Manipulation: Threat of Multi-Step Manipulative Attacks in
Security Games
- Authors: Thanh H. Nguyen and Arunesh Sinha
- Abstract summary: This paper studies the problem of multi-step manipulative attacks in Stackelberg security games.
A clever attacker attempts to orchestrate its attacks over multiple time steps to mislead the defender's learning of the attacker's behavior.
This attack manipulation eventually influences the defender's patrol strategy towards the attacker's benefit.
- Score: 8.87104231451079
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper studies the problem of multi-step manipulative attacks in
Stackelberg security games, in which a clever attacker attempts to orchestrate
its attacks over multiple time steps to mislead the defender's learning of the
attacker's behavior. This attack manipulation eventually influences the
defender's patrol strategy towards the attacker's benefit. Previous work along
this line of research only focuses on one-shot games in which the defender
learns the attacker's behavior and then designs a corresponding strategy only
once. Our work, on the other hand, investigates the long-term impact of the
attacker's manipulation in which current attack and defense choices of players
determine the future learning and patrol planning of the defender. This paper
has three key contributions. First, we introduce a new multi-step manipulative
attack game model that captures the impact of sequential manipulative attacks
carried out by the attacker over the entire time horizon. Second, we propose a
new algorithm to compute an optimal manipulative attack plan for the attacker,
which tackles the challenge of multiple connected optimization components
involved in the computation across multiple time steps. Finally, we present
extensive experimental results on the impact of such misleading attacks,
showing a significant benefit for the attacker and loss for the defender.
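To make the setting concrete, below is a minimal, self-contained sketch of a repeated Stackelberg security game in which the defender re-fits a quantal-response model of the attacker after each round and the attacker picks its target with a greedy one-step lookahead on how that choice will shift the defender's estimate and future coverage. The payoff values, the proportional-coverage heuristic, and the lookahead rule are assumptions made purely for illustration; this is not the game model or algorithm proposed in the paper.

```python
# Illustrative toy only -- NOT the paper's model or algorithm.
# A repeated Stackelberg security game: the defender learns a quantal-response
# parameter lambda from the attack history and covers targets in proportion to
# the predicted threat; the attacker uses a greedy one-step lookahead to decide
# whether deviating from its myopic best response biases the defender's
# learning (and hence future coverage) in its favor.
import numpy as np

rng = np.random.default_rng(0)
N, RESOURCES, HORIZON = 5, 2.0, 15
att_reward = rng.uniform(1.0, 10.0, N)   # attacker payoff if the target is uncovered
def_loss = att_reward.copy()             # zero-sum toy payoffs for the defender

def quantal_response(coverage, lam):
    """Softmax (quantal) attack distribution over targets given a coverage vector."""
    u = (1.0 - coverage) * att_reward
    e = np.exp(lam * (u - u.max()))
    return e / e.sum()

def defender_coverage(lam_hat):
    """Cover targets in proportion to the threat predicted by the learned model."""
    base = np.full(N, RESOURCES / N)                      # reference (uniform) coverage
    threat = quantal_response(base, lam_hat) * def_loss
    return np.clip(RESOURCES * threat / threat.sum(), 0.0, 1.0)

def fit_lambda(history):
    """Defender's grid-search MLE of lambda from (coverage, attacked-target) pairs."""
    if not history:
        return 1.0
    grid = np.linspace(0.05, 5.0, 100)
    loglik = lambda lam: sum(np.log(quantal_response(c, lam)[t]) for c, t in history)
    return max(grid, key=loglik)

def attacker_payoff(t, coverage):
    return (1.0 - coverage[t]) * att_reward[t]

history, total = [], 0.0
for step in range(HORIZON):
    lam_hat = fit_lambda(history)
    cov = defender_coverage(lam_hat)

    def lookahead_value(t):
        # Immediate payoff plus the best myopic payoff against the coverage the
        # defender would play after re-estimating lambda on the extended history.
        next_cov = defender_coverage(fit_lambda(history + [(cov, t)]))
        return attacker_payoff(t, cov) + max(attacker_payoff(s, next_cov) for s in range(N))

    target = max(range(N), key=lookahead_value)
    total += attacker_payoff(target, cov)
    history.append((cov, target))

print(f"attacker cumulative payoff under greedy manipulation: {total:.2f}")
```

Replacing the lookahead with the attacker's myopic best response (always attacking the target maximizing (1 - coverage) * reward) gives a simple baseline for gauging how much such manipulation gains in this toy setting.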
Related papers
- On the Difficulty of Defending Contrastive Learning against Backdoor
Attacks [58.824074124014224]
We show how contrastive backdoor attacks operate through distinctive mechanisms.
Our findings highlight the need for defenses tailored to the specificities of contrastive backdoor attacks.
arXiv Detail & Related papers (2023-12-14T15:54:52Z) - Adversarial Machine Learning and Defense Game for NextG Signal
Classification with Deep Learning [1.1726528038065764]
NextG systems can employ deep neural networks (DNNs) for various tasks such as user equipment identification, physical layer authentication, and detection of incumbent users.
This paper presents a game-theoretic framework to study the interactions of attack and defense for deep learning-based NextG signal classification.
arXiv Detail & Related papers (2022-12-22T15:13:03Z) - Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor
Attacks in Federated Learning [102.05872020792603]
We propose an attack that anticipates and accounts for the entire federated learning pipeline, including behaviors of other clients.
We show that this new attack is effective in realistic scenarios where the attacker only contributes to a small fraction of randomly sampled rounds.
arXiv Detail & Related papers (2022-10-17T17:59:38Z) - A Game-Theoretic Approach for AI-based Botnet Attack Defence [5.020067709306813]
A new generation of botnets leverages Artificial Intelligence (AI) techniques to conceal the identity of botmasters and the attack intention in order to avoid detection.
No existing assessment tool can evaluate the effectiveness of current defense strategies against this kind of AI-based botnet attack.
We propose a sequential game-theoretic model capable of analysing the potential strategies botnet attackers and defenders could use to reach a Nash Equilibrium (NE).
arXiv Detail & Related papers (2021-12-04T02:53:40Z) - Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the
Age of AI-NIDS [70.60975663021952]
We study blackbox adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z) - Unrestricted Adversarial Attacks on ImageNet Competition [70.8952435964555]
Unrestricted adversarial attacks are a popular and practical direction but have not been studied thoroughly.
We organize this competition with the purpose of exploring more effective unrestricted adversarial attack algorithms.
arXiv Detail & Related papers (2021-10-17T04:27:15Z) - Widen The Backdoor To Let More Attackers In [24.540853975732922]
We investigate the scenario of a multi-agent backdoor attack, where multiple non-colluding attackers craft and insert triggered samples in a shared dataset.
We discover a clear backfiring phenomenon: increasing the number of attackers shrinks each attacker's attack success rate.
We then exploit this phenomenon to minimize the collective ASR of attackers and maximize the defender's robustness accuracy.
arXiv Detail & Related papers (2021-10-09T13:53:57Z) - Game Theory for Adversarial Attacks and Defenses [0.0]
Adversarial attacks can generate adversarial inputs by applying small but intentionally worst-case perturbations to samples from the dataset.
Some adversarial defense techniques are developed to improve the security and robustness of the models and prevent them from being attacked.
arXiv Detail & Related papers (2021-10-08T07:38:33Z) - What Doesn't Kill You Makes You Robust(er): Adversarial Training against
Poisons and Backdoors [57.040948169155925]
We extend the adversarial training framework to defend against (training-time) poisoning and backdoor attacks.
Our method desensitizes networks to the effects of poisoning by creating poisons during training and injecting them into training batches.
We show that this defense withstands adaptive attacks, generalizes to diverse threat models, and incurs a better performance trade-off than previous defenses.
arXiv Detail & Related papers (2021-02-26T17:54:36Z) - Deflecting Adversarial Attacks [94.85315681223702]
We present a new approach towards ending this cycle where we "deflect" adversarial attacks by causing the attacker to produce an input that resembles the attack's target class.
We first propose a stronger defense based on Capsule Networks that combines three detection mechanisms to achieve state-of-the-art detection performance.
arXiv Detail & Related papers (2020-02-18T06:59:13Z)