Simulation of Attacker Defender Interaction in a Noisy Security Game
- URL: http://arxiv.org/abs/2212.04281v1
- Date: Thu, 8 Dec 2022 14:18:44 GMT
- Title: Simulation of Attacker Defender Interaction in a Noisy Security Game
- Authors: Erick Galinkin and Emmanouil Pountourakis and John Carter and Spiros
Mancoridis
- Abstract summary: We introduce a security game framework that simulates interplay between attackers and defenders in a noisy environment.
We demonstrate the importance of making the right assumptions about attackers, given significant differences in outcomes.
There is a measurable trade-off between false positives and true positives in terms of attacker outcomes.
- Score: 1.967117164081002
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the cybersecurity setting, defenders are often at the mercy of their
detection technologies and subject to the information and experiences that
individual analysts have. In order to give defenders an advantage, it is
important to understand an attacker's motivation and their likely next best
action. As a first step in modeling this behavior, we introduce a security game
framework that simulates interplay between attackers and defenders in a noisy
environment, focusing on the factors that drive decision making for attackers
and defenders in the variants of the game with full knowledge and
observability, knowledge of the parameters but no observability of the state
("partial knowledge"), and zero knowledge or observability ("zero
knowledge"). We demonstrate the importance of making the right assumptions
about attackers, given significant differences in outcomes. Furthermore, there
is a measurable trade-off between false positives and true positives in terms
of attacker outcomes, suggesting that a more false-positive-prone environment
may be acceptable under conditions where true positives are also higher.
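Since the abstract describes round-based attacker-defender play under detector noise, a minimal sketch may help make the claimed false-positive/true-positive trade-off concrete. The payoffs, action probabilities, and detector rates below are illustrative assumptions, not the paper's actual game parameters.

```python
import random

# Minimal sketch of one round of a noisy security game (hypothetical
# payoffs and rates; the paper's actual game is not specified here).
def play_round(p_attack, p_respond, tpr, fpr, rng):
    attack = rng.random() < p_attack                   # attacker moves
    alert = rng.random() < (tpr if attack else fpr)    # noisy detector
    respond = alert and rng.random() < p_respond       # defender reacts to alerts
    if attack and respond:
        return -1.0, -0.2   # attack contained: small defender response cost
    if attack:
        return +1.0, -1.0   # undetected attack succeeds
    if respond:
        return 0.0, -0.2    # false positive: wasted defender response
    return 0.0, 0.0

def average_payoffs(tpr, fpr, rounds=20_000, seed=0):
    rng = random.Random(seed)
    atk_total = dfn_total = 0.0
    for _ in range(rounds):
        a, d = play_round(0.5, 0.9, tpr, fpr, rng)
        atk_total += a
        dfn_total += d
    return atk_total / rounds, dfn_total / rounds

# A noisier detector (higher FPR) can still pay off if TPR rises with it.
for tpr, fpr in [(0.60, 0.05), (0.80, 0.20), (0.95, 0.40)]:
    atk, dfn = average_payoffs(tpr, fpr)
    print(f"TPR={tpr:.2f} FPR={fpr:.2f} attacker={atk:+.3f} defender={dfn:+.3f}")
```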
Related papers
- Guarding Against Malicious Biased Threats (GAMBiT): Experimental Design of Cognitive Sensors and Triggers with Behavioral Impact Analysis [17.809804870177192]
GAMBiT embeds insights from cognitive science into cyber environments through cognitive triggers. GAMBiT establishes a new paradigm in which the attacker's mind becomes part of the battlefield.
arXiv Detail & Related papers (2025-11-27T02:18:03Z)
- On the Trade-Off Between Transparency and Security in Adversarial Machine Learning [19.827079641936837]
We investigate the strategic effect of transparency for agents through the lens of transferable adversarial example attacks. In transferable adversarial example attacks, attackers maliciously perturb their inputs using surrogate models to fool a defender's target model. We find that attackers are more successful when they match the defender's decision.
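For readers unfamiliar with transferable adversarial examples, the following is a generic sketch of the attack pattern the abstract refers to: craft perturbations against a surrogate model, then measure success on the defender's target. The `surrogate`, `target`, and `adv_batches` names are hypothetical placeholders, and FGSM is just one common crafting method, not necessarily the paper's.

```python
import torch
import torch.nn.functional as F

def fgsm_on_surrogate(surrogate, x, y, eps=0.03):
    """Craft an adversarial example against the attacker's surrogate model."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(surrogate(x_adv), y)
    loss.backward()                                    # gradient w.r.t. the input
    return (x_adv + eps * x_adv.grad.sign()).detach()  # one signed-gradient step

@torch.no_grad()
def transfer_success_rate(target, adv_batches):
    """Fraction of surrogate-crafted examples that fool the defender's target."""
    hits = total = 0
    for x_adv, y in adv_batches:
        hits += (target(x_adv).argmax(dim=1) != y).sum().item()
        total += y.numel()
    return hits / total
```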
arXiv Detail & Related papers (2025-11-14T20:05:50Z)
- Bi-Level Game-Theoretic Planning of Cyber Deception for Cognitive Arbitrage [22.661656301757663]
This paper investigates how to exploit the cognitive vulnerabilities of Advanced Persistent Threat (APT) attackers. It proposes cognition-aware defenses that leverage windows of superiority to counteract attacks.
arXiv Detail & Related papers (2025-09-05T21:11:25Z)
- Preliminary Investigation into Uncertainty-Aware Attack Stage Classification [81.28215542218724]
This work addresses the problem of attack stage inference under uncertainty. We propose a classification approach based on Evidential Deep Learning (EDL), which models predictive uncertainty by outputting the parameters of a Dirichlet distribution over the possible stages. Preliminary experiments in a simulated environment demonstrate that the proposed model can accurately infer the stage of an attack with confidence.
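A brief sketch of the standard EDL construction the abstract mentions: the network outputs non-negative evidence, the Dirichlet parameters are alpha = evidence + 1, and the vacuity u = K / sum(alpha) serves as the uncertainty measure. This follows the common EDL formulation and may differ in detail from the paper's model.

```python
import torch
import torch.nn.functional as F

def edl_predict(logits):
    """Evidential head: non-negative evidence -> Dirichlet over K stages."""
    evidence = F.softplus(logits)            # e_k >= 0
    alpha = evidence + 1.0                   # Dirichlet parameters
    strength = alpha.sum(dim=-1, keepdim=True)
    probs = alpha / strength                 # Dirichlet mean = class probabilities
    uncertainty = logits.shape[-1] / strength.squeeze(-1)  # vacuity u = K / S
    return probs, uncertainty

logits = torch.tensor([[2.0, 0.1, -1.0, 0.3]])  # one sample, 4 attack stages
probs, u = edl_predict(logits)
print(probs, u)  # little total evidence -> u near 1 (high uncertainty)
```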
arXiv Detail & Related papers (2025-08-01T06:58:00Z)
- Interpreting Agent Behaviors in Reinforcement-Learning-Based Cyber-Battle Simulation Platforms [5.743789620999628]
We analyze two open source deep reinforcement learning agents submitted to the CAGE Challenge 2 cyber defense challenge. We demonstrate that one can gain interpretability of agent successes and failures by simplifying the complex state and action spaces. We discuss the realism of the challenge and ways that the CAGE Challenge 4 has addressed some of our concerns.
arXiv Detail & Related papers (2025-06-09T20:07:26Z)
- Chasing Moving Targets with Online Self-Play Reinforcement Learning for Safer Language Models [55.28518567702213]
Conventional language model (LM) safety alignment relies on a reactive, disjoint procedure: attackers exploit a static model, followed by defensive fine-tuning to patch exposed vulnerabilities. This sequential approach creates a mismatch: attackers overfit to obsolete defenses, while defenders perpetually lag behind emerging threats. We propose Self-RedTeam, an online self-play reinforcement learning algorithm in which an attacker agent and a defender agent co-evolve through continuous interaction.
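The co-evolution idea can be illustrated with a toy zero-sum game in which both players adapt simultaneously through self-play. This is only a minimal illustration using exponential-weights updates on matching pennies, not the Self-RedTeam algorithm itself.

```python
import math
import random

def self_play(rounds=2000, eta=0.05, seed=0):
    """Attacker and defender co-evolve by repeatedly playing each other."""
    rng = random.Random(seed)
    atk_w = [1.0, 1.0]   # attacker weights over actions {0, 1}
    dfn_w = [1.0, 1.0]   # defender weights over actions {0, 1}
    for _ in range(rounds):
        a = rng.choices([0, 1], weights=atk_w)[0]
        d = rng.choices([0, 1], weights=dfn_w)[0]
        atk_payoff = 1.0 if a != d else -1.0        # attacker wins on mismatch
        atk_w[a] *= math.exp(eta * atk_payoff)      # each side reinforces what
        dfn_w[d] *= math.exp(eta * -atk_payoff)     # worked against the other
    norm = lambda w: [x / sum(w) for x in w]
    return norm(atk_w), norm(dfn_w)  # both drift toward the 50/50 equilibrium

print(self_play())
```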
arXiv Detail & Related papers (2025-06-09T06:35:12Z)
- Concealment of Intent: A Game-Theoretic Analysis [15.387256204743407]
We present a scalable attack strategy: intent-hiding adversarial prompting, which conceals malicious intent through the composition of skills. Our analysis identifies equilibrium points and reveals structural advantages for the attacker. Empirically, we validate the attack's effectiveness on multiple real-world LLMs across a range of malicious behaviors.
arXiv Detail & Related papers (2025-05-27T07:59:56Z)
- Modeling Behavioral Preferences of Cyber Adversaries Using Inverse Reinforcement Learning [4.5456862813416565]
This paper presents a holistic approach to attacker preference modeling from system-level audit logs using inverse reinforcement learning (IRL). We learn the behavioral preferences of cyber adversaries from forensic data on their tools and techniques. Our results demonstrate for the first time that low-level forensic data can automatically reveal an adversary's subjective preferences.
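As a rough illustration of IRL over discrete attacker traces, the sketch below fits a linear reward by maximum-entropy feature matching on a toy trajectory set. The features, trajectories, and observation counts are invented for illustration; the paper's formulation over audit logs will differ.

```python
import numpy as np

# Toy max-ent IRL: recover reward weights from observed attacker traces,
# assuming a linear reward over hand-made binary features per trajectory.
trajs = np.array([[1, 0, 1],    # e.g. [uses_powershell, noisy_scan, exfil]
                  [0, 1, 0],
                  [1, 1, 1]], dtype=float)
expert_counts = np.array([8, 1, 3], dtype=float)   # observed frequencies

mu_expert = expert_counts @ trajs / expert_counts.sum()  # expert feature mean
w = np.zeros(3)
for _ in range(500):
    p = np.exp(trajs @ w)
    p /= p.sum()                         # max-ent distribution over trajectories
    w += 0.1 * (mu_expert - p @ trajs)   # feature-matching gradient step
print(w)  # larger weight = stronger inferred preference for that feature
```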
arXiv Detail & Related papers (2025-05-02T18:20:14Z)
- Deep Learning Model Security: Threats and Defenses [25.074630770554105]
Deep learning has transformed AI applications but faces critical security challenges.
This survey examines these vulnerabilities, detailing their mechanisms and impact on model integrity and confidentiality.
The survey concludes with future directions, emphasizing automated defenses, zero-trust architectures, and the security challenges of large AI models.
arXiv Detail & Related papers (2024-12-12T06:04:20Z)
- Criticality and Safety Margins for Reinforcement Learning [53.10194953873209]
We seek to define a criticality framework with both a quantifiable ground truth and a clear significance to users.
We introduce true criticality as the expected drop in reward when an agent deviates from its policy for n consecutive random actions.
We also introduce the concept of proxy criticality, a low-overhead metric that has a statistically monotonic relationship to true criticality.
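The definition of true criticality lends itself to a direct Monte Carlo estimate: compare on-policy returns from a state against returns after n random actions followed by the policy. The sketch below assumes a gym-like environment with a hypothetical `reset_to` helper and a simplified `step` signature; none of this is the paper's API.

```python
import random

def rollout(env, policy, s, random_steps=0, horizon=100, rng=random):
    """Return from state s, taking random actions for the first few steps."""
    ret, state, done = 0.0, env.reset_to(s), False   # reset_to is hypothetical
    for t in range(horizon):
        if done:
            break
        a = rng.choice(env.actions) if t < random_steps else policy(state)
        state, reward, done = env.step(a)            # simplified step signature
        ret += reward
    return ret

def true_criticality(env, policy, s, n, samples=200):
    """Expected reward drop from n consecutive random actions at state s."""
    on_policy = sum(rollout(env, policy, s) for _ in range(samples)) / samples
    deviated = sum(rollout(env, policy, s, n) for _ in range(samples)) / samples
    return on_policy - deviated   # large drop = critical state
```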
arXiv Detail & Related papers (2024-09-26T21:00:45Z)
- A Quantal Response Analysis of Defender-Attacker Sequential Security Games [1.3022753212679383]
We explore a scenario involving two sites and a sequential game between a defender and an attacker.
The attacker's objective is to target the site that maximizes the expected loss for the defender, taking into account the defender's security investments.
We consider quantal behavioral bias, where humans make errors in selecting efficient (pure) strategies.
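The quantal (logit) response model referenced here has a simple closed form, with P(a_i) proportional to exp(lambda * u_i). A small sketch, using illustrative utilities rather than the paper's game:

```python
import math

def quantal_response(utilities, lam):
    """Logit quantal response over a list of expected utilities.

    lam -> 0 gives uniform (fully noisy) play; lam -> infinity recovers
    the exact best response, i.e. no behavioral error.
    """
    m = max(utilities)                        # stabilize the exponentials
    weights = [math.exp(lam * (u - m)) for u in utilities]
    z = sum(weights)
    return [w / z for w in weights]

# Attacker choosing between two sites worth 5 and 3 in expected defender loss
# (illustrative numbers only).
print(quantal_response([5.0, 3.0], lam=0.0))  # [0.5, 0.5]: pure noise
print(quantal_response([5.0, 3.0], lam=2.0))  # mostly the higher-loss site
```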
arXiv Detail & Related papers (2024-08-02T00:40:48Z)
- On the Difficulty of Defending Contrastive Learning against Backdoor Attacks [58.824074124014224]
We show how contrastive backdoor attacks operate through distinctive mechanisms.
Our findings highlight the need for defenses tailored to the specificities of contrastive backdoor attacks.
arXiv Detail & Related papers (2023-12-14T15:54:52Z)
- Designing an attack-defense game: how to increase robustness of financial transaction models via a competition [69.08339915577206]
Given the escalating risks of malicious attacks in the finance sector, understanding adversarial strategies and robust defense mechanisms for machine learning models is critical.
We aim to investigate the current state and dynamics of adversarial attacks and defenses for neural network models that use sequential financial data as the input.
We have designed a competition that allows realistic and detailed investigation of problems in modern financial transaction data.
The participants compete directly against each other, so possible attacks and defenses are examined in close-to-real-life conditions.
arXiv Detail & Related papers (2023-08-22T12:53:09Z)
- Learning to Defend by Attacking (and Vice-Versa): Transfer of Learning in Cybersecurity Games [1.14219428942199]
We present a novel model of human decision-making inspired by the cognitive faculties of Instance-Based Learning Theory, Theory of Mind, and Transfer of Learning.
This model functions by learning from both roles in a security scenario: defender and attacker, and by making predictions of the opponent's beliefs, intentions, and actions.
Results from simulation experiments demonstrate the potential usefulness of cognitively inspired models of agents trained in attack and defense roles.
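Instance-Based Learning Theory has a standard blending mechanism that a short sketch can make concrete: past instances are weighted by ACT-R style activation, and an action's value is the retrieval-weighted mean of its stored outcomes. The parameter values and example instances below are assumptions, not the paper's settings.

```python
import math
import random

def activation(timestamps, now, d=0.5, sigma=0.25, rng=random):
    """ACT-R style activation: recency-weighted memory strength plus noise."""
    base = math.log(sum((now - t) ** (-d) for t in timestamps))
    return base + rng.gauss(0.0, sigma)

def blended_value(instances, now, tau=0.35):
    """instances: list of (outcome, timestamps) for one candidate action."""
    acts = [activation(ts, now) for _, ts in instances]
    weights = [math.exp(a / tau) for a in acts]     # Boltzmann retrieval
    z = sum(weights)
    return sum(w / z * outcome for w, (outcome, _) in zip(weights, instances))

# Example: defending paid off recently (+1 at t=8,9) and failed long ago
# (-1 at t=2), so the recent successes dominate the blended value.
print(blended_value([(1.0, [9, 8]), (-1.0, [2])], now=10))
```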
arXiv Detail & Related papers (2023-06-03T17:51:04Z)
- Protecting Split Learning by Potential Energy Loss [70.81375125791979]
We focus on the privacy leakage from the forward embeddings of split learning.
We propose the potential energy loss to make the forward embeddings more 'complicated'.
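One plausible reading of a "potential energy" regularizer is a Coulomb-like repulsion among same-class embeddings, which spreads them out so they stop clustering by label at the cut layer. The sketch below implements that reading; the paper's exact loss may differ.

```python
import torch

def potential_energy_loss(embeddings, labels, eps=1e-6):
    """Pairwise 1/distance repulsion among embeddings that share a label."""
    loss = embeddings.new_zeros(())
    for c in labels.unique():
        e = embeddings[labels == c]
        if e.shape[0] < 2:
            continue
        dist = torch.cdist(e, e) + eps               # pairwise distances
        mask = ~torch.eye(e.shape[0], dtype=torch.bool, device=e.device)
        loss = loss + (1.0 / dist[mask]).sum()       # Coulomb-like repulsion
    return loss

emb = torch.randn(16, 32, requires_grad=True)        # cut-layer embeddings
y = torch.randint(0, 4, (16,))                        # private labels
reg = potential_energy_loss(emb, y)                   # add to the task loss
reg.backward()                                        # with a small weight
```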
arXiv Detail & Related papers (2022-10-18T06:21:11Z)
- A Tale of HodgeRank and Spectral Method: Target Attack Against Rank Aggregation Is the Fixed Point of Adversarial Game [153.74942025516853]
The intrinsic vulnerability of rank aggregation methods is not well studied in the literature.
In this paper, we focus on the purposeful adversary who desires to designate the aggregated results by modifying the pairwise data.
The effectiveness of the suggested target attack strategies is demonstrated by a series of toy simulations and several real-world data experiments.
arXiv Detail & Related papers (2022-09-13T05:59:02Z)
- Targeted Attack on Deep RL-based Autonomous Driving with Learned Visual Patterns [18.694795507945603]
Recent studies demonstrated the vulnerability of control policies learned through deep reinforcement learning against adversarial attacks.
This paper investigates the feasibility of targeted attacks through visually learned patterns placed on physical objects in the environment.
arXiv Detail & Related papers (2021-09-16T04:59:06Z)
- Adversarial Visual Robustness by Causal Intervention [56.766342028800445]
Adversarial training is the de facto most promising defense against adversarial examples.
Yet, its passive nature inevitably prevents it from being immune to unknown attackers.
We provide a causal viewpoint of adversarial vulnerability: the cause is the confounder ubiquitously existing in learning.
arXiv Detail & Related papers (2021-06-17T14:23:54Z)
- Protecting Classifiers From Attacks. A Bayesian Approach [0.9449650062296823]
We provide an alternative Bayesian framework that accounts for the lack of precise knowledge about the attacker's behavior using adversarial risk analysis.
We propose a sampling procedure based on approximate Bayesian computation, in which we simulate the attacker's problem while accounting for our uncertainty about the elements of that problem.
For large-scale problems, we propose an alternative, scalable approach that can be used when dealing with differentiable classifiers.
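Approximate Bayesian computation in this setting reduces to a rejection loop: sample attacker parameters from the prior, simulate the attacker's behavior, and keep draws whose simulated output is close to what was observed. The prior, simulator, and distance in the toy example below are placeholder assumptions, not the paper's model.

```python
import random

def abc_rejection(observed, simulate, prior_sample, distance, n=10_000, tol=0.05):
    """Keep prior draws whose simulated behavior matches the observed data."""
    accepted = []
    for _ in range(n):
        theta = prior_sample()                   # draw attacker parameters
        if distance(simulate(theta), observed) < tol:
            accepted.append(theta)               # behaviorally plausible draw
    return accepted                              # approximate posterior sample

# Toy example: infer an attacker's per-round attack probability from an
# observed attack rate of 0.30 over 100 rounds.
rng = random.Random(0)
posterior = abc_rejection(
    observed=0.30,
    simulate=lambda p: sum(rng.random() < p for _ in range(100)) / 100,
    prior_sample=lambda: rng.random(),           # uniform prior on [0, 1]
    distance=lambda a, b: abs(a - b),
)
print(sum(posterior) / len(posterior))           # posterior mean near 0.30
```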
arXiv Detail & Related papers (2020-04-18T21:21:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.