A Theory of Hypergames on Graphs for Synthesizing Dynamic Cyber Defense with Deception
- URL: http://arxiv.org/abs/2008.03210v1
- Date: Fri, 7 Aug 2020 14:59:28 GMT
- Title: A Theory of Hypergames on Graphs for Synthesizing Dynamic Cyber Defense with Deception
- Authors: Abhishek N. Kulkarni and Jie Fu
- Abstract summary: We present an approach that uses formal methods to synthesize a reactive defense strategy in a cyber network equipped with a set of decoy systems.
We first generalize formal graphical security models (attack graphs) to incorporate the defender's countermeasures in a game-theoretic model, called an attack-defend game on a graph.
We introduce a class of hypergames to model the asymmetric information created by decoys in the attacker-defender interactions.
- Score: 24.11353445650682
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this chapter, we present an approach that uses formal methods to
synthesize a reactive defense strategy in a cyber network equipped with a set
of decoy systems. We first generalize formal graphical security models (attack
graphs) to incorporate the defender's countermeasures in a game-theoretic
model, called an attack-defend game on a graph. This game captures the dynamic
interactions between the defender and the attacker, and expresses their
defense and attack objectives in formal logic. Then, we introduce a class of
hypergames to model the asymmetric information created by decoys in the
attacker-defender interaction. Given qualitative security specifications in
formal logic, we show that solution concepts from hypergames and from reactive
synthesis in formal methods can be extended to synthesize an effective dynamic
defense strategy that uses cyber deception. The strategy takes advantage of
the attacker's misperception to ensure that the security specification is
satisfied, a guarantee that may not be achievable when information is
symmetric.
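The core synthesis step above can be made concrete. For a qualitative (e.g., reachability) objective on an attack-defend game graph, the defender's winning region is computable as a fixed point, the standard attractor computation from reactive synthesis. The sketch below, in Python, illustrates that computation on a toy turn-based game; the function name, state names, and example graph are illustrative assumptions, not constructs from the paper.

```python
# Minimal sketch (not from the paper): a turn-based two-player game on a
# graph with a reachability objective for the defender. The standard
# attractor fixed point computes the states from which the defender can
# force the play into `goal`, whatever the attacker does.

def defender_attractor(states, edges, defender_states, goal):
    """states: iterable of all game states
    edges: dict mapping each state to its set of successor states
    defender_states: states where the defender chooses the move
    goal: target states for the defender's reachability objective"""
    attractor = set(goal)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s in attractor:
                continue
            succ = edges.get(s, set())
            if not succ:
                continue
            if s in defender_states:
                # Defender moves: one successor in the attractor suffices.
                wins = any(t in attractor for t in succ)
            else:
                # Attacker moves: all successors must lie in the attractor.
                wins = all(t in attractor for t in succ)
            if wins:
                attractor.add(s)
                changed = True
    return attractor

# Toy network: the defender wants the intrusion to end up in a decoy.
states = {"init", "probe", "host", "decoy"}
edges = {
    "init": {"probe"},           # attacker scans the network
    "probe": {"host", "decoy"},  # defender controls what 'probe' reveals
    "host": {"host"},
    "decoy": {"decoy"},
}
print(defender_attractor(states, edges, {"probe"}, goal={"decoy"}))
# The attractor contains 'init', 'probe', and 'decoy': from the initial
# state the defender can steer the attacker into the decoy.
```

Roughly, the hypergame view layers on top of this: the attacker computes its strategy on its perceived game graph, in which decoys look like ordinary hosts, and the defender can exploit the mismatch between the true and perceived winning regions.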
Related papers
- MAGIC: A Co-Evolving Attacker-Defender Adversarial Game for Robust LLM Safety [28.246225272659917]
This paper introduces MAGIC, a novel multi-turn, multi-agent reinforcement learning framework. It formulates large language model safety alignment as an asymmetric adversarial game. The framework demonstrates superior defense success rates without compromising the helpfulness of the model.
arXiv Detail & Related papers (2026-02-02T02:12:28Z) - Attacking and Securing Community Detection: A Game-Theoretic Framework [22.20017945724223]
Adversarial graphs can cause deep graph models to fail on classification tasks. We propose novel attack and defense techniques for the community detection problem. These techniques have many applications in real-world scenarios, for example, protecting personal privacy in social networks.
arXiv Detail & Related papers (2025-12-12T08:17:33Z) - CyGATE: Game-Theoretic Cyber Attack-Defense Engine for Patch Strategy Optimization [73.13843039509386]
This paper presents CyGATE, a game-theoretic framework modeling attacker-defender interactions. CyGATE frames cyber conflicts as a partially observable stochastic game (POSG) across Cyber Kill Chain stages. The framework's flexible architecture enables extension to multi-agent scenarios.
arXiv Detail & Related papers (2025-08-01T09:53:06Z) - Chasing Moving Targets with Online Self-Play Reinforcement Learning for Safer Language Models [55.28518567702213]
Conventional language model (LM) safety alignment relies on a reactive, disjoint procedure: attackers exploit a static model, followed by defensive fine-tuning to patch exposed vulnerabilities. This sequential approach creates a mismatch: attackers overfit to obsolete defenses, while defenders perpetually lag behind emerging threats. We propose Self-RedTeam, an online self-play reinforcement learning algorithm in which attacker and defender agents co-evolve through continuous interaction.
arXiv Detail & Related papers (2025-06-09T06:35:12Z) - Concealment of Intent: A Game-Theoretic Analysis [15.387256204743407]
We present a scalable attack strategy: intent-hiding adversarial prompting, which conceals malicious intent through the composition of skills. Our analysis identifies equilibrium points and reveals structural advantages for the attacker. Empirically, we validate the attack's effectiveness on multiple real-world LLMs across a range of malicious behaviors.
arXiv Detail & Related papers (2025-05-27T07:59:56Z) - Learn to Disguise: Avoid Refusal Responses in LLM's Defense via a Multi-agent Attacker-Disguiser Game [28.33029508522531]
Malicious attackers induce large models to jailbreak and generate content containing illegal or privacy-invasive information.
Large models counter such attacks using techniques such as safety alignment.
We propose a multi-agent attacker-disguiser game approach that achieves a weak defense mechanism, allowing the large model both to reply safely to the attacker and to hide its defense intent.
arXiv Detail & Related papers (2024-04-03T07:43:11Z) - Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
arXiv Detail & Related papers (2023-12-20T05:06:01Z) - IDEA: Invariant Defense for Graph Adversarial Robustness [60.0126873387533]
We propose an Invariant causal DEfense method against adversarial Attacks (IDEA).
We derive node-based and structure-based invariance objectives from an information-theoretic perspective.
Experiments demonstrate that IDEA attains state-of-the-art defense performance under all five attacks on all five datasets.
arXiv Detail & Related papers (2023-05-25T07:16:00Z) - Are Defenses for Graph Neural Networks Robust? [72.1389952286628]
We show that most Graph Neural Network (GNN) defenses provide no or only marginal improvement over an undefended baseline.
We advocate using custom adaptive attacks as a gold standard and we outline the lessons we learned from successfully designing such attacks.
Our diverse collection of perturbed graphs forms a (black-box) unit test offering a first glance at a model's robustness.
arXiv Detail & Related papers (2023-01-31T15:11:48Z) - Game Theoretic Mixed Experts for Combinational Adversarial Machine Learning [10.368343314144553]
We provide a game-theoretic framework for ensemble adversarial attacks and defenses.
We propose three new attack algorithms, specifically designed to target defenses with randomized transformations, multi-model voting schemes, and adversarial detector architectures.
arXiv Detail & Related papers (2022-11-26T21:35:01Z) - I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences [0.1031296820074812]
We study model stealing attacks, assessing their performance and exploring corresponding defence techniques in different settings.
We propose a taxonomy for attack and defence approaches, and provide guidelines on how to select the right attack or defence based on the goal and available resources.
arXiv Detail & Related papers (2022-06-16T21:16:41Z) - Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study black-box adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z) - Learning Generative Deception Strategies in Combinatorial Masking Games [27.2744631811653]
One way deception can be employed is through obscuring, or masking, some of the information about how systems are configured.
We present a novel game-theoretic model of the resulting defender-attacker interaction, where the defender chooses a subset of attributes to mask, while the attacker responds by choosing an exploit to execute.
We present a novel highly scalable approach for approximately solving such games by representing the strategies of both players as neural networks.
arXiv Detail & Related papers (2021-09-23T20:42:44Z) - The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z) - Online Alternate Generator against Adversarial Attacks [144.45529828523408]
Deep learning models are notoriously sensitive to adversarial examples, which are synthesized by adding quasi-imperceptible noise to real images.
We propose a portable defense method, an online alternate generator, which does not need to access or modify the parameters of the target networks.
The proposed method works by synthesizing another image online from scratch for each input image, instead of removing or destroying the adversarial noise.
arXiv Detail & Related papers (2020-09-17T07:11:16Z)