On the complexity of sabotage games for network security
- URL: http://arxiv.org/abs/2312.13132v1
- Date: Wed, 20 Dec 2023 15:52:26 GMT
- Title: On the complexity of sabotage games for network security
- Authors: Dhananjay Raju, Georgios Bakirtzis, Ufuk Topcu
- Abstract summary: Securing dynamic networks against adversarial actions is challenging because of the need to anticipate and counter strategic disruptions.
Traditional game-theoretic models, while insightful, often fail to model the unpredictability and constraints of real-world threat assessment scenarios.
We refine sabotage games to reflect the realistic limitations of the saboteur and the network operator.
- Score: 18.406992961818368
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Securing dynamic networks against adversarial actions is challenging because of the need to anticipate and counter strategic disruptions by adversarial entities within complex network structures. Traditional game-theoretic models, while insightful, often fail to model the unpredictability and constraints of real-world threat assessment scenarios. We refine sabotage games to reflect the realistic limitations of the saboteur and the network operator. By transforming sabotage games into reachability problems, our approach allows applying existing computational solutions to model realistic restrictions on attackers and defenders within the game. Modifying sabotage games into dynamic network security problems successfully captures the nuanced interplay of strategy and uncertainty in dynamic network security. Theoretically, we extend sabotage games to model network security contexts and thoroughly explore if the additional restrictions raise their computational complexity, often the bottleneck of game theory in practical contexts. Practically, this research sets the stage for actionable insights for developing robust defense mechanisms by understanding what risks to mitigate in dynamically changing networks under threat.
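The abstract's reduction of sabotage games to reachability problems can be illustrated with a toy solver. The sketch below is a hypothetical minimal implementation, not the paper's actual construction: Runner moves along an edge, then Saboteur deletes any one remaining edge, and Runner wins by reaching the goal; the game is decided by exhaustive search over the (position, remaining-edge-set) arena.

```python
from functools import lru_cache

def runner_wins(edges, start, goal):
    """Decide a toy sabotage game by search over the
    (position, remaining-edge-set) reachability arena.

    Runner moves along a directed edge; then Saboteur deletes one
    edge. Runner wins on reaching `goal`. This brute-force sketch is
    exponential in the number of edges -- illustration only, not the
    paper's algorithm.
    """
    edges = frozenset(edges)

    @lru_cache(maxsize=None)
    def win(pos, remaining):
        if pos == goal:
            return True
        for (u, v) in remaining:
            if u != pos:
                continue
            if v == goal:
                return True  # goal reached before Saboteur can act
            # Runner's move to v must survive every possible deletion.
            if all(win(v, remaining - {e}) for e in remaining):
                return True
        return False

    return win(start, edges)
```

For example, a direct edge to the goal wins immediately, while a two-step path loses because the Saboteur deletes the second edge after Runner's first move.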
Related papers
- CyGATE: Game-Theoretic Cyber Attack-Defense Engine for Patch Strategy Optimization [73.13843039509386]
This paper presents CyGATE, a game-theoretic framework modeling attacker-defender interactions. CyGATE frames cyber conflicts as a partially observable stochastic game (POSG) across Cyber Kill Chain stages. The framework's flexible architecture enables extension to multi-agent scenarios.
arXiv Detail & Related papers (2025-08-01T09:53:06Z) - A Multi-Resolution Dynamic Game Framework for Cross-Echelon Decision-Making in Cyber Warfare [15.972165653254734]
Cyber warfare has become a critical dimension of modern conflict, driven by society's increasing dependence on interconnected digital and physical infrastructure. Effective cyber defense often requires decision-making at different echelons, where the tactical layer focuses on detailed actions such as techniques, tactics, and procedures, while the strategic layer addresses long-term objectives and coordinated planning. We propose a multi-resolution dynamic game framework in which the tactical layer captures fine-grained interactions using high-resolution extensive-form game trees. This framework supports scalable reasoning and planning across different levels of abstraction through zoom-in and zoom-out operations that adjust the granularity of analysis.
arXiv Detail & Related papers (2025-07-02T17:42:34Z) - General Autonomous Cybersecurity Defense: Learning Robust Policies for Dynamic Topologies and Diverse Attackers [3.6956995102043164]
Autonomous cybersecurity defense (ACD) systems have become essential for real-time threat detection and response with optional human intervention. Existing ACD systems rely on limiting assumptions, particularly the stationarity of the underlying network dynamics. This work explores methods for developing agents to learn generalizable policies across dynamic network environments.
arXiv Detail & Related papers (2025-06-28T01:12:13Z) - Quantitative Resilience Modeling for Autonomous Cyber Defense [7.6078202493877205]
Cyber resilience is the ability of a system to recover from an attack with minimal impact on system operations.
There are no formal definitions of resilience applicable to diverse network topologies and attack patterns.
We propose a quantifiable formulation of resilience that considers multiple defender operational goals.
arXiv Detail & Related papers (2025-03-04T16:52:25Z) - A Review of the Duality of Adversarial Learning in Network Intrusion: Attacks and Countermeasures [0.0]
Adversarial attacks, particularly those targeting vulnerabilities in deep learning models, present a nuanced and substantial threat to cybersecurity.
Our study delves into adversarial learning threats such as Data Poisoning, Test Time Evasion, and Reverse Engineering.
Our research lays the groundwork for strengthening defense mechanisms to address the potential breaches in network security and privacy posed by adversarial attacks.
arXiv Detail & Related papers (2024-12-18T14:21:46Z) - Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in Red Teaming GenAI [52.138044013005]
As generative AI, particularly large language models (LLMs), becomes increasingly integrated into production applications, new attack surfaces and vulnerabilities emerge, putting a focus on adversarial threats in natural language and multi-modal systems.
Red-teaming has gained importance in proactively identifying weaknesses in these systems, while blue-teaming works to protect against such adversarial attacks.
This work aims to bridge the gap between academic insights and practical security measures for the protection of generative AI systems.
arXiv Detail & Related papers (2024-09-23T10:18:10Z) - Embodied Laser Attack: Leveraging Scene Priors to Achieve Agent-based Robust Non-contact Attacks [13.726534285661717]
This paper introduces the Embodied Laser Attack (ELA), a novel framework that dynamically tailors non-contact laser attacks.
For the perception module, ELA has innovatively developed a local perspective transformation network, based on the intrinsic prior knowledge of traffic scenes.
For the decision and control module, ELA trains an attack agent with data-driven reinforcement learning instead of adopting time-consuming algorithms.
arXiv Detail & Related papers (2023-12-15T06:16:17Z) - Towards Improving Robustness Against Common Corruptions in Object Detectors Using Adversarial Contrastive Learning [10.27974860479791]
This paper proposes an innovative adversarial contrastive learning framework to enhance neural network robustness simultaneously against adversarial attacks and common corruptions.
By focusing on improving performance under adversarial and real-world conditions, our approach aims to bolster the robustness of neural networks in safety-critical applications.
arXiv Detail & Related papers (2023-11-14T06:13:52Z) - Designing an attack-defense game: how to increase robustness of financial transaction models via a competition [69.08339915577206]
Given the escalating risks of malicious attacks in the finance sector, understanding adversarial strategies and robust defense mechanisms for machine learning models is critical.
We aim to investigate the current state and dynamics of adversarial attacks and defenses for neural network models that use sequential financial data as the input.
We have designed a competition that allows realistic and detailed investigation of problems in modern financial transaction data.
The participants compete directly against each other, so possible attacks and defenses are examined in close-to-real-life conditions.
arXiv Detail & Related papers (2023-08-22T12:53:09Z) - The Critical Node Game [7.392707962173127]
We introduce a game-theoretic model that assesses the cyber-security risk of cloud networks.
Our approach aims to minimize the unexpected network disruptions caused by malicious cyber-attacks under uncertainty.
arXiv Detail & Related papers (2023-03-10T14:48:32Z) - Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study black-box adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z) - Autonomous Attack Mitigation for Industrial Control Systems [25.894883701063055]
Defending computer networks from cyber attack requires timely responses to alerts and threat intelligence.
We present a deep reinforcement learning approach to autonomous response and recovery in large industrial control networks.
arXiv Detail & Related papers (2021-11-03T18:08:06Z) - The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z) - Network Defense is Not a Game [0.0]
Research seeks to apply Artificial Intelligence to scale and extend the capabilities of human operators to defend networks.
Our position is that network defense is better characterized as a collection of games with uncertain and possibly drifting rules.
We propose to define network defense tasks as distributions of network environments.
arXiv Detail & Related papers (2021-04-20T21:52:51Z) - Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness [97.67477497115163]
We use mode connectivity to study the adversarial robustness of deep neural networks.
Our experiments cover various types of adversarial attacks applied to different network architectures and datasets.
Our results suggest that mode connectivity offers a holistic tool and practical means for evaluating and improving adversarial robustness.
arXiv Detail & Related papers (2020-04-30T19:12:50Z) - Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness [79.47619798416194]
Learn2Perturb is an end-to-end feature perturbation learning approach for improving the adversarial robustness of deep neural networks.
Inspired by Expectation-Maximization, an alternating back-propagation training algorithm is introduced to train the network and noise parameters consecutively.
arXiv Detail & Related papers (2020-03-02T18:27:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.