SPEAR: Security Posture Evaluation using AI Planner-Reasoning on Attack-Connectivity Hypergraphs
- URL: http://arxiv.org/abs/2506.01227v1
- Date: Mon, 02 Jun 2025 00:38:47 GMT
- Title: SPEAR: Security Posture Evaluation using AI Planner-Reasoning on Attack-Connectivity Hypergraphs
- Authors: Rakesh Podder, Turgay Caglar, Shadaab Kawnain Bashir, Sarath Sreedharan, Indrajit Ray, Indrakshi Ray
- Abstract summary: SPEAR is a formal framework with tool support for security posture evaluation and analysis. It uses the causal formalism of AI planning to model vulnerabilities and configurations in a networked system. It identifies a set of diverse security hardening strategies that can be presented in a manner understandable to the domain expert.
- Score: 15.590901699441645
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph-based frameworks are often used in network hardening to help a cyber defender understand how a network can be attacked and how the best defenses can be deployed. However, existing approaches still lack support for incorporating network connectivity parameters into the attack graph, reasoning about the attack graph when complete information is not available, presenting suggestions to the system administrator in an understandable format, and allowing what-if analysis of various scenarios and attacker motives. We fill this gap by presenting SPEAR, a formal framework with tool support for security posture evaluation and analysis that keeps the human in the loop. SPEAR uses the causal formalism of AI planning to model vulnerabilities and configurations in a networked system. It automatically converts network configurations and vulnerability descriptions into planning models expressed in the Planning Domain Definition Language (PDDL). SPEAR identifies a set of diverse security hardening strategies that can be presented in a manner understandable to the domain expert. These allow the administrator to explore the network hardening solution space in a systematic fashion, evaluate the impact of each solution, and compare alternatives.
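The abstract describes compiling network configurations and vulnerability descriptions into PDDL planning models. Below is a minimal, hypothetical sketch of that idea in Python; the vulnerability record fields and the predicate names (connected, user_access, root_access) are assumptions for illustration, not SPEAR's actual encoding.

```python
# Illustrative sketch only (not SPEAR's code): render a simplified
# vulnerability record as a PDDL-style exploit action over host predicates.

from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str     # e.g. "CVE-2024-0001"
    service: str    # service that must be reachable on the target
    pre_priv: str   # privilege required on the source host
    post_priv: str  # privilege gained on the target host

def to_pddl_action(vuln: Vulnerability) -> str:
    """Render one exploit as a PDDL action over host/privilege predicates."""
    name = vuln.cve_id.lower().replace("-", "_")
    return f"""(:action exploit_{name}
  :parameters (?src ?dst - host)
  :precondition (and (connected ?src ?dst {vuln.service})
                     ({vuln.pre_priv} ?src))
  :effect ({vuln.post_priv} ?dst))"""

if __name__ == "__main__":
    v = Vulnerability("CVE-2024-0001", "ssh", "user_access", "root_access")
    print(to_pddl_action(v))
```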
Related papers
- Integrating Network and Attack Graphs for Service-Centric Impact Analysis [0.0]
We present a novel methodology for modelling, visualising, and analysing cyber threats, attack paths, and their impact on user services in networks of digital devices and the services they provide. Using probabilistic methods to track the propagation of an attack through attack graphs, via the service or application layers, and on physical communication networks, our model enables us to analyse cyber attacks at different levels of detail.
arXiv Detail & Related papers (2025-07-01T10:29:45Z)
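As a rough illustration of the probabilistic propagation described in the entry above (not the paper's actual model), the sketch below pushes compromise probabilities through a small acyclic attack graph, combining incoming paths with a noisy-OR under an independence assumption; the hosts and edge probabilities are made up.

```python
# Sketch under stated assumptions: propagate compromise probabilities through
# an acyclic attack graph. Each edge (u, v) carries the probability that an
# attacker at u exploits v; incoming paths are combined with a noisy-OR.

from graphlib import TopologicalSorter

edges = {                      # hypothetical attack graph: u -> {v: P(exploit)}
    "internet": {"web": 0.8},
    "web": {"app": 0.6, "db": 0.3},
    "app": {"db": 0.7},
    "db": {},
}

def compromise_probabilities(edges, entry="internet"):
    preds = {v: [] for v in edges}
    for u, outs in edges.items():
        for v, p in outs.items():
            preds[v].append((u, p))
    prob = {entry: 1.0}
    order = TopologicalSorter({v: [u for u, _ in ps] for v, ps in preds.items()}).static_order()
    for v in order:
        if v == entry:
            continue
        # P(compromise) = 1 - prod over predecessors of (1 - P(pred) * P(edge))
        p_not = 1.0
        for u, p_edge in preds[v]:
            p_not *= 1.0 - prob.get(u, 0.0) * p_edge
        prob[v] = 1.0 - p_not
    return prob

print(compromise_probabilities(edges))
```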
- Modeling Interdependent Cybersecurity Threats Using Bayesian Networks: A Case Study on In-Vehicle Infotainment Systems [0.0]
This paper reviews the application of Bayesian Networks (BNs) in cybersecurity risk modeling. A case study is presented in which a STRIDE-based attack tree for an automotive In-Vehicle Infotainment (IVI) system is transformed into a BN.
arXiv Detail & Related papers (2025-05-14T01:04:45Z)
- An Attentive Graph Agent for Topology-Adaptive Cyber Defence [1.0812794909131096]
We develop a custom version of the Cyber Operations Research Gym (CybORG) environment, encoding network state as a directed graph. We employ a Graph Attention Network (GAT) architecture to process node, edge, and global features, and adapt its output to be compatible with policy gradient methods in reinforcement learning. We demonstrate that GAT defensive policies can be trained using our low-level directed graph observations, even when unexpected connections arise during simulation.
arXiv Detail & Related papers (2025-01-24T18:22:37Z)
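The entry above encodes network state as a directed graph and processes it with a Graph Attention Network. The following is a minimal single-head attention layer in plain PyTorch as a rough sketch of that idea; the feature sizes, toy graph, and per-node policy head are assumptions, and the paper's architecture additionally incorporates edge and global features.

```python
# Minimal single-head graph-attention layer (sketch, not the paper's model).

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGATLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, edge_index):
        # x: [num_nodes, in_dim]; edge_index: [2, num_edges] (src -> dst)
        h = self.proj(x)
        src, dst = edge_index
        scores = F.leaky_relu(self.attn(torch.cat([h[src], h[dst]], dim=-1))).squeeze(-1)
        # normalise scores over the incoming edges of each destination node
        alpha = torch.zeros_like(scores)
        for node in dst.unique():
            mask = dst == node
            alpha[mask] = torch.softmax(scores[mask], dim=0)
        # aggregate attention-weighted messages onto destination nodes
        out = torch.zeros_like(h)
        out.index_add_(0, dst, alpha.unsqueeze(-1) * h[src])
        return F.elu(out)

# Toy directed network-state graph: 4 hosts, 5 observed links
x = torch.randn(4, 8)                       # per-host observation features
edge_index = torch.tensor([[0, 1, 1, 2, 3],
                           [1, 0, 2, 3, 1]])
policy_logits = nn.Linear(16, 3)(TinyGATLayer(8, 16)(x, edge_index))
print(policy_logits.shape)                  # [4, 3] per-node action logits
```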
- Exploring Answer Set Programming for Provenance Graph-Based Cyber Threat Detection: A Novel Approach [4.302577059401172]
Provenance graphs are useful tools for representing system-level activities in cybersecurity. This paper presents a novel approach using ASP to model and analyze provenance graphs.
arXiv Detail & Related papers (2025-01-24T14:57:27Z)
- It Is Time To Steer: A Scalable Framework for Analysis-driven Attack Graph Generation [50.06412862964449]
An Attack Graph (AG) represents the best-suited solution to support cyber risk assessment for multi-step attacks on computer networks.
Current solutions address the generation problem from the algorithmic perspective and defer the analysis until after generation is complete.
This paper rethinks the classic AG analysis through a novel workflow in which the analyst can query the system anytime.
arXiv Detail & Related papers (2023-12-27T10:44:58Z)
- What Planning Problems Can A Relational Neural Network Solve? [91.53684831950612]
We present a circuit complexity analysis for relational neural networks representing policies for planning problems.
We show that there are three general classes of planning problems, in terms of the growth of circuit width and depth.
We also illustrate the utility of this analysis for designing neural networks for policy learning.
arXiv Detail & Related papers (2023-12-06T18:47:28Z)
- Efficient Network Representation for GNN-based Intrusion Detection [2.321323878201932]
Recent decades have seen a growth in the number of cyber-attacks, with severe economic and privacy damage.
We propose a novel network representation as a graph of flows that aims to provide relevant topological information for the intrusion detection task.
We present a Graph Neural Network (GNN) based framework responsible for exploiting the proposed graph structure.
arXiv Detail & Related papers (2023-09-11T16:10:12Z)
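As a rough sketch of a "graph of flows" representation like the one in the entry above (the paper's exact construction and feature set differ), the snippet below turns hypothetical flow records into a graph in which flows are nodes and two flows are connected when they share an endpoint, giving a GNN topological context to exploit.

```python
# Simplified "graph of flows" construction (illustrative assumptions only).

import networkx as nx

flows = [  # hypothetical NetFlow-like records
    {"id": 0, "src": "10.0.0.5", "dst": "10.0.0.9", "bytes": 1200, "port": 443},
    {"id": 1, "src": "10.0.0.9", "dst": "10.0.0.7", "bytes": 90,   "port": 22},
    {"id": 2, "src": "10.0.0.5", "dst": "10.0.0.7", "bytes": 4000, "port": 80},
]

g = nx.Graph()
for f in flows:
    g.add_node(f["id"], bytes=f["bytes"], port=f["port"])   # per-flow features

for a in flows:
    for b in flows:
        if a["id"] < b["id"] and {a["src"], a["dst"]} & {b["src"], b["dst"]}:
            g.add_edge(a["id"], b["id"])                     # shared endpoint

print(g.number_of_nodes(), g.number_of_edges())              # 3 nodes, 3 edges
```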
- Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training [62.77129284830945]
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defensive technique in adversarial settings and show that the robustness of the DL-based wireless system against attacks improves significantly.
arXiv Detail & Related papers (2022-06-14T04:55:11Z)
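The entry above studies adversarial attacks on a DL-based regression model and adversarial training as a defense. Below is a minimal FGSM-style sketch against a toy regressor in PyTorch; the network, data, and epsilon are placeholders rather than the paper's massive-MIMO power-allocation setup.

```python
# FGSM-style perturbation of a toy regression model (illustrative sketch).

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))  # toy regressor
loss_fn = nn.MSELoss()

x = torch.randn(8, 16)          # stand-in for channel features
y = torch.rand(8, 4)            # stand-in for target power allocations
eps = 0.05

# FGSM: perturb the input in the direction that increases the regression loss
x_adv = x.clone().requires_grad_(True)
loss = loss_fn(model(x_adv), y)
loss.backward()
x_adv = (x_adv + eps * x_adv.grad.sign()).detach()

print("clean loss:      ", loss_fn(model(x), y).item())
print("adversarial loss:", loss_fn(model(x_adv), y).item())

# Adversarial training would mix (x_adv, y) pairs back into the training batches.
```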
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from conventional backdoor attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
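To make the idea of a subgraph trigger from the Graph Backdoor entry concrete, the toy sketch below splices a small fixed pattern (a triangle with distinctive node features) into an input graph; it is a simplified stand-in for intuition only, not GTA's actual trigger-generation mechanism.

```python
# Toy subgraph-trigger injection (illustration only, not GTA's method).

import networkx as nx

def inject_trigger(g: nx.Graph, attach_to: int) -> nx.Graph:
    poisoned = g.copy()
    base = max(poisoned.nodes) + 1
    trigger = [base, base + 1, base + 2]
    for n in trigger:
        poisoned.add_node(n, feat=0.99)                        # distinctive feature
    poisoned.add_edges_from([(trigger[0], trigger[1]),
                             (trigger[1], trigger[2]),
                             (trigger[2], trigger[0])])        # fixed triangle topology
    poisoned.add_edge(attach_to, trigger[0])                   # splice into the graph
    return poisoned

g = nx.path_graph(5)
nx.set_node_attributes(g, 0.1, "feat")
print(inject_trigger(g, attach_to=2).number_of_nodes())        # 8
```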