Consistent and Compatible Modelling of Cyber Intrusions and Incident Response Demonstrated in the Context of Malware Attacks on Critical Infrastructure
- URL: http://arxiv.org/abs/2505.16398v1
- Date: Thu, 22 May 2025 08:49:35 GMT
- Title: Consistent and Compatible Modelling of Cyber Intrusions and Incident Response Demonstrated in the Context of Malware Attacks on Critical Infrastructure
- Authors: Peter Maynard, Yulia Cherdantseva, Avi Shaked, Pete Burnap, Arif Mehmood
- Abstract summary: Cyber Security Incident Response (IR) Playbooks are used to capture the steps required to recover from a cyber intrusion. We take nine critical national infrastructure intrusion models and transform them into models of the same format as IR playbooks. This results in improved intrusion models and tighter coupling between IR playbooks and threat modelling.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cyber Security Incident Response (IR) playbooks capture the steps required to recover from a cyber intrusion. Individual IR playbooks should focus on a specific type of incident and be aligned with the architecture of the system under attack. Intrusion modelling focuses on a specific potential cyber intrusion and is used to identify where and what countermeasures are needed; the resulting intrusion models are expected to support effective IR, ideally by feeding into the design of IR playbooks. IR playbooks and intrusion models, however, are created in isolation and at different stages of the system's lifecycle. We take nine critical national infrastructure intrusion models - expressed using Sequential AND Attack Trees - and transform them into models of the same format as IR playbooks. We use the Security Modelling Framework for modelling attacks and playbooks, and for demonstrating the feasibility of tighter integration between risk assessment and IR at the modelling level. This results in improved intrusion models and tighter coupling between IR playbooks and threat modelling, which - as we demonstrate - yields novel insights into the analysis of attacks and response actions. The main contributions of this paper are (a) a novel way of representing attack trees using the Security Modelling Framework, (b) a new tool for converting Sequential AND attack trees into models compatible with playbooks, and (c) examples of nine intrusion models represented using the Security Modelling Framework.
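To make the idea behind contribution (b) concrete, here is a minimal sketch of flattening a Sequential AND (SAND) attack tree into ordered, playbook-style steps. The node types, class names, and the toy ICS intrusion fragment below are illustrative assumptions; the paper's actual tool emits Security Modelling Framework models, whose format is not reproduced here.

```python
# A minimal sketch of converting a SAND attack tree into ordered,
# playbook-style steps. Node types and the example tree are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Node:
    name: str                       # e.g. "Spearphish engineer workstation"
    op: str = "LEAF"                # "LEAF", "SAND" (ordered), or "OR" (alternatives)
    children: list["Node"] = field(default_factory=list)


def to_playbook_steps(node: Node, path: str = "1") -> list[tuple[str, str]]:
    """Walk the tree depth-first, emitting (step_id, action) pairs.

    SAND children keep their left-to-right order, matching the temporal
    ordering a responder would mirror in an IR playbook; OR children become
    alternative branches distinguished by a letter suffix.
    """
    if node.op == "LEAF":
        return [(path, node.name)]
    steps = []
    for i, child in enumerate(node.children):
        suffix = f"{path}.{i + 1}" if node.op == "SAND" else f"{path}{chr(97 + i)}"
        steps.extend(to_playbook_steps(child, suffix))
    return steps


# Hypothetical fragment of an ICS malware intrusion, for illustration only.
tree = Node("Disrupt process control", "SAND", [
    Node("Gain IT foothold", "OR", [
        Node("Spearphish engineer workstation"),
        Node("Exploit exposed VPN gateway"),
    ]),
    Node("Pivot to OT network"),
    Node("Deploy malicious logic to PLC"),
])

for step_id, action in to_playbook_steps(tree):
    print(step_id, action)
```

The ordering constraint is the key design point: a plain AND tree loses the attack sequence, while SAND preserves it, which is what makes a step-by-step playbook derivable at all.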
Related papers
- World Models for Cognitive Agents: Transforming Edge Intelligence in Future Networks [55.90051810762702]
We present a comprehensive overview of world models, highlighting their architecture, training paradigms, and applications across prediction, generation, planning, and causal reasoning. We propose Wireless Dreamer, a novel world model-based reinforcement learning framework tailored for wireless edge intelligence optimization.
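The world-model pattern behind frameworks like Wireless Dreamer can be sketched generically: learn a transition model from logged interactions, then pick actions by imagined rollouts. Everything below (the toy channel environment, the nearest-neighbour "model", the action set) is an illustrative stand-in, not the paper's architecture.

```python
# A generic world-model RL skeleton: act, log transitions, and plan by
# rolling out a crude learned dynamics model. All components are toy stand-ins.
import random

history = []                      # (state, action, next_state, reward) tuples


def env_step(state, action):      # toy stand-in for a wireless environment
    next_state = max(0.0, min(1.0, state + 0.1 * action + random.gauss(0, 0.02)))
    return next_state, -abs(0.7 - next_state)   # reward: track channel quality 0.7


def model_predict(state, action): # crude "learned" model: nearest stored transition
    if not history:
        return state, 0.0
    s, a, ns, r = min(history, key=lambda t: abs(t[0] - state) + abs(t[1] - action))
    return ns, r


def plan(state, horizon=3):       # choose the action with the best imagined rollout
    def rollout(a0):
        s, total = state, 0.0
        for k in range(horizon):
            a = a0 if k == 0 else random.choice([-1, 0, 1])
            s, r = model_predict(s, a)
            total += r
        return total
    return max([-1, 0, 1], key=rollout)


state = 0.2
for t in range(200):
    action = plan(state) if t > 20 else random.choice([-1, 0, 1])
    next_state, reward = env_step(state, action)
    history.append((state, action, next_state, reward))
    state = next_state
print(f"final state {state:.2f} (target 0.70)")
```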
arXiv Detail & Related papers (2025-05-31T06:43:00Z)
- Towards Scalable and Robust Model Versioning [30.249607205048125]
Malicious incursions aimed at gaining access to deep learning models are on the rise.
We show how to generate multiple versions of a model that possess different attack properties.
We show theoretically that this can be accomplished by incorporating parameterized hidden distributions into the model training data.
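As a toy picture of that idea (not the paper's algorithm), each version below is trained on the real data plus synthetic points drawn from a per-version hidden distribution, so the versions agree on the task while differing in boundary details:

```python
# Toy illustration of versioning via hidden distributions: every version adds
# a small, differently-parameterized synthetic cloud to the training set.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)      # simple separable task


def train_logreg(X, y, steps=500, lr=0.1):
    w = np.zeros(X.shape[1] + 1)
    Xb = np.hstack([X, np.ones((len(X), 1))])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w


versions = []
for v in range(3):
    mu = rng.normal(0, 3, 2)                   # hidden, per-version parameter
    Xh = rng.normal(mu, 0.3, (40, 2))          # synthetic hidden-distribution points
    yh = np.full(40, v % 2, dtype=float)       # arbitrary hidden label
    w = train_logreg(np.vstack([X, Xh]), np.concatenate([y, yh]))
    versions.append(w)

for v, w in enumerate(versions):
    print(f"version {v}: boundary normal ~ {w[:2].round(2)}")
```

The differing boundaries are what give each version "different attack properties": an adversarial example crafted against one version need not transfer to the next.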
arXiv Detail & Related papers (2024-01-17T19:55:49Z)
- JAB: Joint Adversarial Prompting and Belief Augmentation [81.39548637776365]
We introduce a joint framework in which we probe and improve the robustness of a black-box target model via adversarial prompting and belief augmentation.
This framework utilizes an automated red teaming approach to probe the target model, along with a belief augmenter to generate instructions for the target model to improve its robustness to those adversarial probes.
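The joint loop can be sketched schematically; `target`, `red_team`, `augment_beliefs`, and `judge` are hypothetical callables standing in for the paper's components, not JAB's actual models or scoring:

```python
# Schematic joint loop: probe the defended target, and whenever the probe
# succeeds, grow the defensive belief prompt. All callables are placeholders.
def jab_loop(target, red_team, augment_beliefs, judge, rounds=3):
    beliefs = ""                                       # defensive instructions so far
    for _ in range(rounds):
        attack = red_team(beliefs)                     # probe the current defences
        reply = target(beliefs + attack)
        if judge(reply):                               # unsafe output found
            beliefs += augment_beliefs(attack, reply)  # patch with a new instruction
    return beliefs
```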
arXiv Detail & Related papers (2023-11-16T00:35:54Z)
- SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models [74.58014281829946]
We analyze the effectiveness of several representative attacks/defenses, including model stealing attacks, membership inference attacks, and backdoor detection on public models.
Our evaluation empirically shows the performance of these attacks/defenses can vary significantly on public models compared to self-trained models.
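One attack family in such evaluations, membership inference, reduces in its classic baseline form to a loss-threshold test. The sketch below assumes a generic `loss_fn` and a per-model calibrated threshold; it is a textbook baseline, not SecurityNet's evaluation code:

```python
# Classic loss-threshold membership inference baseline: members of the
# training set tend to have lower loss than unseen samples.
import numpy as np

def membership_scores(loss_fn, samples):
    return np.array([-loss_fn(x, y) for x, y in samples])   # low loss => member

def mia_advantage(loss_fn, members, non_members, threshold):
    tpr = np.mean(membership_scores(loss_fn, members) > threshold)
    fpr = np.mean(membership_scores(loss_fn, non_members) > threshold)
    return tpr - fpr   # ~0 for a safe model, approaching 1 for full leakage
```

Running such a test against public checkpoints versus freshly self-trained models is exactly where the reported performance gaps would show up.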
arXiv Detail & Related papers (2023-10-19T11:49:22Z)
- FLIRT: Feedback Loop In-context Red Teaming [79.63896510559357]
We propose an automatic red teaming framework that evaluates a given black-box model and exposes its vulnerabilities.
Our framework uses in-context learning in a feedback loop to red team models and trigger them into unsafe content generation.
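The feedback loop reduces to a small skeleton; `red_lm`, `target`, and `unsafe` are hypothetical callables, and successful attack prompts become in-context exemplars for the next round:

```python
# Skeleton of in-context red teaming with a feedback loop: attacks that
# trigger unsafe output are recycled as exemplars. Callables are placeholders.
def flirt_loop(red_lm, target, unsafe, seed_prompts, rounds=5):
    exemplars = list(seed_prompts)
    found = []
    for _ in range(rounds):
        prompt = red_lm(exemplars)          # generate an attack from in-context examples
        output = target(prompt)
        if unsafe(output):
            found.append(prompt)
            exemplars.append(prompt)        # successful attacks sharpen the next round
    return found
```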
arXiv Detail & Related papers (2023-08-08T14:03:08Z)
- AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Interpretable Models [1.8752655643513647]
XAI tools can increase a model's vulnerability to model extraction attacks, which is a concern when model owners prefer black-box access.
We propose a novel retraining (learning) based model extraction attack framework against interpretable models under black-box settings.
We show that AUTOLYCUS is highly effective, requiring significantly fewer queries compared to state-of-the-art attacks.
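The gist of explanation-guided extraction can be sketched as follows (an illustration, not AUTOLYCUS itself): use returned attributions to choose informative follow-up queries, then fit a surrogate model. `query` and `explain` are assumed black-box interfaces.

```python
# Explanation-guided extraction sketch: explanations steer query selection,
# reducing the number of label queries needed to fit a surrogate.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def extract(query, explain, n_seed=50, n_refine=100, dim=4, rng=None):
    rng = rng or np.random.default_rng(0)
    X = rng.uniform(-1, 1, (n_seed, dim))
    for _ in range(n_refine):
        x = X[rng.integers(len(X))].copy()
        imp = explain(x)                    # e.g. per-feature attributions
        j = int(np.argmax(np.abs(imp)))     # perturb the most influential feature
        x[j] += rng.normal(0, 0.3)
        X = np.vstack([X, x])
    y = np.array([query(x) for x in X])     # one label query per sample
    return DecisionTreeClassifier(max_depth=5).fit(X, y)
```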
arXiv Detail & Related papers (2023-02-04T13:23:39Z)
- Recent improvements of ASR models in the face of adversarial attacks [28.934863462633636]
Speech recognition models are vulnerable to adversarial attacks.
We show that the relative strengths of different attack algorithms vary considerably when changing the model architecture.
We release our source code as a package to help future research evaluate attacks and defenses.
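The cross-architecture comparison described here boils down to an attack-by-model evaluation grid; `models`, `attacks`, and `dataset` below are placeholders for real ASR systems, attack algorithms (e.g. PGD, CW), and audio samples, and the interface is assumed, not the paper's package API:

```python
# Skeleton of a cross-architecture robustness grid: run every attack against
# every model and tabulate success rates. All objects are hypothetical stubs.
def evaluate(models, attacks, dataset):
    results = {}
    for m_name, model in models.items():
        for a_name, attack in attacks.items():
            hits = sum(
                model.transcribe(attack(model, audio)) != label
                for audio, label in dataset
            )
            results[(m_name, a_name)] = hits / len(dataset)
    return results   # relative rankings of attacks often change across models
```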
arXiv Detail & Related papers (2022-03-29T22:40:37Z)
- ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on modular, reusable software, ML-Doctor, which enables ML model owners to assess the risks of deploying their models.
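The modular structure such a tool implies can be sketched as plug-in attack modules sharing one interface; the class names and scoring convention below are illustrative assumptions, not ML-Doctor's API:

```python
# Plug-in risk-assessment harness: each attack implements one interface,
# so all four inference attacks can be run in a single pass.
class AttackModule:
    name = "base"
    def run(self, model, aux_data) -> float:   # returns a risk score in [0, 1]
        raise NotImplementedError

def assess(model, aux_data, modules):
    report = {}
    for module in modules:                     # e.g. membership inference, model
        report[module.name] = module.run(model, aux_data)  # inversion, attribute
    return report                              # inference, model stealing
```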
arXiv Detail & Related papers (2021-02-04T11:35:13Z)
- An Automated, End-to-End Framework for Modeling Attacks From Vulnerability Descriptions [46.40410084504383]
In order to derive a relevant attack graph, up-to-date information on known attack techniques should be represented as interaction rules.
We present a novel, end-to-end, automated framework for modeling new attack techniques from textual description of a security vulnerability.
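One way to picture an interaction rule, in the style of attack-graph tools such as MulVAL: the preconditions an attacker must already hold, and the capability the rule derives. Turning vulnerability text into such fields is the step the paper automates; below it is hard-coded, and the CVE identifier is a placeholder:

```python
# Hand-written interaction rule of the kind attack-graph generators consume;
# the paper's contribution is deriving these automatically from text.
from dataclasses import dataclass

@dataclass
class InteractionRule:
    vuln_id: str
    preconditions: list[str]     # facts that must already hold
    postcondition: str           # new attacker capability the rule derives

# Hypothetical rule distilled from a CVE-style description.
rule = InteractionRule(
    vuln_id="CVE-XXXX-YYYY",
    preconditions=["network_access(attacker, host)", "service(host, 'smb')"],
    postcondition="code_execution(attacker, host)",
)

def fire(rule, known_facts):
    ok = all(p in known_facts for p in rule.preconditions)
    return rule.postcondition if ok else None
```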
arXiv Detail & Related papers (2020-08-10T19:27:34Z)