Quantitative analysis of attack-fault trees via Markov decision processes
- URL: http://arxiv.org/abs/2408.06914v1
- Date: Tue, 13 Aug 2024 14:06:07 GMT
- Title: Quantitative analysis of attack-fault trees via Markov decision processes
- Authors: Milan Lopuhaä-Zwakenberg
- Abstract summary: We introduce a novel method to find the Pareto front between the metrics reliability (safety) and attack cost (security) using Markov decision processes.
This gives us the full interplay between safety and security while being considerably more lightweight and faster than the automaton approach.
- Score: 0.7179506962081079
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adequate risk assessment of safety-critical systems needs to take both safety and security into account, as well as their interaction. A prominent methodology for modeling safety and security is attack-fault trees (AFTs), which combine the well-established fault tree and attack tree methodologies for safety and security, respectively. AFTs can be used for quantitative analysis as well, capturing the interplay between safety and security metrics. However, existing approaches are based on modeling the AFT as a priced timed automaton. This allows for a wide range of analyses, but Pareto analysis is still lacking, and the analyses that exist are computationally expensive. In this paper, we combine safety and security analysis techniques to introduce a novel method to find the Pareto front between the metrics reliability (safety) and attack cost (security) using Markov decision processes. This gives us the full interplay between safety and security while being considerably more lightweight and faster than the automaton approach. We validate our approach on a case study of cyberattacks on an oil pipeline.
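The abstract describes computing the Pareto front between reliability and attack cost via a Markov decision process. The sketch below is only a brute-force illustration on a hypothetical two-gate attack-fault tree with made-up failure probabilities and attack costs, not the paper's MDP construction: it enumerates attacker strategies and keeps the non-dominated (attack cost, system failure probability) pairs.

```python
# Rough illustration (not the paper's MDP-based method): brute-force the
# reliability-vs-attack-cost Pareto front for a toy attack-fault tree.
# All probabilities and costs below are made up; attacks are assumed to
# succeed with certainty in this toy model.
from itertools import combinations

# Toy AFT: the system fails if (fault A occurs OR attack X is executed)
# AND (fault B occurs OR attack Y is executed).
fault_prob = {"A": 0.05, "B": 0.10}    # probability each component fails on its own
attack_cost = {"X": 40.0, "Y": 25.0}   # cost of executing each attack step

def unreliability(attacks):
    """P(top event) of the toy AFT given the attacker's chosen attack steps."""
    p_left = 1.0 if "X" in attacks else fault_prob["A"]
    p_right = 1.0 if "Y" in attacks else fault_prob["B"]
    return p_left * p_right            # AND-gate over independent events

# Enumerate every attacker strategy (every subset of attack steps).
points = []
for k in range(len(attack_cost) + 1):
    for attacks in combinations(attack_cost, k):
        cost = sum(attack_cost[a] for a in attacks)
        points.append((cost, unreliability(attacks), attacks))

# Keep only Pareto-optimal strategies: no other strategy causes at least as
# much damage (unreliability) for at most the same cost, with one strict gain.
pareto = [p for p in points
          if not any(q[0] <= p[0] and q[1] >= p[1] and (q[0] < p[0] or q[1] > p[1])
                     for q in points)]

for cost, unrel, attacks in sorted(pareto):
    print(f"attacks={attacks or ('none',)}, cost={cost:.0f}, "
          f"system failure probability={unrel:.4f}")
```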
Related papers
- Building a Cybersecurity Risk Metamodel for Improved Method and Tool Integration [0.38073142980732994]
We report on our experience in applying a model-driven approach to the initial risk analysis step in connection with later security testing.
Our work relies on a common metamodel used to map, synchronise, and ensure information traceability across different tools.
arXiv Detail & Related papers (2024-09-12T10:18:26Z)
- The Art of Defending: A Systematic Evaluation and Analysis of LLM Defense Strategies on Safety and Over-Defensiveness [56.174255970895466]
Large Language Models (LLMs) play an increasingly pivotal role in natural language processing applications.
This paper presents the Safety and Over-Defensiveness Evaluation (SODE) benchmark.
arXiv Detail & Related papers (2023-12-30T17:37:06Z)
- Safeguarded Progress in Reinforcement Learning: Safe Bayesian Exploration for Control Policy Synthesis [63.532413807686524]
This paper addresses the problem of maintaining safety during training in Reinforcement Learning (RL).
We propose a new architecture that handles the trade-off between efficient progress and safety during exploration.
arXiv Detail & Related papers (2023-12-18T16:09:43Z)
- ASSERT: Automated Safety Scenario Red Teaming for Evaluating the Robustness of Large Language Models [65.79770974145983]
ASSERT, Automated Safety Scenario Red Teaming, consists of three methods -- semantically aligned augmentation, target bootstrapping, and adversarial knowledge injection.
We partition our prompts into four safety domains for a fine-grained analysis of how the domain affects model performance.
We find statistically significant performance differences of up to 11% in absolute classification accuracy among semantically related scenarios and error rates of up to 19% absolute error in zero-shot adversarial settings.
arXiv Detail & Related papers (2023-10-14T17:10:28Z)
- Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Log Barriers for Safe Black-box Optimization with Application to Safe Reinforcement Learning [72.97229770329214]
We introduce a general approach for solving high-dimensional non-linear optimization problems in which maintaining safety during learning is crucial.
Our approach, called LBSGD, is based on applying a logarithmic barrier approximation with a carefully chosen step size; a minimal one-dimensional sketch of the log-barrier idea follows this entry.
We demonstrate the effectiveness of our approach on minimizing constraint violations in policy optimization tasks in safe reinforcement learning.
arXiv Detail & Related papers (2022-07-21T11:14:47Z)
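The LBSGD entry above mentions a logarithmic barrier approximation with a carefully chosen step size. The following is a minimal one-dimensional sketch of that log-barrier idea under assumed toy functions, not the LBSGD algorithm itself; the step-size cap is a simple hypothetical safeguard chosen only to keep iterates strictly feasible.

```python
# One-dimensional sketch of gradient descent on a log-barrier surrogate
# (not LBSGD; the objective, constraint, and step-size rule are assumptions
# made for this illustration).

# Constrained problem: minimize f(x) = (x - 2)^2 subject to g(x) = x - 1 <= 0.
f_grad = lambda x: 2.0 * (x - 2.0)
g = lambda x: x - 1.0

eta = 0.1   # barrier weight; smaller eta tracks the constrained optimum more closely
x = 0.0     # strictly feasible start (g(0) = -1 < 0)

for _ in range(500):
    # Gradient of the barrier surrogate B(x) = f(x) - eta * log(-g(x)).
    grad = f_grad(x) + eta / (-g(x))
    # Cap the step so a single update can never reach the constraint boundary.
    lr = min(0.02, 0.5 * (-g(x)) / (abs(grad) + 1e-12))
    x -= lr * grad

# The iterate stays strictly inside the feasible region, close to x = 1.
print(f"x = {x:.3f}, g(x) = {g(x):.3f}  (constrained optimum at x = 1)")
```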