QuADTool: Attack-Defense-Tree Synthesis, Analysis and Bridge to Verification
- URL: http://arxiv.org/abs/2406.15605v3
- Date: Tue, 17 Sep 2024 21:47:45 GMT
- Title: QuADTool: Attack-Defense-Tree Synthesis, Analysis and Bridge to Verification
- Authors: Florian Dorfhuber, Julia Eisentraut, Katharina Klioba, Jan Kretinsky
- Abstract summary: We provide a tool that allows for easy synthesis and analysis of attack-defense trees.
It provides a variety of interfaces to existing model checkers and analysis tools.
As a part of our tool, we extend the standard analysis techniques so they can handle the PAC input and yield rigorous bounds on the imprecision and uncertainty of the final result.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Ranking risks and countermeasures is one of the foremost goals of quantitative security analysis. Attack-defense trees are one of the popular frameworks for this task, used also in industrial practice. Standard quantitative analyses available for attack-defense trees can distinguish likely from unlikely vulnerabilities. We provide a tool that allows for easy synthesis and analysis of those models, also featuring probabilities, costs and time. Furthermore, it provides a variety of interfaces to existing model checkers and analysis tools. Unfortunately, currently available tools rely on precise quantitative inputs (probabilities, timing, or costs of attacks), which are rarely available. Instead, only statistical, imprecise information is typically available, leaving us with probably approximately correct (PAC) estimates of the real quantities. As a part of our tool, we extend the standard analysis techniques so they can handle the PAC input and yield rigorous bounds on the imprecision and uncertainty of the final result of the analysis.
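The PAC analysis can be illustrated with a minimal sketch (not QuADTool's actual implementation; the intervals are illustrative): given confidence intervals on the success probabilities of basic attack steps, interval arithmetic propagates rigorous lower and upper bounds through the tree's AND/OR gates, assuming independent basic events.

```python
# Minimal sketch (not QuADTool's implementation): propagate PAC-style
# probability intervals [lo, hi] through attack-tree gates, assuming
# independent basic attack steps.

def and_gate(children):
    # All child attacks must succeed: success probabilities multiply.
    lo, hi = 1.0, 1.0
    for c_lo, c_hi in children:
        lo *= c_lo
        hi *= c_hi
    return lo, hi

def or_gate(children):
    # At least one child succeeds: 1 - probability that all fail.
    fail_hi, fail_lo = 1.0, 1.0
    for c_lo, c_hi in children:
        fail_hi *= 1.0 - c_lo   # failure is largest when successes are smallest
        fail_lo *= 1.0 - c_hi
    return 1.0 - fail_hi, 1.0 - fail_lo

# Hypothetical 95%-confidence PAC estimates for basic attack steps:
phishing, exploit, insider = (0.2, 0.4), (0.1, 0.3), (0.05, 0.1)
root = or_gate([and_gate([phishing, exploit]), insider])
print(f"root attack probability in [{root[0]:.3f}, {root[1]:.3f}]")
```

Defense nodes and cost/time annotations would extend the same propagation; the key point is that interval endpoints rather than point estimates flow through every gate, so the final bound is rigorous whenever the leaf intervals hold.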
Related papers
- Leveraging Slither and Interval Analysis to build a Static Analysis Tool
This paper presents our progress toward finding defects that state-of-the-art analysis tools sometimes miss or detect only partially.
We developed a working solution built on top of Slither that uses interval analysis to evaluate the contract state during the execution of each instruction.
arXiv Detail & Related papers (2024-10-31T09:28:09Z)
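The interval analysis mentioned above can be sketched generically (this is not Slither's API; the variables and ranges are illustrative): each variable carries a [lo, hi] range, and each operation has a transfer function on ranges.

```python
# Generic interval-analysis sketch (not Slither's actual API):
# track each variable as a [lo, hi] range, updated per operation.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# E.g., checking a uint8 balance for potential overflow:
balance = Interval(0, 255)   # full uint8 range
deposit = Interval(1, 10)    # amount known to be between 1 and 10
after = balance + deposit
if after.hi > 255:
    print(f"possible overflow: {after}")  # range exceeds the uint8 maximum
```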
- Attack Tree Generation via Process Mining
This work aims to provide a method for the automatic generation of Attack Trees from attack logs.
The main original feature of our approach is the use of Process Mining algorithms to synthesize Attack Trees.
Our approach is supported by a prototype that, apart from the derivation and translation of the model, provides the user with an Attack Tree in the RisQFLan format.
arXiv Detail & Related papers (2024-02-19T10:55:49Z)
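As a loose illustration of mining structure from logs (not the paper's actual Process Mining algorithm, and well short of the RisQFLan output it produces; the traces are made up), attack traces can be folded into a shared-prefix tree whose branch points behave like OR choices:

```python
# Loose illustration (not the paper's algorithm): fold attack traces
# into a shared-prefix tree; branch points act like OR choices,
# sequential steps like ordered refinements.
import json

def build_prefix_tree(traces):
    root = {}
    for trace in traces:
        node = root
        for step in trace:
            node = node.setdefault(step, {})
    return root

logs = [
    ["scan", "phish", "escalate"],
    ["scan", "exploit", "escalate"],
    ["scan", "phish", "exfiltrate"],
]
print(json.dumps(build_prefix_tree(logs), indent=2))
```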
- It Is Time To Steer: A Scalable Framework for Analysis-driven Attack Graph Generation
An Attack Graph (AG) represents the best-suited solution to support cyber risk assessment for multi-step attacks on computer networks.
Current solutions propose to address the generation problem from the algorithmic perspective and postulate the analysis only after the generation is complete.
This paper rethinks the classic AG analysis through a novel workflow in which the analyst can query the system anytime.
arXiv Detail & Related papers (2023-12-27T10:44:58Z)
- A Unified Evaluation of Textual Backdoor Learning: Frameworks and Benchmarks
We develop an open-source toolkit OpenBackdoor to foster the implementations and evaluations of textual backdoor learning.
We also propose CUBE, a simple yet strong clustering-based defense baseline.
arXiv Detail & Related papers (2022-06-17T02:29:23Z)
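A clustering-based defense in the spirit of CUBE can be sketched as follows (a generic sketch, not OpenBackdoor's implementation; the threshold and cluster count are illustrative): cluster hidden representations of training samples and drop unusually small clusters, where poisoned examples tend to concentrate.

```python
# Generic sketch of a clustering-based backdoor defense (not
# OpenBackdoor's CUBE implementation): cluster hidden representations
# of training samples and discard suspiciously small clusters.
import numpy as np
from sklearn.cluster import KMeans

def filter_poisoned(embeddings, min_fraction=0.05, n_clusters=10):
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)
    keep = np.ones(len(embeddings), dtype=bool)
    for c in range(n_clusters):
        mask = labels == c
        if mask.mean() < min_fraction:  # unusually small cluster
            keep[mask] = False
    return keep

# embeddings: (n_samples, dim) last-layer features from the trained model
embeddings = np.random.randn(1000, 64)  # placeholder data
clean_mask = filter_poisoned(embeddings)
print(clean_mask.sum(), "samples retained")
```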
- Poisoning Attacks and Defenses on Artificial Intelligence: A Survey
Data poisoning attacks represent a type of attack that consists of tampering with the data samples fed to the model during the training phase, leading to a degradation in the model's accuracy during the inference phase.
This work compiles the most relevant insights and findings found in the latest existing literature addressing this type of attack.
A thorough assessment is performed on the reviewed works, comparing the effects of data poisoning on a wide range of ML models in real-world conditions.
arXiv Detail & Related papers (2022-02-21T14:43:38Z)
- Differential privacy and robust statistics in high dimensions
High-dimensional Propose-Test-Release (HPTR) builds upon three crucial components: the exponential mechanism, robust statistics, and the Propose-Test-Release mechanism.
We show that HPTR nearly achieves the optimal sample complexity under several scenarios studied in the literature.
arXiv Detail & Related papers (2021-11-12T06:36:40Z)
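For reference, the exponential mechanism named above is the standard McSherry-Talwar construction (a textbook definition, not anything HPTR-specific): given a utility function u(x, r) with sensitivity Δu, output r with probability

```latex
% Standard exponential mechanism (one of HPTR's three components):
\Pr[M(x) = r] \;\propto\; \exp\!\left( \frac{\varepsilon \, u(x, r)}{2 \, \Delta u} \right)
% This choice rule satisfies \varepsilon-differential privacy.
```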
- CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks
In safety-critical machine learning applications, it is crucial to defend models against adversarial attacks.
It is important to provide provable guarantees for deep learning models against semantically meaningful input transformations.
We propose a new universal probabilistic certification approach based on Chernoff-Cramer bounds.
arXiv Detail & Related papers (2021-09-22T12:46:04Z)
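A probabilistic certificate of this general flavor can be sketched with a plain Hoeffding bound (a simplification; CC-Cert's Chernoff-Cramér bounds are tighter, and the model and transformation here are toys): sample random input transformations, measure the empirical failure rate, and bound the true failure probability.

```python
# Simplified probabilistic certification sketch (CC-Cert uses tighter
# Chernoff-Cramer bounds): estimate the failure probability of a model
# under random input transformations and bound it via Hoeffding.
import math
import random

def certify(model, x, transform, n=1000, delta=1e-3):
    failures = sum(model(transform(x)) != model(x) for _ in range(n))
    emp = failures / n
    # Hoeffding: true failure prob <= emp + sqrt(ln(1/delta) / (2n)),
    # with probability at least 1 - delta over the n samples.
    return emp + math.sqrt(math.log(1 / delta) / (2 * n))

# Toy example: a threshold "model" and additive noise as the transformation.
model = lambda v: v > 0.5
transform = lambda v: v + random.gauss(0, 0.05)
print("failure prob <=", certify(model, 0.7, transform))
```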
- D2A: A Dataset Built for AI-Based Vulnerability Detection Methods Using Differential Analysis
We propose D2A, a differential-analysis-based approach to label issues reported by static analysis tools.
We use D2A to generate a large labeled dataset to train models for vulnerability identification.
arXiv Detail & Related papers (2021-02-16T07:46:53Z)
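The differential-labeling idea can be sketched as follows (a simplification of D2A's pipeline, which also involves commit filtering and trace analysis; the warning fingerprints are made up): run the static analyzer before and after a bug-fixing commit, then label warnings that disappear as likely true positives and persisting ones as likely false positives.

```python
# Simplified sketch of differential labeling (D2A's actual pipeline is
# richer): compare analyzer warnings before and after a bug-fixing commit.

def label_warnings(before, after):
    # 'before'/'after' are sets of warning fingerprints from the
    # static analyzer on the pre-fix and post-fix versions.
    likely_bugs = before - after        # fixed by the commit
    likely_false_pos = before & after   # survived the fix
    return likely_bugs, likely_false_pos

before = {"overflow@util.c:42", "nullderef@io.c:7", "leak@net.c:90"}
after = {"nullderef@io.c:7"}
bugs, fps = label_warnings(before, after)
print("likely true positives:", bugs)
print("likely false positives:", fps)
```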
- RobustBench: a standardized adversarial robustness benchmark
A key challenge in benchmarking robustness is that its evaluation is often error-prone, leading to robustness overestimation.
We evaluate adversarial robustness with AutoAttack, an ensemble of white- and black-box attacks.
We analyze the impact of robustness on the performance on distribution shifts, calibration, out-of-distribution detection, fairness, privacy leakage, smoothness, and transferability.
arXiv Detail & Related papers (2020-10-19T17:06:18Z)
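An AutoAttack evaluation typically looks like the sketch below (based on the public autoattack package; the exact constructor arguments and the stand-in model are assumptions to check against its README):

```python
# Sketch of an AutoAttack evaluation (based on the public `autoattack`
# package; verify the arguments against its documentation).
import torch
from autoattack import AutoAttack

# Tiny stand-in classifier; in practice, load a trained model in eval mode.
model = torch.nn.Sequential(
    torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10)
).eval()
x_test = torch.rand(64, 3, 32, 32)        # placeholder CIFAR-like batch
y_test = torch.randint(0, 10, (64,))

adversary = AutoAttack(model, norm='Linf', eps=8 / 255, version='standard')
# Runs the ensemble of white- and black-box attacks and reports
# robust accuracy on the returned adversarial examples.
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=32)
```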
- Certifying Decision Trees Against Evasion Attacks by Program Analysis
We propose a novel technique to verify the security of machine learning models against evasion attacks.
Our approach exploits the interpretability property of decision trees to transform them into imperative programs.
Our experiments show that our technique is both precise and efficient, yielding only a minimal number of false positives.
arXiv Detail & Related papers (2020-07-06T14:18:10Z)
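The tree-to-program transformation at the heart of this approach can be sketched directly (a minimal sketch, not the paper's verifier; the dictionary encoding of the tree is made up): each internal node becomes a conditional and each leaf a return statement, after which standard program analyses apply.

```python
# Minimal sketch of the tree-to-program idea (not the paper's verifier):
# each internal node becomes an if/else, each leaf a return statement.

def tree_to_program(node, indent="    "):
    if "label" in node:  # leaf node: emit a return
        return f"{indent}return {node['label']!r}\n"
    src = f"{indent}if x[{node['feature']}] <= {node['threshold']}:\n"
    src += tree_to_program(node["left"], indent + "    ")
    src += f"{indent}else:\n"
    src += tree_to_program(node["right"], indent + "    ")
    return src

tree = {"feature": 0, "threshold": 5.0,
        "left": {"label": "benign"},
        "right": {"feature": 1, "threshold": 2.0,
                  "left": {"label": "benign"},
                  "right": {"label": "malicious"}}}
print("def classify(x):\n" + tree_to_program(tree))
```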