Attack Tree Generation via Process Mining
- URL: http://arxiv.org/abs/2402.12040v2
- Date: Thu, 12 Sep 2024 07:25:03 GMT
- Title: Attack Tree Generation via Process Mining
- Authors: Alyzia-Maria Konsta, Gemma Di Federico, Alberto Lluch Lafuente, Andrea Burattin
- Abstract summary: This work aims to provide a method for the automatic generation of Attack Trees from attack logs.
The main original feature of our approach is the use of Process Mining algorithms to synthesize Attack Trees.
Our approach is supported by a prototype that, apart from the derivation and translation of the model, provides the user with an Attack Tree in the RisQFLan format.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Attack Trees are a graphical model of security used to study threat scenarios. While visually appealing and supported by solid theories and effective tools, one of their main drawbacks remains the amount of effort required by security experts to design them from scratch. This work aims to remedy this by providing a method for the automatic generation of Attack Trees from attack logs. The main original feature of our approach w.r.t. existing ones is the use of Process Mining algorithms to synthesize Attack Trees, which allows users to customize the way a set of logs is summarized as an Attack Tree, for example by discarding statistically irrelevant events. Our approach is supported by a prototype that, apart from the derivation and translation of the model, provides the user with an Attack Tree in the format of RisQFLan, a tool for quantitative risk modeling and analysis with Attack Trees. We illustrate our approach with a case study of attacks on a communication protocol, produced by a state-of-the-art protocol analyzer.
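The pipeline the abstract describes (filter statistically irrelevant events out of attack logs, then summarize the surviving trace variants as a tree of gates) can be illustrated with a minimal, standard-library-only sketch. This is a toy stand-in for real process-discovery algorithms, not the paper's prototype: the function name, the `min_support` parameter, and the flat SEQ/OR encoding are our own illustrative choices.

```python
from collections import Counter

def mine_attack_tree(traces, min_support=0.2):
    """Summarize attack traces as a toy attack tree.

    Each trace is a list of attacker actions. Actions whose relative
    frequency across traces falls below `min_support` are discarded,
    mirroring the paper's idea of dropping statistically irrelevant
    events. The remaining distinct trace variants become a disjunction
    (OR) of sequences (SEQ) - a drastic simplification of real
    process-discovery algorithms.
    """
    n = len(traces)
    freq = Counter(e for t in traces for e in set(t))
    keep = {e for e, c in freq.items() if c / n >= min_support}
    variants = {tuple(e for e in t if e in keep) for t in traces}
    seqs = [("SEQ", list(v)) for v in sorted(variants) if v]
    return seqs[0] if len(seqs) == 1 else ("OR", seqs)

traces = [
    ["scan", "exploit", "exfiltrate"],
    ["scan", "exploit", "exfiltrate"],
    ["scan", "phish", "exfiltrate"],   # "phish" is too rare and is filtered
    ["debug-noise"],                    # so is this stray event
]
tree = mine_attack_tree(traces, min_support=0.3)
print(tree)
```

A real implementation would instead run a discovery algorithm (e.g. the Inductive Miner) over the log and translate the resulting process tree into attack-tree gates before emitting RisQFLan syntax.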
Related papers
- Automated generation of attack trees with optimal shape and labelling [0.9833293669382975]
This article addresses the problem of automatically generating attack trees that soundly and clearly describe the ways the system can be attacked.
We introduce an attack-tree generation algorithm that minimises the tree size and the information length of its labels without sacrificing correctness.
arXiv Detail & Related papers (2023-11-22T11:52:51Z)
- Attention-Enhancing Backdoor Attacks Against BERT-based Models [54.070555070629105]
Investigating the strategies of backdoor attacks will help to understand the model's vulnerability.
We propose a novel Trojan Attention Loss (TAL) which enhances the Trojan behavior by directly manipulating the attention patterns.
arXiv Detail & Related papers (2023-10-23T01:24:56Z)
- Streamlining Attack Tree Generation: A Fragment-Based Approach [39.157069600312774]
We present a novel fragment-based attack graph generation approach that utilizes information from publicly available information security databases.
We also propose a domain-specific language for attack modeling, which we employ in the proposed attack graph generation approach.
arXiv Detail & Related papers (2023-10-01T12:41:38Z)
- A Unified Evaluation of Textual Backdoor Learning: Frameworks and Benchmarks [72.7373468905418]
We develop an open-source toolkit OpenBackdoor to foster the implementations and evaluations of textual backdoor learning.
We also propose CUBE, a simple yet strong clustering-based defense baseline.
arXiv Detail & Related papers (2022-06-17T02:29:23Z)
- SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification [24.053704318868043]
In model poisoning attacks, the attacker reduces the model's performance on targeted sub-tasks by uploading "poisoned" updates.
We introduce SparseFed, a novel defense that uses global top-k update sparsification and device-level gradient clipping to mitigate model poisoning attacks.
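The two mechanisms this abstract names (clip each device's update, then keep only the top-k coordinates of the aggregate) can be approximated in a few lines. This is a hypothetical standard-library sketch under our own simplifications, not the paper's implementation; the function names and the list-of-floats update representation are ours.

```python
def clip(update, max_norm):
    """Scale a client update so its L2 norm is at most max_norm."""
    norm = sum(x * x for x in update) ** 0.5
    scale = min(1.0, max_norm / norm) if norm > 0 else 1.0
    return [x * scale for x in update]

def topk_sparsified_mean(updates, k, max_norm=1.0):
    """Aggregate clipped client updates, keeping only the k coordinates
    of largest aggregate magnitude; all other coordinates are zeroed,
    which bounds the influence any single poisoned update can exert."""
    clipped = [clip(u, max_norm) for u in updates]
    n, dim = len(clipped), len(clipped[0])
    mean = [sum(u[i] for u in clipped) / n for i in range(dim)]
    keep = set(sorted(range(dim), key=lambda i: abs(mean[i]), reverse=True)[:k])
    return [m if i in keep else 0.0 for i, m in enumerate(mean)]

# Two honest devices plus one device uploading an oversized poisoned update.
updates = [[0.1, 0.0, 0.2], [0.1, 0.0, 0.3], [5.0, -9.0, 0.0]]
agg = topk_sparsified_mean(updates, k=1, max_norm=0.5)
```

Clipping shrinks the poisoned update to the same norm budget as honest ones, and top-k sparsification then discards the coordinates it tried to dominate.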
arXiv Detail & Related papers (2021-12-12T16:34:52Z)
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
- Prototype-supervised Adversarial Network for Targeted Attack of Deep Hashing [65.32148145602865]
Deep hashing networks are vulnerable to adversarial examples.
We propose a novel prototype-supervised adversarial network (ProS-GAN)
To the best of our knowledge, this is the first generation-based method to attack deep hashing networks.
arXiv Detail & Related papers (2021-05-17T00:31:37Z)
- Hidden Backdoor Attack against Semantic Segmentation Models [60.0327238844584]
The backdoor attack intends to embed hidden backdoors in deep neural networks (DNNs) by poisoning training data.
We propose a novel attack paradigm, the fine-grained attack, where we treat the target label from the object-level instead of the image-level.
Experiments show that the proposed methods can successfully attack semantic segmentation models by poisoning only a small proportion of training data.
arXiv Detail & Related papers (2021-03-06T05:50:29Z)
- Adversarial Attack and Defense of Structured Prediction Models [58.49290114755019]
In this paper, we investigate attacks and defenses for structured prediction tasks in NLP.
The structured output of structured prediction models is sensitive to small perturbations in the input.
We propose a novel and unified framework that learns to attack a structured prediction model using a sequence-to-sequence model.
arXiv Detail & Related papers (2020-10-04T15:54:03Z)
- Certifying Decision Trees Against Evasion Attacks by Program Analysis [9.290879387995401]
We propose a novel technique to verify the security of machine learning models against evasion attacks.
Our approach exploits the interpretability property of decision trees to transform them into imperative programs.
Our experiments show that our technique is both precise and efficient, yielding only a minimal number of false positives.
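The core idea of this abstract, exploiting the interpretability of decision trees by translating them into imperative programs that standard program analysis can then verify, can be sketched as a small translator. The dictionary encoding of trees and the function names here are our own illustration, not the paper's tool.

```python
def tree_to_program(node, indent="    "):
    """Recursively translate a decision tree into Python source: each
    internal node becomes an if/else on a feature threshold, each leaf
    a return statement. The resulting program is what an off-the-shelf
    program analyzer would then reason about."""
    def emit(n, depth):
        pad = indent * depth
        if "leaf" in n:
            return [f"{pad}return {n['leaf']!r}"]
        lines = [f"{pad}if x[{n['feature']}] <= {n['threshold']}:"]
        lines += emit(n["left"], depth + 1)
        lines.append(f"{pad}else:")
        lines += emit(n["right"], depth + 1)
        return lines
    return "def classify(x):\n" + "\n".join(emit(node, 1))

# A tiny hand-written tree: split on feature 0, then on feature 1.
tree = {"feature": 0, "threshold": 2.5,
        "left": {"leaf": "benign"},
        "right": {"feature": 1, "threshold": 0.0,
                  "left": {"leaf": "benign"},
                  "right": {"leaf": "malicious"}}}

src = tree_to_program(tree)
ns = {}
exec(src, ns)  # the generated program is itself executable
```

Once the tree is an ordinary program, evasion robustness becomes a reachability question: can an input within the attacker's perturbation budget reach a different `return`?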
arXiv Detail & Related papers (2020-07-06T14:18:10Z)
- Feature Partitioning for Robust Tree Ensembles and their Certification in Adversarial Scenarios [8.300942601020266]
We focus on evasion attacks, where a model is trained in a safe environment and exposed to attacks at test time.
We propose a model-agnostic strategy that builds a robust ensemble by training its basic models on feature-based partitions of the given dataset.
Our algorithm guarantees that the majority of the models in the ensemble cannot be affected by the attacker.
arXiv Detail & Related papers (2020-04-07T12:00:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.