An ontological lens on attack trees: Toward adequacy and interoperability
- URL: http://arxiv.org/abs/2506.23841v1
- Date: Mon, 30 Jun 2025 13:31:34 GMT
- Title: An ontological lens on attack trees: Toward adequacy and interoperability
- Authors: Ítalo Oliveira, Stefano M. Nicoletti, Gal Engelberg, Mattia Fumagalli, Dan Klein, Giancarlo Guizzardi
- Abstract summary: Attack Trees (ATs) are a popular formalism for security analysis. ATs offer conceptual modeling capabilities for representing security risk management scenarios. Still, the AT language presents limitations due to its lack of ontological foundations.
- Score: 27.953120979995557
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Attack Trees (ATs) are a popular formalism for security analysis. They display an attacker's goal decomposed into the attack steps needed to achieve it, and support computing security metrics (e.g., attack cost, probability, and damage). ATs offer three important services: (a) conceptual modeling capabilities for representing security risk management scenarios, (b) qualitative assessment to find root causes and minimal conditions of successful attacks, and (c) quantitative analyses via security metrics computation under formal semantics, such as minimal time and cost among all attacks. Still, the AT language presents limitations due to its lack of ontological foundations, compromising the associated services. Via an ontological analysis grounded in the Common Ontology of Value and Risk (COVER) -- a reference core ontology based on the Unified Foundational Ontology (UFO) -- we investigate the ontological adequacy of ATs and reveal four significant shortcomings: (1) ambiguous syntactical terms that can be interpreted in various ways; (2) an ontological deficit concerning crucial domain-specific concepts; (3) lacking modeling guidance for constructing ATs that decompose a goal; (4) lack of semantic interoperability, resulting in ad hoc stand-alone tools. We also discuss existing incremental solutions and how our analysis paves the way for overcoming those issues through a broader approach to risk management modeling.
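To make service (c) concrete: under the standard bottom-up semantics for tree-shaped ATs, an OR gate takes the cheapest of its children while an AND gate sums them. The sketch below is ours, not the paper's; node names and costs are invented, and it covers only proper trees (DAG-shaped ATs with shared subtrees require more careful algorithms).

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    gate: str = "LEAF"              # "LEAF", "OR", or "AND"
    cost: float = 0.0               # only meaningful for leaves (basic attack steps)
    children: list = field(default_factory=list)

def min_cost(node: Node) -> float:
    """Cheapest attack achieving this node, computed bottom-up."""
    if node.gate == "LEAF":
        return node.cost
    child_costs = [min_cost(c) for c in node.children]
    # OR: the attacker picks the cheapest branch; AND: every branch is needed.
    return min(child_costs) if node.gate == "OR" else sum(child_costs)

# Toy tree: steal data = (phish credentials AND exfiltrate) OR exploit server.
root = Node("steal data", "OR", children=[
    Node("insider path", "AND", children=[
        Node("phish credentials", cost=100.0),
        Node("exfiltrate data", cost=40.0),
    ]),
    Node("exploit public server", cost=500.0),
])

print(min_cost(root))  # 140.0 -- the phishing branch is the cheapest attack
```

The same recursion yields other metrics by swapping the operators, e.g., min/max for attack time or complement/product for success probability under independence assumptions.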
Related papers
- A Survey on Model Extraction Attacks and Defenses for Large Language Models [55.60375624503877]
Model extraction attacks pose significant security threats to deployed language models. This survey provides a comprehensive taxonomy of extraction attacks and defenses, categorizing attacks into functionality extraction, training data extraction, and prompt-targeted attacks. We examine defense mechanisms organized into model protection, data privacy protection, and prompt-targeted strategies, evaluating their effectiveness across different deployment scenarios.
arXiv Detail & Related papers (2025-06-26T22:02:01Z)
- Modeling Interdependent Cybersecurity Threats Using Bayesian Networks: A Case Study on In-Vehicle Infotainment Systems [0.0]
This paper reviews the application of Bayesian Networks (BNs) in cybersecurity risk modeling. A case study is presented in which a STRIDE-based attack tree for an automotive In-Vehicle Infotainment (IVI) system is transformed into a BN.
arXiv Detail & Related papers (2025-05-14T01:04:45Z)
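A minimal sketch of the core of such a transformation, assuming the common encoding in which AT gates become deterministic BN nodes over Bernoulli leaves; all names and probabilities below are invented, and brute-force enumeration stands in for proper BN inference.

```python
from itertools import product

# Hypothetical STRIDE-style attack steps with success probabilities
# (all names and numbers invented for illustration).
leaves = {"spoof_bt_pairing": 0.3, "tamper_firmware": 0.1, "phish_technician": 0.2}

# AT gates become deterministic BN nodes: each internal node is a
# logical function of its parents (listed in dependency order).
gates = {
    "physical_access": lambda v: v["tamper_firmware"] and v["phish_technician"],  # AND gate
    "compromise_ivi":  lambda v: v["spoof_bt_pairing"] or v["physical_access"],   # OR gate
}

def p_root(root: str = "compromise_ivi") -> float:
    """P(root = true) by enumeration over leaf states
    (a stand-in for real BN inference on larger models)."""
    total = 0.0
    names = list(leaves)
    for bits in product([False, True], repeat=len(names)):
        v = dict(zip(names, bits))
        weight = 1.0
        for n, b in zip(names, bits):
            weight *= leaves[n] if b else 1 - leaves[n]
        for g, fn in gates.items():   # evaluate internal nodes in order
            v[g] = fn(v)
        if v[root]:
            total += weight
    return total

print(round(p_root(), 4))  # 0.314 = 0.3 + 0.7 * (0.1 * 0.2)
```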
- Advancing Neural Network Verification through Hierarchical Safety Abstract Interpretation [52.626086874715284]
We introduce a novel problem formulation called Abstract DNN-Verification, which verifies a hierarchical structure of unsafe outputs. By leveraging abstract interpretation and reasoning about output reachable sets, our approach enables assessing multiple safety levels during the formal verification process. Our contributions include a theoretical exploration of the relationship between our novel abstract safety formulation and existing approaches.
arXiv Detail & Related papers (2025-05-08T13:29:46Z)
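For a flavor of reachable-set reasoning, here is a classic interval-arithmetic sketch (not the paper's Abstract DNN-Verification method): propagate an input box through one affine-plus-ReLU layer by splitting the weights by sign.

```python
import numpy as np

def relu_interval(lo, hi, W, b):
    """Propagate an input box [lo, hi] through y = relu(W @ x + b).
    Splitting W by sign pairs each output bound with the right input bound."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    y_lo = W_pos @ lo + W_neg @ hi + b
    y_hi = W_pos @ hi + W_neg @ lo + b
    return np.maximum(y_lo, 0), np.maximum(y_hi, 0)

# Toy 2x2 layer; a verifier chains such bounds over all layers, then
# checks the output box against (hierarchies of) unsafe regions.
W = np.array([[1.0, -2.0], [0.5, 1.0]])
b = np.array([0.1, -0.1])
lo, hi = np.array([0.0, 0.0]), np.array([0.2, 0.2])
print(relu_interval(lo, hi, W, b))  # -> ([0., 0.], [0.3, 0.2])
```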
- Context-Awareness and Interpretability of Rare Occurrences for Discovery and Formalization of Critical Failure Modes [3.140125449151061]
Vision systems are increasingly deployed in critical domains such as surveillance, law enforcement, and transportation. To address these challenges, we introduce Context-Awareness and Interpretability of Rare Occurrences (CAIRO). CAIRO incentivizes human-in-the-loop testing and evaluation of the criticality that arises from misdetections, adversarial attacks, and hallucinations in black-box AI models.
arXiv Detail & Related papers (2025-04-18T17:12:37Z)
- Temporal Context Awareness: A Defense Framework Against Multi-turn Manipulation Attacks on Large Language Models [0.0]
Large Language Models (LLMs) are increasingly vulnerable to sophisticated multi-turn manipulation attacks. This paper introduces the Temporal Context Awareness framework, a novel defense mechanism designed to address this challenge. Preliminary evaluations on simulated adversarial scenarios demonstrate the framework's potential to identify subtle manipulation patterns.
arXiv Detail & Related papers (2025-03-18T22:30:17Z)
- Beyond the Surface: An NLP-based Methodology to Automatically Estimate CVE Relevance for CAPEC Attack Patterns [42.63501759921809]
We propose a methodology leveraging Natural Language Processing (NLP) to associate Common Vulnerabilities and Exposures (CVE) records with Common Attack Pattern Enumeration and Classification (CAPEC) attack patterns. Experimental evaluations demonstrate superior performance compared to state-of-the-art models.
arXiv Detail & Related papers (2025-01-13T08:39:52Z)
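A bag-of-words baseline for this kind of CVE-to-CAPEC association, far simpler than the paper's method: the text snippets below are invented, though CAPEC-66 and CAPEC-63 are real pattern IDs.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented snippets; a real pipeline would use full CVE/CAPEC descriptions.
capec = {
    "CAPEC-66 SQL Injection": "inject sql commands through user input",
    "CAPEC-63 Cross-Site Scripting": "inject malicious script into web pages",
}
cve_text = "improper neutralization of sql commands in login form input"

vec = TfidfVectorizer()
M = vec.fit_transform(list(capec.values()) + [cve_text])
scores = cosine_similarity(M[-1], M[:-1]).ravel()  # CVE vs. each CAPEC entry
for name, s in sorted(zip(capec, scores), key=lambda t: -t[1]):
    print(f"{s:.2f}  {name}")   # SQL injection pattern ranks first
```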
- WATCHDOG: an ontology-aWare risk AssessmenT approaCH via object-oriented DisruptiOn Graphs [0.9387233631570749]
The Common Ontology of Value and Risk (COVER) highlights how the role of objects and their relationships remains pivotal to performing transparent, complete and accountable risk assessment. We operationalize some of the notions proposed by COVER by presenting a new framework for risk assessment: WATCHDOG.
arXiv Detail & Related papers (2024-12-18T15:44:04Z)
- ASSERT: Automated Safety Scenario Red Teaming for Evaluating the Robustness of Large Language Models [65.79770974145983]
ASSERT, Automated Safety Scenario Red Teaming, consists of three methods -- semantically aligned augmentation, target bootstrapping, and adversarial knowledge injection.
We partition our prompts into four safety domains for a fine-grained analysis of how the domain affects model performance.
We find statistically significant performance differences of up to 11% in absolute classification accuracy among semantically related scenarios, and error rates of up to 19% absolute in zero-shot adversarial settings.
arXiv Detail & Related papers (2023-10-14T17:10:28Z)
- Semantic Image Attack for Visual Model Diagnosis [80.36063332820568]
In practice, metric analysis on a specific train and test dataset does not guarantee reliable or fair ML models. This paper proposes Semantic Image Attack (SIA), an adversarial-attack-based method that provides semantic adversarial images.
arXiv Detail & Related papers (2023-03-23T03:13:04Z)
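For orientation only: SIA perturbs along semantic attribute directions, which the snippet above does not detail, so the sketch below is a generic L-infinity PGD attack, the usual starting point for such methods, not SIA itself.

```python
import torch

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Generic L-infinity PGD: iteratively ascend the loss, then project
    back into the eps-ball around the clean input x (labels y)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()        # gradient ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project into eps-ball
            x_adv = x_adv.clamp(0, 1)                  # stay a valid image
    return x_adv.detach()
```

Typical diagnostic use: compare model(x) with model(pgd_attack(model, x, y)) to see where predictions flip.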
- Towards automation of threat modeling based on a semantic model of attack patterns and weaknesses [0.0]
This work considers the challenges of building and using a formal knowledge base (model). The proposed model can be used to learn relations between techniques, attack patterns, weaknesses, and vulnerabilities in order to build various threat landscapes.
arXiv Detail & Related papers (2021-12-08T11:13:47Z)
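The relations this entry mentions are naturally a directed graph; in a toy slice, the identifiers are real ATT&CK/CAPEC/CWE IDs, but the specific links and the placeholder CVE are illustrative only.

```python
import networkx as nx

# technique -> attack pattern -> weakness -> vulnerability
kb = nx.DiGraph()
kb.add_edge("ATT&CK T1190 Exploit Public-Facing App", "CAPEC-66 SQL Injection")
kb.add_edge("CAPEC-66 SQL Injection", "CWE-89 SQL Injection")
kb.add_edge("CWE-89 SQL Injection", "CVE-XXXX-XXXXX (placeholder)")

# A 'threat landscape' query: everything reachable from one technique.
print(nx.descendants(kb, "ATT&CK T1190 Exploit Public-Facing App"))
```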
- Exploring Robustness of Unsupervised Domain Adaptation in Semantic Segmentation [74.05906222376608]
We propose adversarial self-supervision UDA (or ASSUDA) that maximizes the agreement between clean images and their adversarial examples by a contrastive loss in the output space.
This paper is rooted in two observations: (i) the robustness of UDA methods in semantic segmentation remains unexplored, which poses a security concern in this field; and (ii) although commonly used self-supervision (e.g., rotation and jigsaw) benefits image tasks such as classification and recognition, it fails to provide the critical supervision signals needed to learn discriminative representations for segmentation tasks.
arXiv Detail & Related papers (2021-05-23T01:50:44Z)
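The contrastive objective ASSUDA names is not spelled out in this snippet; a generic InfoNCE-style sketch of "agreement between clean and adversarial outputs", with pixel subsampling to keep the similarity matrix small, could look like the following (PyTorch; the shapes and sampling scheme are our assumptions, not ASSUDA's exact loss).

```python
import torch
import torch.nn.functional as F

def output_space_contrastive_loss(clean_out, adv_out, tau=0.1, n_pix=256):
    """InfoNCE-style sketch: for a sample of pixel locations, the clean and
    adversarial predictions at the same location form a positive pair and
    all other sampled pixels act as negatives. Shapes: (B, C, H, W)."""
    b, c, h, w = clean_out.shape
    z1 = clean_out.permute(0, 2, 3, 1).reshape(-1, c)
    z2 = adv_out.permute(0, 2, 3, 1).reshape(-1, c)
    idx = torch.randperm(z1.size(0))[:n_pix]      # subsample pixel locations
    z1 = F.normalize(z1[idx], dim=1)
    z2 = F.normalize(z2[idx], dim=1)
    logits = z1 @ z2.t() / tau                    # pairwise similarities
    targets = torch.arange(logits.size(0))        # positives on the diagonal
    return F.cross_entropy(logits, targets)
```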
- Adversarial Attack and Defense of Structured Prediction Models [58.49290114755019]
In this paper, we investigate attacks and defenses for structured prediction tasks in NLP.
The structured output of structured prediction models is sensitive to small perturbations in the input.
We propose a novel and unified framework that learns to attack a structured prediction model using a sequence-to-sequence model.
arXiv Detail & Related papers (2020-10-04T15:54:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.