Assessing the Understandability and Acceptance of Attack-Defense Trees for Modelling Security Requirements
- URL: http://arxiv.org/abs/2404.06386v1
- Date: Tue, 9 Apr 2024 15:28:31 GMT
- Title: Assessing the Understandability and Acceptance of Attack-Defense Trees for Modelling Security Requirements
- Authors: Giovanna Broccia, Maurice H. ter Beek, Alberto Lluch Lafuente, Paola Spoletini, Alessio Ferrari
- Abstract summary: Attack-Defense Trees (ADTs) are a graphical notation used to model and assess security requirements.
We conduct an experiment with 25 human subjects to assess the understandability and user acceptance of the ADT notation.
- Score: 4.133045842892995
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Context and Motivation Attack-Defense Trees (ADTs) are a graphical notation used to model and assess security requirements. ADTs are widely popular, as they can facilitate communication between different stakeholders involved in system security evaluation, and they are formal enough to be verified, e.g., with model checkers. Question/Problem While the quality of this notation has been primarily assessed quantitatively, its understandability has never been evaluated despite being mentioned as a key factor for its success. Principal idea/Results In this paper, we conduct an experiment with 25 human subjects to assess the understandability and user acceptance of the ADT notation. The study focuses on performance-based variables and perception-based variables, with the aim of evaluating the relationship between these measures and how they might impact the practical use of the notation. The results confirm a good level of understandability of ADTs. Participants consider them useful, and they show intention to use them. Contribution This is the first study empirically supporting the understandability of ADTs, thereby contributing to the theory of security requirements engineering.
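To make the notation described above a little more concrete, here is a minimal sketch of an attack-defense tree as a data structure, with a simplified evaluation semantics (an attack node succeeds if its refinement succeeds and no successful counter-defense blocks it). The node fields, the semantics, and the credential-theft example are illustrative assumptions, not the formalisation used in the paper.

```python
# Minimal sketch of an attack-defense tree (ADT), assuming a simplified
# semantics: an attack node succeeds if its refinement succeeds and it is
# not blocked by a successful counter-defense. This is an illustration,
# not the formal ADT semantics used in the paper.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Node:
    label: str
    kind: str = "attack"              # "attack" or "defense"
    refinement: str = "or"            # "or" (any child) or "and" (all children)
    children: List["Node"] = field(default_factory=list)
    counter: Optional["Node"] = None  # node of the opposite kind, if any
    achieved: bool = False            # leaf value: basic action is in place

    def holds(self) -> bool:
        """Evaluate the node under the simplified semantics above."""
        if self.children:
            results = [c.holds() for c in self.children]
            own = all(results) if self.refinement == "and" else any(results)
        else:
            own = self.achieved
        if self.counter is not None and self.counter.holds():
            return False  # a successful counter-measure blocks this node
        return own


# Illustrative scenario (hypothetical): stealing credentials, countered by MFA.
phishing = Node("send phishing mail", achieved=True)
mfa = Node("multi-factor authentication", kind="defense", achieved=True)
steal = Node("steal credentials", refinement="or",
             children=[phishing], counter=mfa)
print(steal.holds())  # False: the defense is in place
```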
Related papers
- A Critical Evaluation of Defenses against Prompt Injection Attacks [95.81023801370073]
Large Language Models (LLMs) are vulnerable to prompt injection attacks.
Several defenses have recently been proposed, often claiming to mitigate these attacks successfully.
We argue that existing studies lack a principled approach to evaluating these defenses.
arXiv Detail & Related papers (2025-05-23T19:39:56Z)
- Measurement to Meaning: A Validity-Centered Framework for AI Evaluation [12.55408229639344]
We provide a structured approach for reasoning about the types of evaluative claims that can be made given the available evidence.
Our framework is well-suited for the contemporary paradigm in machine learning.
arXiv Detail & Related papers (2025-05-13T20:36:22Z)
- Explanation Regularisation through the Lens of Attributions [30.68740512996253]
Explanation regularisation (ER) has been introduced as a way to guide text classifiers to form their predictions in a manner consistent with human rationales.
This is achieved by introducing an auxiliary explanation loss that measures how well the output of an input attribution technique agrees with human-annotated rationales.
Previous work has under-explored the impact of this guidance on the classifier's reliance on those rationales, particularly when reliance is measured using attribution techniques different from those used to guide the model; a minimal sketch of such an auxiliary loss follows this entry.
arXiv Detail & Related papers (2024-07-23T17:56:32Z)
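To illustrate the auxiliary explanation loss described in the entry above, the following is a minimal sketch of explanation regularisation. The attribution technique (input-gradient saliency), the mean-squared-error loss form, the weighting factor `lam`, and the assumption that the model consumes embeddings directly are illustrative choices, not the paper's exact setup.

```python
# Minimal sketch of explanation regularisation (ER), assuming:
# - attributions are plain input-gradient saliency scores, and
# - the auxiliary loss is an MSE between normalised attributions and
#   binary human rationale masks. The actual paper may use different
#   attribution techniques and loss forms.
import torch
import torch.nn.functional as F


def er_loss(model, embeddings, labels, rationale_mask, lam=0.1):
    """Task loss + lam * explanation loss on one batch.

    embeddings:     (batch, seq_len, dim) input embeddings
    labels:         (batch,) gold classes
    rationale_mask: (batch, seq_len) 1.0 where humans marked a token as relevant
    """
    embeddings = embeddings.clone().requires_grad_(True)
    logits = model(embeddings)                     # model assumed to map embeddings to logits
    task_loss = F.cross_entropy(logits, labels)

    # Input-gradient attribution per token: norm of d(logit_gold)/d(embedding).
    gold_logits = logits.gather(1, labels.unsqueeze(1)).sum()
    grads, = torch.autograd.grad(gold_logits, embeddings, create_graph=True)
    attribution = grads.norm(dim=-1)               # (batch, seq_len)
    attribution = attribution / (attribution.sum(dim=1, keepdim=True) + 1e-8)
    target = rationale_mask / (rationale_mask.sum(dim=1, keepdim=True) + 1e-8)

    explanation_loss = F.mse_loss(attribution, target)
    return task_loss + lam * explanation_loss
```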
- Self-Training with Pseudo-Label Scorer for Aspect Sentiment Quad Prediction [54.23208041792073]
Aspect Sentiment Quad Prediction (ASQP) aims to predict all quads (aspect term, aspect category, opinion term, sentiment polarity) for a given review.
A key challenge in the ASQP task is the scarcity of labeled data, which limits the performance of existing methods.
We propose a self-training framework with a pseudo-label scorer, wherein the scorer assesses the match between each review and its pseudo-labels; a generic sketch of this kind of filtering loop follows this entry.
arXiv Detail & Related papers (2024-06-26T05:30:21Z)
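A generic sketch of the self-training loop with a pseudo-label scorer described above is given below. The helper methods `generate_quads`, `score`, and `fit`, as well as the confidence threshold, are hypothetical placeholders; the paper's scorer and training procedure differ in their details.

```python
# Generic sketch of self-training with a pseudo-label scorer, assuming
# hypothetical helpers: model.generate_quads(review) produces candidate
# (aspect, category, opinion, polarity) quads, and scorer.score(review, quads)
# returns a confidence in [0, 1] that the quads match the review.
def self_train(model, scorer, labeled_data, unlabeled_reviews,
               threshold=0.8, rounds=3):
    train_set = list(labeled_data)
    for _ in range(rounds):
        pseudo_labeled = []
        for review in unlabeled_reviews:
            quads = model.generate_quads(review)        # pseudo-label the review
            confidence = scorer.score(review, quads)    # assess review/label match
            if confidence >= threshold:                 # keep only reliable pairs
                pseudo_labeled.append((review, quads))
        # Retrain on gold data plus the filtered pseudo-labeled examples.
        model.fit(train_set + pseudo_labeled)
    return model
```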
- Towards Effective Evaluations and Comparisons for LLM Unlearning Methods [97.2995389188179]
This paper seeks to refine the evaluation of machine unlearning for large language models.
It addresses two key challenges -- the robustness of evaluation metrics and the trade-offs between competing goals.
arXiv Detail & Related papers (2024-06-13T14:41:00Z)
- Are Objective Explanatory Evaluation metrics Trustworthy? An Adversarial Analysis [12.921307214813357]
The paper proposes a novel explanatory technique called SHifted Adversaries using Pixel Elimination (SHAPE).
We show that SHAPE is, in fact, an adversarial explanation that fools the causal metrics employed to measure the robustness and reliability of popular importance-based visual XAI methods.
arXiv Detail & Related papers (2024-06-12T02:39:46Z)
- Towards a Unified Framework for Evaluating Explanations [0.6138671548064356]
We argue that explanations serve as mediators between models and stakeholders, whether for intrinsically interpretable models or opaque black-box models.
We illustrate these criteria, as well as specific evaluation methods, using examples from an ongoing study of an interpretable neural network for predicting a particular learner behavior.
arXiv Detail & Related papers (2024-05-22T21:49:28Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- On the Definition of Appropriate Trust and the Tools that Come with it [0.0]
This paper starts with the definitions of appropriate trust from the literature.
It compares the definitions with model performance evaluation, showing the strong similarities between appropriate trust and model performance evaluation.
The paper offers several straightforward evaluation methods for different aspects of user performance, including suggesting a method for measuring uncertainty and appropriate trust in regression.
arXiv Detail & Related papers (2023-09-21T09:52:06Z)
- Goodhart's Law Applies to NLP's Explanation Benchmarks [57.26445915212884]
We critically examine two sets of metrics: the ERASER metrics (comprehensiveness and sufficiency) and the EVAL-X metrics.
We show that we can inflate a model's comprehensiveness and sufficiency scores dramatically without altering its predictions or explanations on in-distribution test inputs.
Our results raise doubts about the ability of current metrics to guide explainability research, underscoring the need for a broader reassessment of what precisely these metrics are intended to capture; a minimal sketch of the two ERASER metrics follows this entry.
arXiv Detail & Related papers (2023-08-28T03:03:03Z)
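The two ERASER metrics named in the Goodhart's Law entry above have standard definitions: comprehensiveness measures how much the predicted-class probability drops when the rationale tokens are removed, and sufficiency measures how close the rationale alone gets to the full-input probability. A minimal sketch follows; the `predict_proba` interface is a hypothetical placeholder.

```python
# Minimal sketch of the ERASER faithfulness metrics, assuming a hypothetical
# classifier interface predict_proba(tokens) -> dict mapping class -> probability.
def comprehensiveness(predict_proba, tokens, rationale_idx, predicted_class):
    """p(y|x) - p(y|x without rationale tokens): high if the rationale mattered."""
    full = predict_proba(tokens)[predicted_class]
    without = [t for i, t in enumerate(tokens) if i not in set(rationale_idx)]
    return full - predict_proba(without)[predicted_class]


def sufficiency(predict_proba, tokens, rationale_idx, predicted_class):
    """p(y|x) - p(y|rationale only): near zero if the rationale alone suffices."""
    full = predict_proba(tokens)[predicted_class]
    only = [t for i, t in enumerate(tokens) if i in set(rationale_idx)]
    return full - predict_proba(only)[predicted_class]
```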
- Empirical Estimates on Hand Manipulation are Recoverable: A Step Towards Individualized and Explainable Robotic Support in Everyday Activities [80.37857025201036]
A key challenge for robotic systems is to figure out the behavior of another agent.
Drawing correct inferences is especially challenging when (confounding) factors are not controlled experimentally.
We propose equipping robots with the necessary tools to conduct observational studies on people.
arXiv Detail & Related papers (2022-01-27T22:15:56Z)
- Evaluating the Safety of Deep Reinforcement Learning Models using Semi-Formal Verification [81.32981236437395]
We present a semi-formal verification approach for decision-making tasks based on interval analysis.
Our method obtains results comparable to those of formal verifiers on standard benchmarks.
Our approach allows safety properties of decision-making models to be evaluated efficiently in practical applications; a minimal interval-propagation sketch follows this entry.
arXiv Detail & Related papers (2020-10-19T11:18:06Z)
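As a rough illustration of the interval-analysis idea in the entry above, here is a minimal sketch of interval bound propagation through a small fully connected ReLU network, used to check a safety property over a whole box of input states. The NumPy implementation, the layer format, and the brake-versus-accelerate property are illustrative assumptions, not the paper's method.

```python
# Minimal interval bound propagation (IBP) sketch, assuming a small
# fully connected ReLU policy network given as a list of (W, b) layers.
# For an input box [lo, hi], it returns element-wise bounds on the outputs,
# which can then be used to check a safety property over the whole box.
import numpy as np


def propagate_intervals(layers, lo, hi):
    """Propagate the box [lo, hi] through (W, b) layers, ReLU on hidden layers."""
    for i, (W, b) in enumerate(layers):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lo = W_pos @ lo + W_neg @ hi + b   # lower bound of W x + b over the box
        new_hi = W_pos @ hi + W_neg @ lo + b   # upper bound of W x + b over the box
        if i < len(layers) - 1:                # ReLU on hidden layers only
            new_lo, new_hi = np.maximum(new_lo, 0.0), np.maximum(new_hi, 0.0)
        lo, hi = new_lo, new_hi
    return lo, hi


# Hypothetical safety check: action 0 ("brake") must dominate action 1
# ("accelerate") for every state in the box. Sound but incomplete:
# True proves the property, False only means it was not proved with these bounds.
def always_prefers_action0(layers, state_lo, state_hi):
    out_lo, out_hi = propagate_intervals(layers, state_lo, state_hi)
    return out_lo[0] > out_hi[1]
```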
- Foreseeing the Benefits of Incidental Supervision [83.08441990812636]
This paper studies whether we can, in a single framework, quantify the benefits of various types of incidental signals for a given target task without going through experiments.
We propose a unified PAC-Bayesian motivated informativeness measure, PABI, that characterizes the uncertainty reduction provided by incidental supervision signals.
arXiv Detail & Related papers (2020-06-09T20:59:42Z)
- What's a Good Prediction? Challenges in evaluating an agent's knowledge [0.9281671380673306]
We show a conflict between the accuracy and the usefulness of general knowledge.
We propose an alternative evaluation approach that arises continually in the online continual learning setting.
This paper contributes a first look at evaluating predictions through their use.
arXiv Detail & Related papers (2020-01-23T21:44:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.