strideSEA: A STRIDE-centric Security Evaluation Approach
- URL: http://arxiv.org/abs/2503.19030v1
- Date: Mon, 24 Mar 2025 18:00:17 GMT
- Title: strideSEA: A STRIDE-centric Security Evaluation Approach
- Authors: Alvi Jawad, Jason Jaskolka, Ashraf Matrawy, Mohamed Ibnkahla
- Abstract summary: strideSEA integrates STRIDE as the central classification scheme into the security activities of threat modeling, attack scenario analysis, risk analysis, and countermeasure recommendation.
The application of strideSEA is demonstrated in a real-world online immunization system case study.
- Score: 1.996354642790599
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Microsoft's STRIDE methodology is at the forefront of threat modeling, supporting the increasingly critical quality attribute of security in software-intensive systems. However, the general consensus is that in a comprehensive security evaluation process the STRIDE classification is useful only for threat elicitation, which isolates threat modeling from the other security evaluation activities involved in a secure software development life cycle (SDLC). We present strideSEA, a STRIDE-centric Security Evaluation Approach that integrates STRIDE as the central classification scheme into the security activities of threat modeling, attack scenario analysis, risk analysis, and countermeasure recommendation that are conducted alongside software engineering activities in secure SDLCs. The application of strideSEA is demonstrated in a real-world online immunization system case study. Using STRIDE as a single unifying thread, we bind existing security evaluation approaches in the four security activities of strideSEA to analyze (1) threats using the Microsoft threat modeling tool, (2) attack scenarios using attack trees, and (3) systemic risk using NASA's defect detection and prevention (DDP) technique, and to (4) recommend countermeasures based on their effectiveness in reducing the most critical risks, again using DDP. The results include a detailed quantitative assessment of the security of the online immunization system and a clear definition of the role and advantages of integrating STRIDE in the evaluation process. Overall, the unified approach in strideSEA enables a more structured security evaluation process that eases the identification and recommendation of countermeasures, thereby supporting security requirements elicitation and design considerations and informing the software development life cycle of future software-based information systems.
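The quantitative backbone the abstract describes — threats classified by STRIDE, scored as risk, and countermeasures ranked by how much of the most critical risk they remove — can be summarized in a short sketch. The Python below is an illustrative reading of the abstract, not the paper's tooling: the threat names, scores, and the simple risk = likelihood × impact scoring are hypothetical stand-ins for the DDP calculations.

```python
from dataclasses import dataclass, field
from enum import Enum

# The six STRIDE categories: Spoofing, Tampering, Repudiation,
# Information disclosure, Denial of service, Elevation of privilege.
class Stride(Enum):
    SPOOFING = "S"
    TAMPERING = "T"
    REPUDIATION = "R"
    INFO_DISCLOSURE = "I"
    DENIAL_OF_SERVICE = "D"
    ELEVATION = "E"

@dataclass
class Threat:
    name: str
    category: Stride
    likelihood: float  # 0..1, chance the threat materializes (hypothetical)
    impact: float      # 0..1, severity if it does (hypothetical)

    @property
    def risk(self) -> float:
        # DDP-style criticality: risk = likelihood * impact
        return self.likelihood * self.impact

@dataclass
class Countermeasure:
    name: str
    # effectiveness[threat name] = fraction of that threat's risk removed (0..1)
    effectiveness: dict[str, float] = field(default_factory=dict)

    def risk_reduction(self, threats: list[Threat]) -> float:
        # Rank countermeasures by the total risk they remove, DDP-style
        return sum(self.effectiveness.get(t.name, 0.0) * t.risk for t in threats)

# Invented threats for an online immunization system, for illustration only
threats = [
    Threat("forged patient login", Stride.SPOOFING, likelihood=0.6, impact=0.8),
    Threat("tampered vaccination record", Stride.TAMPERING, likelihood=0.3, impact=0.9),
    Threat("record exposure in transit", Stride.INFO_DISCLOSURE, likelihood=0.5, impact=0.7),
]

countermeasures = [
    Countermeasure("multi-factor authentication", {"forged patient login": 0.9}),
    Countermeasure("TLS everywhere", {"record exposure in transit": 0.95}),
    Countermeasure("signed audit log", {"tampered vaccination record": 0.7}),
]

# Recommend countermeasures in order of total risk removed
for cm in sorted(countermeasures, key=lambda c: c.risk_reduction(threats), reverse=True):
    print(f"{cm.name}: removes {cm.risk_reduction(threats):.3f} risk units")
```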
Related papers
- AILuminate: Introducing v1.0 of the AI Risk and Reliability Benchmark from MLCommons [62.374792825813394]
This paper introduces AILuminate v1.0, the first comprehensive industry-standard benchmark for assessing AI-product risk and reliability.
The benchmark evaluates an AI system's resistance to prompts designed to elicit dangerous, illegal, or undesirable behavior in 12 hazard categories.
arXiv Detail & Related papers (2025-02-19T05:58:52Z)
- Safeguarding Virtual Healthcare: A Novel Attacker-Centric Model for Data Security and Privacy [3.537571223616615]
Remote healthcare delivery has introduced significant security and privacy risks to protected health information (PHI).
This study investigates the root causes of such security incidents and introduces the Attacker-Centric Approach (ACA).
ACA addresses limitations in existing threat models and regulatory frameworks by adopting a holistic attacker-focused perspective.
arXiv Detail & Related papers (2024-12-18T02:21:53Z)
- Resilient Cloud cluster with DevSecOps security model, automates a data analysis, vulnerability search and risk calculation [0.0]
The article presents the main methods of deploying web applications and ways to increase the level of information security at all stages of product development.
The cloud cluster was deployed using Terraform and a Jenkins pipeline, which checks program code for vulnerabilities.
The algorithm for calculating risk and losses is based on statistical data and the concept of the FAIR information risk assessment methodology.
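A FAIR-style calculation of the kind this summary alludes to typically simulates annualized loss as loss event frequency times loss magnitude, each drawn from a calibrated range. The sketch below illustrates only that concept; it is not the article's algorithm, and every range in it is invented.

```python
import random
import statistics

def simulate_annual_loss(n_trials: int = 10_000, seed: int = 42) -> list[float]:
    """FAIR-style Monte Carlo: annual loss = loss event frequency * loss magnitude.

    Frequency and magnitude are drawn from triangular (low, high, mode) ranges,
    a common stand-in for calibrated expert estimates. All ranges are hypothetical.
    """
    rng = random.Random(seed)
    losses = []
    for _ in range(n_trials):
        lef = rng.triangular(0.5, 8.0, 2.0)           # loss events per year
        lm = rng.triangular(5_000, 250_000, 40_000)   # loss per event, USD
        losses.append(lef * lm)
    return losses

losses = sorted(simulate_annual_loss())
print(f"median annual loss: ${statistics.median(losses):,.0f}")
print(f"95th percentile:    ${losses[int(0.95 * len(losses))]:,.0f}")
```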
arXiv Detail & Related papers (2024-12-15T13:11:48Z)
- Defining and Evaluating Physical Safety for Large Language Models [62.4971588282174]
Large Language Models (LLMs) are increasingly used to control robotic systems such as drones.
However, their potential to cause physical threats and harm in real-world applications remains unexplored.
We classify the physical safety risks of drones into four categories: (1) human-targeted threats, (2) object-targeted threats, (3) infrastructure attacks, and (4) regulatory violations.
arXiv Detail & Related papers (2024-11-04T17:41:25Z)
- Building a Cybersecurity Risk Metamodel for Improved Method and Tool Integration [0.38073142980732994]
We report on our experience in applying a model-driven approach to the initial risk analysis step in connection with later security testing.
Our work relies on a common metamodel, which is used to map, synchronise, and ensure information traceability across different tools.
arXiv Detail & Related papers (2024-09-12T10:18:26Z)
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Mapping LLM Security Landscapes: A Comprehensive Stakeholder Risk Assessment Proposal [0.0]
We propose a risk assessment process using established tools such as the risk rating methodology used for traditional systems (see the sketch below).
We conduct scenario analysis to identify potential threat agents and map the dependent system components against vulnerability factors.
We also map threats against three key stakeholder groups.
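The "risk rating methodology used for traditional systems" is plausibly the OWASP Risk Rating Methodology; assuming so, a minimal sketch looks like the following, with likelihood and impact each averaged from 0–9 factors and the two levels combined into an overall severity. The factor names and scores are illustrative, not taken from the paper.

```python
from statistics import mean

def level(score: float) -> str:
    # OWASP-style bucketing of a 0-9 factor average
    return "LOW" if score < 3 else "MEDIUM" if score < 6 else "HIGH"

def rate(threat_agent: dict[str, int], vulnerability: dict[str, int],
         impact: dict[str, int]) -> str:
    """Likelihood averages the threat-agent and vulnerability factors;
    impact averages its own factors; a matrix yields overall severity."""
    likelihood = mean({**threat_agent, **vulnerability}.values())
    imp = mean(impact.values())
    matrix = {  # (likelihood level, impact level) -> overall severity
        ("LOW", "LOW"): "NOTE", ("LOW", "MEDIUM"): "LOW", ("LOW", "HIGH"): "MEDIUM",
        ("MEDIUM", "LOW"): "LOW", ("MEDIUM", "MEDIUM"): "MEDIUM", ("MEDIUM", "HIGH"): "HIGH",
        ("HIGH", "LOW"): "MEDIUM", ("HIGH", "MEDIUM"): "HIGH", ("HIGH", "HIGH"): "CRITICAL",
    }
    return matrix[(level(likelihood), level(imp))]

# Hypothetical scoring for a prompt-injection scenario against an LLM system
severity = rate(
    threat_agent={"skill": 6, "motive": 7, "opportunity": 8},
    vulnerability={"ease_of_discovery": 9, "ease_of_exploit": 7},
    impact={"confidentiality": 7, "integrity": 4, "reputation": 6},
)
print(severity)  # prints "HIGH"
```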
arXiv Detail & Related papers (2024-03-20T05:17:22Z)
- The Art of Defending: A Systematic Evaluation and Analysis of LLM Defense Strategies on Safety and Over-Defensiveness [56.174255970895466]
Large Language Models (LLMs) play an increasingly pivotal role in natural language processing applications.
This paper presents the Safety and Over-Defensiveness Evaluation (SODE) benchmark.
arXiv Detail & Related papers (2023-12-30T17:37:06Z)
- Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z)
- Towards Safer Generative Language Models: A Survey on Safety Risks, Evaluations, and Improvements [76.80453043969209]
This survey presents a framework for safety research pertaining to large models.
We begin by introducing safety issues of wide concern, then delve into safety evaluation methods for large models.
We explore the strategies for enhancing large model safety from training to deployment.
arXiv Detail & Related papers (2023-02-18T09:32:55Z)