Towards Understanding and Applying Security Assurance Cases for Automotive Systems
- URL: http://arxiv.org/abs/2409.04474v1
- Date: Thu, 5 Sep 2024 12:34:23 GMT
- Title: Towards Understanding and Applying Security Assurance Cases for Automotive Systems
- Authors: Mazen Mohamad,
- Abstract summary: Security Assurance Cases (SAC) are structured bodies of arguments and evidence used to reason about security properties of a certain artefact.
SAC are gaining focus in the automotive domain as the need for security assurance is growing.
We created CASCADE, an approach for creating SAC which have integrated quality assurance.
- Score: 0.2417342411475111
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Security Assurance Cases (SAC) are structured bodies of arguments and evidence used to reason about the security properties of a certain artefact. SAC are gaining focus in the automotive domain as the need for security assurance grows, driven by software becoming a main part of vehicles. Market demands for new services and products in the domain require connectivity and hence raise security concerns. Regulators and standardisation bodies have recently started to require a structured approach to security assurance of products in the automotive domain, and automotive companies have therefore started to study ways to create and maintain these cases, as well as to adopt them in their current ways of working. To facilitate the adoption of SAC in the automotive domain, we created CASCADE, an approach for creating SAC that have integrated quality assurance and are compliant with the requirements of ISO/SAE-21434, the upcoming cybersecurity standard for automotive systems. CASCADE was created by conducting a design science research study in two iterative cycles. The design decisions of CASCADE are based on insights from a qualitative research study, comprising a workshop, a survey, and one-to-one interviews conducted in collaboration with our industrial partners about the needs and drivers of SAC work in industry, and a systematic literature review in which we identified gaps between the industrial needs and the state of the art. The evaluation of CASCADE was done with the help of security experts from a large automotive OEM. It showed that CASCADE is suitable for integration into industrial product development processes. Additionally, our results show that the elements of CASCADE align well with the way of working at the company, and that the approach has the potential to scale to cover the requirements and needs of the company, with its large organisation and complex products.
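As a minimal illustration of the claim-argument-evidence structure that security assurance cases build on, the sketch below models a claim tree in Python. The class names, the example claims, and the evidence labels are illustrative assumptions only; they do not reproduce CASCADE or any structure defined in the paper.
```python
# Minimal sketch of a claim-argument-evidence tree, in the spirit of
# security assurance cases (SAC). Names and the example case are
# hypothetical; they are not taken from CASCADE.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Evidence:
    description: str  # e.g. a test report, pentest result, or review record


@dataclass
class Claim:
    statement: str                                   # security property being argued
    sub_claims: List["Claim"] = field(default_factory=list)
    evidence: List[Evidence] = field(default_factory=list)

    def is_supported(self) -> bool:
        """A claim counts as supported if it has direct evidence, or if it is
        decomposed into sub-claims that are all supported themselves."""
        if self.sub_claims:
            return all(c.is_supported() for c in self.sub_claims)
        return bool(self.evidence)


# Hypothetical example: a top-level claim decomposed into two sub-claims.
case = Claim(
    statement="The telematics unit is acceptably secure",
    sub_claims=[
        Claim(
            statement="External interfaces are protected against spoofing",
            evidence=[Evidence("Penetration test report PT-042")],
        ),
        Claim(statement="Firmware updates are authenticated"),  # no evidence yet
    ],
)

print(case.is_supported())  # False: one sub-claim still lacks evidence
```
A simple completeness check like this also hints at why the paper couples SAC with quality assurance: a case is only as convincing as the evidence attached to its leaf claims.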
Related papers
- Evaluating the Role of Security Assurance Cases in Agile Medical Device Development [2.9790563467999247]
Cybersecurity issues in medical devices threaten patient safety and can cause harm if exploited.
Standards and regulations require vendors of such devices to provide an assessment of the cybersecurity risks as well as a description of their mitigation.
Security assurance cases (SACs) capture these elements as a structured argument.
arXiv Detail & Related papers (2024-07-10T14:34:53Z)
- Cross-Modality Safety Alignment [73.8765529028288]
We introduce a novel safety alignment challenge called Safe Inputs but Unsafe Output (SIUO) to evaluate cross-modality safety alignment.
To empirically investigate this problem, we developed the SIUO, a cross-modality benchmark encompassing 9 critical safety domains, such as self-harm, illegal activities, and privacy violations.
Our findings reveal substantial safety vulnerabilities in both closed- and open-source LVLMs, underscoring the inadequacy of current models to reliably interpret and respond to complex, real-world scenarios.
arXiv Detail & Related papers (2024-06-21T16:14:15Z)
- Managing Security Evidence in Safety-Critical Organizations [10.905169282633256]
This paper presents a study on the maturity of managing security evidence in safety-critical organizations.
We find that the current maturity of managing security evidence is insufficient for the increasing requirements set by certification authorities and standardization bodies.
One part of the reason is educational gaps, the other a lack of processes.
arXiv Detail & Related papers (2024-04-26T11:30:34Z)
- Engineering Safety Requirements for Autonomous Driving with Large Language Models [0.6699222582814232]
Large Language Models (LLMs) can play a key role in automatically refining and decomposing requirements after each update.
This study proposes a prototype of a pipeline of prompts and LLMs that receives an item definition and outputs solutions in the form of safety requirements.
arXiv Detail & Related papers (2024-03-24T20:40:51Z)
- Service Level Agreements and Security SLA: A Comprehensive Survey [51.000851088730684]
This survey paper identifies the state of the art, covering concepts, approaches, and open problems of SLA management.
It contributes by carrying out a comprehensive review and covering the gap between the analyses proposed in existing surveys and the most recent literature on this topic.
It proposes a novel classification criterion to organize the analysis based on SLA life cycle phases.
arXiv Detail & Related papers (2024-01-31T12:33:41Z)
- A Survey and Comparative Analysis of Security Properties of CAN Authentication Protocols [92.81385447582882]
The Controller Area Network (CAN) bus leaves in-vehicle communications inherently non-secure.
This paper reviews and compares the 15 most prominent authentication protocols for the CAN bus.
We evaluate protocols based on essential operational criteria that contribute to ease of implementation.
arXiv Detail & Related papers (2024-01-19T14:52:04Z)
- An Adaptable Approach for Successful SIEM Adoption in Companies [0.3441021278275805]
This paper develops a holistic procedure model for implementing SIEM systems in corporations.
During the validation phase of the study, the procedure model was verified to be applicable.
arXiv Detail & Related papers (2023-08-02T10:28:08Z)
- Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z)
- Trustworthy, responsible, ethical AI in manufacturing and supply chains: synthesis and emerging research questions [59.34177693293227]
We explore the applicability of responsible, ethical, and trustworthy AI within the context of manufacturing.
We then use a broadened adaptation of a machine learning lifecycle to discuss, through the use of illustrative examples, how each step may result in a given AI trustworthiness concern.
arXiv Detail & Related papers (2023-05-19T10:43:06Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Ergo, SMIRK is Safe: A Safety Case for a Machine Learning Component in a Pedestrian Automatic Emergency Brake System [5.571920596648914]
Integration of Machine Learning (ML) components in critical applications introduces novel challenges for software certification and verification.
New safety standards and technical guidelines are under development to support the safety of ML-based systems.
We report results from an industry-academia collaboration on safety assurance of SMIRK, an ML-based pedestrian automatic emergency braking demonstrator.
arXiv Detail & Related papers (2022-04-16T21:28:50Z)