Managing Security Evidence in Safety-Critical Organizations
- URL: http://arxiv.org/abs/2404.17332v1
- Date: Fri, 26 Apr 2024 11:30:34 GMT
- Title: Managing Security Evidence in Safety-Critical Organizations
- Authors: Mazen Mohamad, Jan-Philipp Steghöfer, Eric Knauss, Riccardo Scandariato
- Abstract summary: This paper presents a study on the maturity of managing security evidence in safety-critical organizations.
We find that the current maturity of managing security evidence is insufficient for the increasing requirements set by certification authorities and standardization bodies.
One part of the reason is educational gaps; the other is a lack of processes.
- Score: 10.905169282633256
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the increasing prevalence of open and connected products, cybersecurity has become a serious issue in safety-critical domains such as the automotive industry. As a result, regulatory bodies have become more stringent in their requirements for cybersecurity, necessitating security assurance for products developed in these domains. In response, companies have implemented new or modified processes to incorporate security into their product development lifecycle, resulting in a large amount of evidence being created to support claims about the achievement of a certain level of security. However, managing evidence is not a trivial task, particularly for complex products and systems. This paper presents a qualitative interview study conducted in six companies on the maturity of managing security evidence in safety-critical organizations. We find that the current maturity of managing security evidence is insufficient for the increasing requirements set by certification authorities and standardization bodies. Organizations currently fail to identify relevant artifacts as security evidence and to manage this evidence on an organizational level. One part of the reason is educational gaps; the other is a lack of processes. The impact of AI on the management of security evidence is still an open question.
Related papers
- SoK: Identifying Limitations and Bridging Gaps of Cybersecurity Capability Maturity Models (CCMMs) [1.2016264781280588]
Cybersecurity Capability Maturity Models (CCMMs) emerge as pivotal tools in enhancing organisational cybersecurity posture.
CCMMs provide a structured framework to guide organisations in assessing their current cybersecurity capabilities, identifying critical gaps, and prioritising improvements.
However, the full potential of CCMMs is often not realised due to inherent limitations within the models and challenges encountered during their implementation and adoption processes.
arXiv Detail & Related papers (2024-08-28T21:00:20Z) - Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress? [59.96471873997733]
We propose an empirical foundation for developing more meaningful safety metrics and define AI safety in a machine learning research context.
We aim to provide a more rigorous framework for AI safety research, advancing the science of safety evaluations and clarifying the path towards measurable progress.
arXiv Detail & Related papers (2024-07-31T17:59:24Z) - Critical Infrastructure Security: Penetration Testing and Exploit Development Perspectives [0.0]
This paper reviews literature on critical infrastructure security, focusing on penetration testing and exploit development.
Findings of this paper reveal inherent vulnerabilities in critical infrastructure and sophisticated threats posed by cyber adversaries.
The review underscores the necessity of continuous and proactive security assessments.
arXiv Detail & Related papers (2024-07-24T13:17:07Z) - Cross-Modality Safety Alignment [73.8765529028288]
We introduce a novel safety alignment challenge called Safe Inputs but Unsafe Output (SIUO) to evaluate cross-modality safety alignment.
To empirically investigate this problem, we developed SIUO, a cross-modality benchmark encompassing 9 critical safety domains, such as self-harm, illegal activities, and privacy violations.
Our findings reveal substantial safety vulnerabilities in both closed- and open-source LVLMs, underscoring the inadequacy of current models to reliably interpret and respond to complex, real-world scenarios.
arXiv Detail & Related papers (2024-06-21T16:14:15Z) - SoK: Comprehensive Security Overview, Challenges, and Future Directions of Voice-Controlled Systems [10.86045604075024]
The integration of Voice Control Systems (VCS) into smart devices accentuates the importance of their security.
Current research has uncovered numerous vulnerabilities in VCS, presenting significant risks to user privacy and security.
This study introduces a hierarchical model structure for VCS, providing a novel lens for categorizing and analyzing existing literature in a systematic manner.
We classify attacks based on their technical principles and thoroughly evaluate various attributes, such as their methods, targets, vectors, and behaviors.
arXiv Detail & Related papers (2024-05-27T12:18:46Z) - Enhancing Energy Sector Resilience: Integrating Security by Design Principles [20.817229569050532]
Security by design (SbD) is a concept for developing and maintaining systems that are impervious to security attacks.
This document presents the security requirements for the implementation of SbD in industrial control systems.
arXiv Detail & Related papers (2024-02-18T11:04:22Z) - A Survey and Comparative Analysis of Security Properties of CAN Authentication Protocols [92.81385447582882]
The Controller Area Network (CAN) bus provides no built-in security, leaving in-vehicle communications inherently insecure.
This paper reviews and compares the 15 most prominent authentication protocols for the CAN bus.
We evaluate protocols based on essential operational criteria that contribute to ease of implementation.
arXiv Detail & Related papers (2024-01-19T14:52:04Z) - sec-certs: Examining the security certification practice for better vulnerability mitigation [0.2886273197127056]
Critical vulnerabilities are discovered even in certified products with high assurance levels.
Assessing which certified products are impacted by such vulnerabilities is complicated due to the large amount of unstructured certification-related data.
We trained unsupervised models to learn which vulnerabilities from NIST's National Vulnerability Database impact existing certified products.
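The paper does not spell out its models here, but the core task, linking an unstructured CVE description to the certified product it affects, can be sketched with a simple unsupervised text-similarity baseline. Everything below (product names, certificate IDs, the bag-of-words cosine approach) is an illustrative assumption, not the authors' actual method:

```python
# Hypothetical sketch: rank certified products by textual similarity to an
# NVD CVE description using bag-of-words cosine similarity (stdlib only).
# All names and data are made up for illustration.
import math
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy "certification database": certificate ID -> product name.
certified_products = {
    "cert-001": "Acme Secure Element v2 cryptographic module",
    "cert-002": "Example Router OS firewall appliance",
}
cve_description = "Buffer overflow in Acme Secure Element v2 cryptographic module"

cve_vec = Counter(tokenize(cve_description))
scores = {
    cert_id: cosine(cve_vec, Counter(tokenize(name)))
    for cert_id, name in certified_products.items()
}
best = max(scores, key=scores.get)
print(best)  # cert-001 scores highest for this description
```

A real pipeline would need much richer features (TF-IDF or embeddings, CPE identifiers, certificate reference parsing) to cope with the volume and noise of real certification documents; this sketch only shows the matching idea.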
arXiv Detail & Related papers (2023-11-29T12:55:16Z) - Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z) - Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.