A PRISMA-Driven Bibliometric Analysis of the Scientific Literature on Assurance Case Patterns
- URL: http://arxiv.org/abs/2407.04961v1
- Date: Sat, 6 Jul 2024 05:00:49 GMT
- Title: A PRISMA-Driven Bibliometric Analysis of the Scientific Literature on Assurance Case Patterns
- Authors: Oluwafemi Odu, Alvine Boaye Belle, Song Wang, Kimya Khakzad Shahandashti
- Abstract summary: Assurance cases can be used to prevent system failure.
They are structured arguments that support arguing for and communicating the requirements of various safety-critical systems.
- Score: 7.930875992631788
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Justifying the correct implementation of the non-functional requirements (e.g., safety, security) of mission-critical systems is crucial to prevent system failure. The latter could have severe consequences, such as the death of people and financial losses. Assurance cases can be used to prevent system failure. They are structured arguments that allow extensively arguing for and communicating various safety-critical systems' requirements, as well as checking the compliance of such systems with industrial standards to support their certification. Still, the creation of assurance cases is usually manual, error-prone, and time-consuming. Moreover, it may involve numerous alterations as the system evolves. To overcome these bottlenecks, existing approaches usually promote the reuse of common structured evidence-based arguments (i.e., patterns) to aid the creation of assurance cases. To gain insights into the advancement of research on assurance case patterns, we relied on SEGRESS to conduct a bibliometric analysis of 92 primary studies published within the past two decades. This allows capturing the evolutionary trends and patterns characterizing the research in that field. Our findings notably indicate the emergence of new assurance case patterns to support the assurance of ML-enabled systems, which are characterized by their evolving requirements (e.g., cybersecurity and ethics).
Related papers
- CoDefeater: Using LLMs To Find Defeaters in Assurance Cases [4.4398355848251745]
This paper proposes CoDefeater, an automated process to leverage large language models (LLMs) for finding defeaters.
Initial results on two systems show that LLMs can efficiently find known and unforeseen feasible defeaters to support safety analysts.
arXiv Detail & Related papers (2024-07-18T17:16:35Z) - Jailbreaking as a Reward Misspecification Problem [80.52431374743998]
We propose a novel perspective that attributes this vulnerability to reward misspecification during the alignment process.
We introduce a metric ReGap to quantify the extent of reward misspecification and demonstrate its effectiveness and robustness in detecting harmful backdoor prompts.
We present ReMiss, a system for automated red teaming that generates adversarial prompts against various target aligned LLMs.
arXiv Detail & Related papers (2024-06-20T15:12:27Z) - ACCESS: Assurance Case Centric Engineering of Safety-critical Systems [9.388301205192082]
Assurance cases are used to communicate and assess confidence in critical system properties such as safety and security.
In recent years, model-based system assurance approaches have gained popularity to improve the efficiency and quality of system assurance activities.
We show how model-based system assurance cases can trace to heterogeneous engineering artifacts.
arXiv Detail & Related papers (2024-03-22T14:29:50Z) - Towards a Framework for Deep Learning Certification in Safety-Critical Applications Using Inherently Safe Design and Run-Time Error Detection [0.0]
We consider real-world problems arising in aviation and other safety-critical areas, and investigate their requirements for a certified model.
We establish a new framework towards deep learning certification based on (i) inherently safe design, and (ii) run-time error detection.
arXiv Detail & Related papers (2024-03-12T11:38:45Z) - Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system with respect to such perturbations.
arXiv Detail & Related papers (2024-02-07T21:58:40Z) - I came, I saw, I certified: some perspectives on the safety assurance of cyber-physical systems [5.9395940943056384]
Execution failure of cyber-physical systems could result in loss of life, severe injuries, large-scale environmental damage, property destruction, and major economic loss.
It is often mandatory to develop compelling assurance cases to support that justification and allow regulatory bodies to certify such systems.
We explore challenges related to such assurance enablers and outline some potential directions that could be explored to tackle them.
arXiv Detail & Related papers (2024-01-30T00:06:16Z) - A PRISMA-driven systematic mapping study on system assurance weakeners [0.8493449152820131]
We aim to initiate the first comprehensive systematic mapping study on assurance weakeners.
We searched for primary studies in five digital libraries and focused on the 2012-2023 publication year range.
Our selection criteria focused on studies addressing assurance weakeners at the modeling level.
arXiv Detail & Related papers (2023-11-14T17:17:16Z) - Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z) - Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [63.18590014127461]
This paper introduces a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We study the feasibility of the resulting robust safety-critical controller.
We then use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z) - No Need to Know Physics: Resilience of Process-based Model-free Anomaly
Detection for Industrial Control Systems [95.54151664013011]
We present a novel framework to generate adversarial spoofing signals that violate physical properties of the system.
We analyze four anomaly detectors published at top security conferences.
arXiv Detail & Related papers (2020-12-07T11:02:44Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.