A Taxonomy of Real-World Defeaters in Safety Assurance Cases
- URL: http://arxiv.org/abs/2502.00238v1
- Date: Sat, 01 Feb 2025 00:38:41 GMT
- Title: A Taxonomy of Real-World Defeaters in Safety Assurance Cases
- Authors: Usman Gohar, Michael C. Hunter, Myra B. Cohen, Robyn R. Lutz
- Abstract summary: The software engineering community could benefit from having a reusable classification of real-world defeaters in software assurance cases. We derived a taxonomy with seven broad categories, laying the groundwork for standardizing the analysis and management of defeaters in safety-critical systems.
- Score: 4.4398355848251745
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rise of cyber-physical systems in safety-critical domains calls for robust risk-evaluation frameworks. Assurance cases, often required by regulatory bodies, are a structured approach to demonstrate that a system meets its safety requirements. However, assurance cases are fraught with challenges, such as incomplete evidence and gaps in reasoning, called defeaters, that can call into question the credibility and robustness of assurance cases. Identifying these defeaters increases confidence in the assurance case and can prevent catastrophic failures. The search for defeaters in an assurance case, however, is not structured, and there is a need to standardize defeater analysis. The software engineering community thus could benefit from having a reusable classification of real-world defeaters in software assurance cases. In this paper, we conducted a systematic study of literature from the past 20 years. Using open coding, we derived a taxonomy with seven broad categories, laying the groundwork for standardizing the analysis and management of defeaters in safety-critical systems. We provide our artifacts as open source for the community to use and build upon, thus establishing a common framework for understanding defeaters.
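The abstract does not name the seven categories (they are defined in the paper's open-source artifacts), but as a rough, hypothetical sketch of how a reusable defeater record keyed to such a taxonomy might be encoded, with all identifiers illustrative and not taken from the paper:

```python
# Hypothetical sketch only, not the paper's artifact format: the seven
# taxonomy categories are not named in the abstract, so `category` is
# left as a free string here.
from dataclasses import dataclass

@dataclass
class Defeater:
    claim_id: str        # the assurance-case claim being challenged
    description: str     # why the evidence or reasoning may not hold
    category: str        # one of the taxonomy's seven broad categories
    resolved: bool = False

# Example: recording a gap in test evidence against a hypothetical claim G1.2.
d = Defeater(
    claim_id="G1.2",
    description="Test coverage excludes degraded-sensor operating modes.",
    category="(one of the seven categories)",
)
```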
Related papers
- Towards Trustworthy GUI Agents: A Survey [64.6445117343499]
This survey examines the trustworthiness of GUI agents in five critical dimensions.
We identify major challenges, such as vulnerability to adversarial attacks and cascading failure modes in sequential decision-making.
As GUI agents become more widespread, establishing robust safety standards and responsible development practices is essential.
arXiv Detail & Related papers (2025-03-30T13:26:00Z) - AILuminate: Introducing v1.0 of the AI Risk and Reliability Benchmark from MLCommons [62.374792825813394]
This paper introduces AILuminate v1.0, the first comprehensive industry-standard benchmark for assessing AI-product risk and reliability.
The benchmark evaluates an AI system's resistance to prompts designed to elicit dangerous, illegal, or undesirable behavior in 12 hazard categories.
arXiv Detail & Related papers (2025-02-19T05:58:52Z) - SafetyAnalyst: Interpretable, transparent, and steerable safety moderation for AI behavior [56.10557932893919]
We present SafetyAnalyst, a novel AI safety moderation framework. Given an AI behavior, SafetyAnalyst uses chain-of-thought reasoning to analyze its potential consequences. It aggregates all harmful and beneficial effects into a harmfulness score using fully interpretable weight parameters (a sketch of this style of aggregation appears after this list).
arXiv Detail & Related papers (2024-10-22T03:38:37Z) - Automating Semantic Analysis of System Assurance Cases using Goal-directed ASP [1.2189422792863451]
We present our approach to enhancing Assurance 2.0 with semantic rule-based analysis capabilities.
We examine the unique semantic aspects of assurance cases, such as logical consistency, adequacy, and indefeasibility.
arXiv Detail & Related papers (2024-08-21T15:22:43Z) - CoDefeater: Using LLMs To Find Defeaters in Assurance Cases [4.4398355848251745]
This paper proposes CoDefeater, an automated process to leverage large language models (LLMs) for finding defeaters.
Initial results on two systems show that LLMs can efficiently find known and unforeseen feasible defeaters to support safety analysts (see the prompt-loop sketch after this list).
arXiv Detail & Related papers (2024-07-18T17:16:35Z) - A PRISMA-Driven Bibliometric Analysis of the Scientific Literature on Assurance Case Patterns [7.930875992631788]
Assurance cases can be used to prevent system failure.
They are structured arguments used to state and communicate the requirements of various safety-critical systems.
arXiv Detail & Related papers (2024-07-06T05:00:49Z) - Cross-Modality Safety Alignment [73.8765529028288]
We introduce a novel safety alignment challenge called Safe Inputs but Unsafe Output (SIUO) to evaluate cross-modality safety alignment.
To empirically investigate this problem, we developed SIUO, a cross-modality benchmark encompassing 9 critical safety domains, such as self-harm, illegal activities, and privacy violations.
Our findings reveal substantial safety vulnerabilities in both closed- and open-source LVLMs, underscoring the inadequacy of current models to reliably interpret and respond to complex, real-world scenarios.
arXiv Detail & Related papers (2024-06-21T16:14:15Z) - Prioritizing Safeguarding Over Autonomy: Risks of LLM Agents for Science [65.77763092833348]
Intelligent agents powered by large language models (LLMs) have demonstrated substantial promise in autonomously conducting experiments and facilitating scientific discoveries across various disciplines.
While their capabilities are promising, these agents also introduce novel vulnerabilities that demand careful consideration for safety.
This paper conducts a thorough examination of vulnerabilities in LLM-based agents within scientific domains, shedding light on potential risks associated with their misuse and emphasizing the need for safety measures.
arXiv Detail & Related papers (2024-02-06T18:54:07Z) - I came, I saw, I certified: some perspectives on the safety assurance of cyber-physical systems [5.9395940943056384]
Execution failure of cyber-physical systems could result in loss of life, severe injuries, large-scale environmental damage, property destruction, and major economic loss.
Justifying the safety of such systems is therefore essential, and it is often mandatory to develop compelling assurance cases to support that justification and allow regulatory bodies to certify them.
We explore challenges related to such assurance enablers and outline some potential directions that could be explored to tackle them.
arXiv Detail & Related papers (2024-01-30T00:06:16Z) - The Last Decade in Review: Tracing the Evolution of Safety Assurance Cases through a Comprehensive Bibliometric Analysis [7.431812376079826]
Safety assurance is of paramount importance across various domains, including automotive, aerospace, and nuclear energy.
The use of safety assurance cases allows for verifying the correctness of a system's capabilities, helping to prevent system failure.
arXiv Detail & Related papers (2023-11-13T17:34:23Z) - Foveate, Attribute, and Rationalize: Towards Physically Safe and Trustworthy AI [76.28956947107372]
Covertly unsafe text is an area of particular interest, as such text may arise from everyday scenarios and is challenging to detect as harmful.
We propose FARM, a novel framework leveraging external knowledge for trustworthy rationale generation in the context of safety.
Our experiments show that FARM obtains state-of-the-art results on the SafeText dataset, improving safety classification accuracy by 5.9% absolute.
arXiv Detail & Related papers (2022-12-19T17:51:47Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
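As referenced in the SafetyAnalyst entry above, here is a minimal sketch of weighted, interpretable aggregation of effects into a harmfulness score. The field names, weights, and sign convention are assumptions for illustration, not the paper's actual parameterization:

```python
# Minimal sketch, not SafetyAnalyst's actual implementation: harmful
# effects add to the score, beneficial effects subtract, and each
# weight is a plain number a reviewer can inspect and adjust.
from dataclasses import dataclass

@dataclass
class Effect:
    description: str
    weight: float   # interpretable, human-auditable magnitude
    harmful: bool   # True for harmful effects, False for beneficial ones

def harmfulness_score(effects: list[Effect]) -> float:
    return sum(e.weight if e.harmful else -e.weight for e in effects)

effects = [
    Effect("could expose private data", weight=1.0, harmful=True),
    Effect("answers a legitimate medical question", weight=0.5, harmful=False),
]
print(harmfulness_score(effects))  # 0.5 -> net harmful on balance
```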
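And for the CoDefeater entry, a hypothetical sketch of an LLM-in-the-loop defeater search. `llm_complete` is a stand-in for any chat-completion client, and the prompt wording is illustrative rather than the paper's actual prompt:

```python
# Hypothetical prompt-loop sketch in the spirit of CoDefeater, not the
# paper's implementation. Plug any LLM client into `llm_complete`.
def llm_complete(prompt: str) -> str:
    raise NotImplementedError("replace with a call to your LLM client")

def find_defeaters(claim: str, evidence: list[str], n: int = 5) -> list[str]:
    prompt = (
        "You are assisting a safety analyst reviewing an assurance case.\n"
        f"Claim: {claim}\n"
        "Evidence:\n"
        + "\n".join(f"- {e}" for e in evidence)
        + f"\nList {n} plausible defeaters: reasons this evidence may fail "
        "to support the claim. One per line, prefixed with '- '."
    )
    # Keep a human in the loop: these are candidates for analyst review,
    # not confirmed defeaters.
    return [line.lstrip("- ").strip()
            for line in llm_complete(prompt).splitlines()
            if line.strip().startswith("-")]
```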