A PRISMA-driven systematic mapping study on system assurance weakeners
- URL: http://arxiv.org/abs/2311.08328v1
- Date: Tue, 14 Nov 2023 17:17:16 GMT
- Title: A PRISMA-driven systematic mapping study on system assurance weakeners
- Authors: Kimya Khakzad Shahandashti, Alvine B. Belle, Timothy C. Lethbridge,
Oluwafemi Odu, Mithila Sivakumar
- Abstract summary: We aim to initiate the first comprehensive systematic mapping study on assurance weakeners.
We searched for primary studies in five digital libraries and focused on the 2012-2023 publication year range.
Our selection criteria focused on studies addressing assurance weakeners at the modeling level.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Context: An assurance case is a structured hierarchy of claims aiming at
demonstrating that a given mission-critical system supports specific
requirements (e.g., safety, security, privacy). The presence of assurance
weakeners (i.e., assurance deficits, logical fallacies) in assurance cases
reflects insufficient evidence, knowledge, or gaps in reasoning. These
weakeners can undermine confidence in assurance arguments, potentially
hindering the verification of mission-critical system capabilities.
Objectives: As a stepping stone for future research on assurance weakeners,
we aim to initiate the first comprehensive systematic mapping study on this
subject. Methods: We followed the well-established PRISMA 2020 and SEGRESS
guidelines to conduct our systematic mapping study. We searched for primary
studies in five digital libraries and focused on the 2012-2023 publication year
range. Our selection criteria focused on studies addressing assurance weakeners
at the modeling level, resulting in the inclusion of 39 primary studies in our
systematic review.
Results: Our systematic mapping study reports a taxonomy (map) that provides
a uniform categorization of assurance weakeners and approaches proposed to
manage them at the modeling level.
Conclusion: Our study findings suggest that the SACM (Structured Assurance
Case Metamodel) -- a standard specified by the OMG (Object Management Group) --
may be the best specification to capture structured arguments and reason about
their potential assurance weakeners.
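The abstract's view of an assurance case as a structured hierarchy of claims, with weakeners arising where evidence or reasoning is missing, can be illustrated with a minimal sketch. The `Claim` class and `find_weakeners` helper below are illustrative assumptions, not part of SACM or any of the surveyed approaches:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A node in an assurance argument: a claim supported by
    sub-claims and/or evidence items (hypothetical model)."""
    text: str
    evidence: list = field(default_factory=list)   # evidence identifiers
    subclaims: list = field(default_factory=list)  # child Claim objects

def find_weakeners(claim, path=()):
    """Return paths to leaf claims with no supporting evidence,
    i.e. potential assurance deficits in the argument."""
    path = path + (claim.text,)
    if not claim.subclaims:
        return [] if claim.evidence else [path]
    deficits = []
    for sub in claim.subclaims:
        deficits.extend(find_weakeners(sub, path))
    return deficits

# Toy argument: the top claim rests on two sub-claims, one unsupported.
top = Claim("System is acceptably safe", subclaims=[
    Claim("Hazard H1 is mitigated", evidence=["test-report-7"]),
    Claim("Hazard H2 is mitigated"),  # no evidence -> assurance deficit
])
print(find_weakeners(top))
# -> [('System is acceptably safe', 'Hazard H2 is mitigated')]
```

A real SACM model distinguishes many more element types (argument packages, artifacts, relationships); this sketch only captures the hierarchy-of-claims idea the abstract describes.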
Related papers
- A PRISMA-Driven Bibliometric Analysis of the Scientific Literature on Assurance Case Patterns [7.930875992631788]
Assurance cases can be used to prevent system failure.
They are structured arguments for stating and justifying the requirements of various safety-critical systems.
arXiv Detail & Related papers (2024-07-06T05:00:49Z) - Unveiling the Misuse Potential of Base Large Language Models via In-Context Learning [61.2224355547598]
Open-sourcing of large language models (LLMs) accelerates application development, innovation, and scientific progress.
Our investigation exposes a critical oversight in the assumption that such openness is safe.
By deploying carefully designed demonstrations, our research demonstrates that base LLMs could effectively interpret and execute malicious instructions.
arXiv Detail & Related papers (2024-04-16T13:22:54Z) - FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outperforms the state of the art on resilient fault prediction benchmarks, with an accuracy of up to 0.958.
arXiv Detail & Related papers (2024-03-26T08:51:23Z) - A Survey on Safe Multi-Modal Learning System [10.914595812695218]
Multimodal learning systems (MMLS) have gained traction for their ability to process and integrate information from diverse modality inputs.
The absence of systematic research into their safety is a significant barrier to progress in this field.
We present the first taxonomy that systematically categorizes and assesses MMLS safety.
arXiv Detail & Related papers (2024-02-08T02:27:13Z) - Measuring Equality in Machine Learning Security Defenses: A Case Study
in Speech Recognition [56.69875958980474]
This work considers approaches to defending learned systems and how security defenses result in performance inequities across different sub-populations.
We find that many proposed methods can cause direct harm, such as false rejection and unequal benefits from robustness training.
We present a comparison of equality between two rejection-based defenses: randomized smoothing and neural rejection, finding randomized smoothing more equitable due to the sampling mechanism for minority groups.
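Randomized smoothing, one of the two rejection-based defenses compared above, can be sketched in a few lines: classify many noise-perturbed copies of the input and return the majority class, rejecting when the vote is not decisive. The `smoothed_predict` helper and toy classifier are illustrative assumptions, not the paper's implementation:

```python
import random

def smoothed_predict(classify, x, sigma=0.25, n=200, threshold=0.7, seed=0):
    """Randomized-smoothing prediction: classify n Gaussian-noised
    copies of x and return the majority class, or None (reject)
    when the top vote share falls below threshold."""
    rng = random.Random(seed)
    votes = {}
    for _ in range(n):
        noisy = [v + rng.gauss(0.0, sigma) for v in x]
        label = classify(noisy)
        votes[label] = votes.get(label, 0) + 1
    top, count = max(votes.items(), key=lambda kv: kv[1])
    return top if count / n >= threshold else None

# Toy classifier: sign of the mean feature value.
toy = lambda v: int(sum(v) / len(v) > 0)
print(smoothed_predict(toy, [1.0] * 8))  # confidently class 1
print(smoothed_predict(toy, [0.0] * 8))  # near the boundary -> often rejected
```

The equity finding above concerns how this sampling mechanism behaves for minority sub-populations; inputs near a decision boundary are rejected more often, which is exactly the behavior the toy boundary case exhibits.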
arXiv Detail & Related papers (2023-02-17T16:19:26Z) - Offline Reinforcement Learning with Instrumental Variables in Confounded
Markov Decision Processes [93.61202366677526]
We study the offline reinforcement learning (RL) in the face of unmeasured confounders.
We propose various policy learning methods with the finite-sample suboptimality guarantee of finding the optimal in-class policy.
arXiv Detail & Related papers (2022-09-18T22:03:55Z) - Integrating Testing and Operation-related Quantitative Evidences in
Assurance Cases to Argue Safety of Data-Driven AI/ML Components [2.064612766965483]
In the future, AI will increasingly find its way into systems that can potentially cause physical harm to humans.
For such safety-critical systems, it must be demonstrated that their residual risk does not exceed what is acceptable.
This paper proposes a more holistic argumentation structure for demonstrating that this target has been achieved.
arXiv Detail & Related papers (2022-02-10T20:35:25Z) - Reliability Assessment and Safety Arguments for Machine Learning
Components in Assuring Learning-Enabled Autonomous Systems [19.65793237440738]
We present an overall assurance framework for Learning-Enabled Systems (LES).
We then introduce a novel model-agnostic Reliability Assessment Model (RAM) for ML classifiers.
We discuss the model assumptions and the inherent challenges of assessing ML reliability uncovered by our RAM.
arXiv Detail & Related papers (2021-11-30T14:39:22Z) - Multi Agent System for Machine Learning Under Uncertainty in Cyber
Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various manufacturing use cases.
Most research has focused on maximising predictive accuracy without addressing the uncertainty associated with it.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria of a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z) - Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in the deep learning-based medical image analysis system.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale, publicly available skin lesion dataset.
arXiv Detail & Related papers (2021-03-07T03:10:32Z) - Adversarial Attacks against Face Recognition: A Comprehensive Study [3.766020696203255]
Face recognition (FR) systems have demonstrated outstanding verification performance.
Recent studies show that (deep) FR systems exhibit an intriguing vulnerability to imperceptible or perceptible but natural-looking adversarial input images.
arXiv Detail & Related papers (2020-07-22T22:46:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.