Elevating Software Trust: Unveiling and Quantifying the Risk Landscape
- URL: http://arxiv.org/abs/2408.02876v2
- Date: Tue, 24 Dec 2024 01:22:23 GMT
- Title: Elevating Software Trust: Unveiling and Quantifying the Risk Landscape
- Authors: Sarah Ali Siddiqui, Chandra Thapa, Rayne Holland, Wei Shao, Seyit Camtepe,
- Abstract summary: We propose a risk assessment framework called SAFER (Software Analysis Framework for Evaluating Risk). This framework is based on the necessity of a dynamic, data-driven, and adaptable process to quantify security risk in the software supply chain. The results suggest that SAFER mitigates subjectivity and yields dynamic, data-driven weights as well as security risk scores.
- Score: 9.428116807615407
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Considering the ever-evolving threat landscape and rapid changes in software development, we propose a risk assessment framework called SAFER (Software Analysis Framework for Evaluating Risk). This framework is based on the necessity of a dynamic, data-driven, and adaptable process to quantify security risk in the software supply chain. Usually, when such frameworks are formulated, static pre-defined weights are assigned to reflect the impact of each contributing parameter while aggregating these individual parameters into a resulting security risk score. This leads to inflexibility, a lack of adaptability, and reduced accuracy, making such frameworks unsuitable for the changing nature of the digital world. We adopt a novel perspective by examining security risk through the lens of trust and incorporating the human aspect. Moreover, we quantify the security risk associated with individual software by formulating risk elements quantitatively and exploring dynamic, data-driven weight assignment. This enhances the sensitivity of the framework to the evolving security risk factors associated with software development and the different actors involved in the entire process. The devised framework is tested on a dataset of 9,000 samples, together with comprehensive scenarios, assessments, and expert opinions. Furthermore, a comparison of the scores computed by the OpenSSF Scorecard, the OWASP risk calculator, and the proposed SAFER framework is also presented. The results suggest that SAFER mitigates subjectivity and yields dynamic, data-driven weights as well as security risk scores.
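The abstract does not disclose how SAFER derives its dynamic, data-driven weights, but the general idea of replacing static weights with weights learned from observed data can be illustrated with the entropy weight method: factors that vary more across software samples carry more discriminating information and receive larger weights. The factor names and shapes below are hypothetical, not SAFER's actual inputs.

```python
import numpy as np

def entropy_weights(samples: np.ndarray) -> np.ndarray:
    """Derive data-driven weights via the entropy weight method.

    samples: (n_samples, n_factors) matrix of risk factor values in [0, 1].
    Factors whose values vary more across samples receive larger weights.
    """
    eps = 1e-12
    # Normalise each factor column into a probability distribution.
    p = samples / (samples.sum(axis=0, keepdims=True) + eps)
    # Shannon entropy per factor, scaled to [0, 1].
    entropy = -(p * np.log(p + eps)).sum(axis=0) / np.log(len(samples))
    divergence = 1.0 - entropy            # uniform factor -> near-zero weight
    return divergence / divergence.sum()

def risk_score(factors: np.ndarray, weights: np.ndarray) -> float:
    """Aggregate one software product's factor values into a risk score."""
    return float(factors @ weights)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical factors, e.g. unresolved CVEs, maintainer turnover,
    # dependency churn, all pre-scaled to [0, 1]. Not SAFER's real inputs.
    history = rng.random((9000, 3))
    w = entropy_weights(history)
    print("weights:", w)
    print("score:", risk_score(history[0], w))
```

Recomputing the weights as new samples arrive is what lets such an aggregation adapt to a shifting threat landscape instead of relying on fixed expert-assigned weights.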
Related papers
- Adapting Probabilistic Risk Assessment for AI [0.0]
General-purpose artificial intelligence (AI) systems present an urgent risk management challenge.
Current methods often rely on selective testing and undocumented assumptions about risk priorities.
This paper introduces the probabilistic risk assessment (PRA) for AI framework.
arXiv Detail & Related papers (2025-04-25T17:59:14Z) - AILuminate: Introducing v1.0 of the AI Risk and Reliability Benchmark from MLCommons [62.374792825813394]
This paper introduces AILuminate v1.0, the first comprehensive industry-standard benchmark for assessing AI-product risk and reliability.
The benchmark evaluates an AI system's resistance to prompts designed to elicit dangerous, illegal, or undesirable behavior in 12 hazard categories.
arXiv Detail & Related papers (2025-02-19T05:58:52Z) - Resilient Cloud cluster with DevSecOps security model, automates a data analysis, vulnerability search and risk calculation [0.0]
The article presents the main methods of deploying web applications and ways to increase the level of information security at all stages of product development.
The cloud cluster was deployed using Terraform and the Jenkins pipeline, which checks program code for vulnerabilities.
The algorithm for calculating risk and losses is based on statistical data and the concept of the FAIR information risk assessment methodology.
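FAIR factors risk into loss event frequency and loss magnitude; a minimal Monte Carlo sketch of that decomposition, with placeholder distributions rather than the article's statistics, could look like this:

```python
import random

def fair_annual_loss(n_trials: int = 100_000) -> float:
    """Monte Carlo estimate of annualised loss, FAIR-style:
    risk = loss event frequency x loss magnitude.
    All distribution parameters below are illustrative placeholders.
    """
    total = 0.0
    for _ in range(n_trials):
        # Loss event frequency: threat events per year that become losses.
        threat_events = random.gauss(mu=12, sigma=3)        # events/year
        vulnerability = random.betavariate(2, 5)            # P(event -> loss)
        loss_events = max(threat_events, 0) * vulnerability
        # Loss magnitude per event, lognormal (heavy right tail).
        magnitude = random.lognormvariate(mu=9, sigma=1)    # dollars/event
        total += loss_events * magnitude
    return total / n_trials

print(f"expected annual loss: ${fair_annual_loss():,.0f}")
```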
arXiv Detail & Related papers (2024-12-15T13:11:48Z) - A Human-Centered Risk Evaluation of Biometric Systems Using Conjoint Analysis [0.6199770411242359]
This paper presents a novel human-centered risk evaluation framework using conjoint analysis to quantify the impact of risk factors, such as surveillance cameras, on an attacker's motivation.
Our framework calculates risk values incorporating the False Acceptance Rate (FAR) and attack probability, allowing comprehensive comparisons across use cases.
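The summary names the ingredients but not the formula; one plausible reading, with purely illustrative numbers, multiplies the attack probability (which factors such as surveillance cameras reduce) by the FAR and an impact term. This is a guess at the paper's structure, not its actual formula:

```python
def biometric_risk(far: float, attack_prob: float, impact: float = 1.0) -> float:
    """Toy risk value: probability an attack occurs and is falsely
    accepted by the matcher, scaled by impact."""
    return attack_prob * far * impact

# Surveillance may lower an attacker's motivation (attack_prob).
print(biometric_risk(far=0.001, attack_prob=0.10))   # monitored entrance
print(biometric_risk(far=0.001, attack_prob=0.40))   # unmonitored entrance
```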
arXiv Detail & Related papers (2024-09-17T14:18:21Z) - Robust Reinforcement Learning with Dynamic Distortion Risk Measures [0.0]
We devise a framework to solve robust risk-aware reinforcement learning problems.
We simultaneously account for environmental uncertainty and risk with a class of dynamic robust distortion risk measures.
We construct an actor-critic algorithm to solve this class of robust risk-aware RL problems.
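A distortion risk measure reweights the quantiles of the loss distribution through a distortion function g; CVaR is the special case g(u) = max((u - (1 - alpha)) / alpha, 0). A generic sample-based sketch (static, not the paper's dynamic robust variant):

```python
import numpy as np

def distortion_risk(losses: np.ndarray, g) -> float:
    """Empirical distortion risk measure:
    rho(X) = sum_i [g(i/n) - g((i-1)/n)] * x_(i) over sorted losses,
    i.e. a reweighted average of quantiles through distortion g.
    """
    x = np.sort(losses)
    n = len(x)
    u = np.arange(n + 1) / n
    w = g(u[1:]) - g(u[:-1])     # mass g assigns to each quantile slice
    return float(w @ x)

def cvar_distortion(alpha: float):
    """Distortion whose risk measure is CVaR at level alpha
    (mean of the worst alpha fraction of losses)."""
    return lambda u: np.maximum((u - (1 - alpha)) / alpha, 0.0)

rng = np.random.default_rng(1)
losses = rng.lognormal(0.0, 1.0, size=100_000)
print("mean:", losses.mean())
print("CVaR 10%:", distortion_risk(losses, cvar_distortion(0.10)))
```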
arXiv Detail & Related papers (2024-09-16T08:54:59Z) - EAIRiskBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [47.69642609574771]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models, serving as the "brain" of EAI agents for high-level task planning, have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EAIRiskBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z) - Data-Adaptive Tradeoffs among Multiple Risks in Distribution-Free Prediction [55.77015419028725]
We develop methods that permit valid control of risk when threshold and tradeoff parameters are chosen adaptively.
Our methodology supports monotone and nearly-monotone risks, but otherwise makes no distributional assumptions.
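For a risk that is monotone in the selection threshold, the usual recipe is to accept the largest set of predictions whose empirical risk on calibration data stays at or below the target level. The sketch below shows that recipe in its simplest form, omitting the finite-sample corrections a formally valid procedure would add:

```python
import numpy as np

def calibrate_threshold(scores, errors, alpha: float) -> float:
    """Choose a score threshold t so that the empirical risk of predictions
    with score >= t is at most alpha. Assumes risk shrinks (roughly
    monotonically) as t grows: more selective -> fewer errors.
    scores: confidence per calibration example; errors: 0/1 loss.
    """
    order = np.argsort(scores)[::-1]               # most confident first
    sorted_scores = np.asarray(scores, float)[order]
    err = np.asarray(errors, float)[order]
    # Empirical risk if we accept the k most confident examples.
    risk_at_k = np.cumsum(err) / np.arange(1, len(err) + 1)
    ok = np.where(risk_at_k <= alpha)[0]
    if len(ok) == 0:
        return np.inf                              # no threshold is safe
    return float(sorted_scores[ok.max()])          # largest acceptance set

rng = np.random.default_rng(2)
scores = rng.random(1000)
errors = rng.random(1000) > scores                 # confident -> fewer errors
print("accept score >=", calibrate_threshold(scores, errors, alpha=0.2))
```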
arXiv Detail & Related papers (2024-03-28T17:28:06Z) - Mapping LLM Security Landscapes: A Comprehensive Stakeholder Risk Assessment Proposal [0.0]
We propose a risk assessment process that uses tools such as the risk rating methodology already used for traditional systems.
We conduct scenario analysis to identify potential threat agents and map the dependent system components against vulnerability factors.
We also map threats against three key stakeholder groups.
arXiv Detail & Related papers (2024-03-20T05:17:22Z) - RiskQ: Risk-sensitive Multi-Agent Reinforcement Learning Value Factorization [49.26510528455664]
We introduce the Risk-sensitive Individual-Global-Max (RIGM) principle as a generalization of the Individual-Global-Max (IGM) and Distributional IGM (DIGM) principles.
Through extensive experiments, we show that RiskQ can obtain promising performance.
arXiv Detail & Related papers (2023-11-03T07:18:36Z) - A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
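Reading the counterfactual safety margin as the smallest deviation from the nominal trajectory that produces a collision, a brute-force search over lateral shifts illustrates the idea on toy straight-line dynamics (not the paper's actual models):

```python
import numpy as np

def safety_margin(nominal, obstacle, radius, deltas=np.linspace(0, 5, 501)):
    """Smallest lateral deviation from the nominal trajectory that causes
    a collision with a disc obstacle; inf means no tested deviation collides.
    nominal: (T, 2) waypoints; obstacle: (2,) centre; radius: collision radius.
    """
    # Unit normal to the (assumed straight) nominal direction of travel.
    heading = nominal[-1] - nominal[0]
    normal = np.array([-heading[1], heading[0]]) / np.linalg.norm(heading)
    for d in deltas:                     # grow the deviation until collision
        for sign in (1.0, -1.0):
            shifted = nominal + sign * d * normal
            if np.min(np.linalg.norm(shifted - obstacle, axis=1)) <= radius:
                return sign * d
    return np.inf

nominal = np.stack([np.linspace(0, 10, 100), np.zeros(100)], axis=1)
print("margin:", safety_margin(nominal, obstacle=np.array([5.0, 2.0]), radius=0.5))
```

A small margin flags behavior that is only barely safe, even if no collision actually occurred.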
arXiv Detail & Related papers (2023-08-02T09:48:08Z) - Capsa: A Unified Framework for Quantifying Risk in Deep Neural Networks [142.67349734180445]
Existing algorithms that provide risk-awareness to deep neural networks are complex and ad-hoc.
Here we present capsa, a framework for extending models with risk-awareness.
arXiv Detail & Related papers (2023-08-01T02:07:47Z) - Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z) - A robust statistical framework for cyber-vulnerability prioritisation under partial information in threat intelligence [0.0]
This work introduces a robust statistical framework for quantitative and qualitative reasoning under uncertainty about cyber-vulnerabilities.
We identify a novel accuracy measure suited for rank invariance under partial knowledge of the whole set of existing vulnerabilities.
We discuss the implications of partial knowledge about cyber-vulnerabilities on threat intelligence and decision-making in operational scenarios.
arXiv Detail & Related papers (2023-02-16T15:05:43Z) - Foveate, Attribute, and Rationalize: Towards Physically Safe and Trustworthy AI [76.28956947107372]
Covertly unsafe text is an area of particular interest, as such text may arise from everyday scenarios and is challenging to detect as harmful.
We propose FARM, a novel framework leveraging external knowledge for trustworthy rationale generation in the context of safety.
Our experiments show that FARM obtains state-of-the-art results on the SafeText dataset, showing absolute improvement in safety classification accuracy by 5.9%.
arXiv Detail & Related papers (2022-12-19T17:51:47Z) - RASR: Risk-Averse Soft-Robust MDPs with EVaR and Entropic Risk [28.811725782388688]
We propose and analyze a new framework to jointly model the risk associated with uncertainties in finite-horizon and discounted infinite-horizon MDPs.
We show that when the risk-aversion is defined using either EVaR or the entropic risk, the optimal policy in RASR can be computed efficiently using a new dynamic program formulation with a time-dependent risk level.
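Both measures have direct sample-based forms: the entropic risk is (1/theta) log E[exp(theta X)], and EVaR_alpha(X) = inf_{z>0} (1/z) log(E[exp(z X)] / alpha). A numerical sketch of the two definitions (using scipy for the one-dimensional minimisation):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def entropic_risk(samples, theta: float) -> float:
    """(1/theta) * log E[exp(theta * X)], estimated from samples.
    Larger theta -> more risk-averse (X interpreted as a loss)."""
    m = theta * samples
    # log-sum-exp for numerical stability
    return (np.max(m) + np.log(np.mean(np.exp(m - np.max(m))))) / theta

def evar(samples, alpha: float) -> float:
    """EVaR_alpha(X) = inf_{z>0} (1/z) * log(E[exp(z X)] / alpha)."""
    def objective(log_z):
        z = np.exp(log_z)                # optimise over z > 0 via log z
        return entropic_risk(samples, z) + np.log(1.0 / alpha) / z
    res = minimize_scalar(objective, bounds=(-5, 5), method="bounded")
    return float(res.fun)

rng = np.random.default_rng(3)
losses = rng.normal(0.0, 1.0, 200_000)
print("entropic risk (theta=1):", entropic_risk(losses, 1.0))  # ~0.5 for N(0,1)
print("EVaR 5%:", evar(losses, 0.05))                          # ~2.45 for N(0,1)
```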
arXiv Detail & Related papers (2022-09-09T00:34:58Z) - Risk-Driven Design of Perception Systems [47.787943101699966]
It is important that we design perception systems to minimize errors that reduce the overall safety of the system.
We develop a risk-driven approach to designing perception systems that accounts for the effect of perceptual errors on the performance of the fully-integrated, closed-loop system.
We evaluate our techniques on a realistic vision-based aircraft detect and avoid application and show that risk-driven design reduces collision risk by 37% over a baseline system.
arXiv Detail & Related papers (2022-05-21T21:14:56Z) - Efficient Risk-Averse Reinforcement Learning [79.61412643761034]
In risk-averse reinforcement learning (RL), the goal is to optimize some risk measure of the returns.
We prove that under certain conditions this inevitably leads to a local-optimum barrier, and propose a soft risk mechanism to bypass it.
We demonstrate improved risk aversion in maze navigation, autonomous driving, and resource allocation benchmarks.
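One way to picture a soft risk mechanism is to anneal the CVaR level from 1 (the plain mean, so every episode contributes gradient signal early on) toward the target level over training. The schedule below is schematic and the helper names are hypothetical, not the authors' exact algorithm:

```python
import numpy as np

def cvar(returns: np.ndarray, alpha: float) -> float:
    """Mean of the worst alpha-fraction of episode returns."""
    k = max(1, int(np.ceil(alpha * len(returns))))
    return float(np.sort(returns)[:k].mean())

def soft_risk_level(step: int, total: int, alpha_target: float) -> float:
    """Anneal the CVaR level from 1.0 (plain mean) down to alpha_target,
    so early updates are not starved of gradient signal."""
    frac = min(step / max(total, 1), 1.0)
    return 1.0 + frac * (alpha_target - 1.0)

rng = np.random.default_rng(4)
returns = rng.normal(1.0, 2.0, size=256)       # a batch of episode returns
for step in (0, 5000, 10_000):
    a = soft_risk_level(step, total=10_000, alpha_target=0.1)
    print(f"step {step}: alpha={a:.2f}, objective={cvar(returns, a):.3f}")
```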
arXiv Detail & Related papers (2022-05-10T19:40:52Z) - Modeling and mitigation of occupational safety risks in dynamic industrial environments [0.0]
This article proposes a method to enable continuous and quantitative assessment of safety risks in a data-driven manner.
A fully Bayesian approach is developed to calibrate this model from safety data in an online fashion.
The proposed model can be leveraged for automated decision making.
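A conjugate Gamma-Poisson model is the textbook way to calibrate an incident rate online: each observation period updates the posterior in closed form. The prior below is illustrative, not taken from the article:

```python
class GammaPoissonRate:
    """Conjugate online estimate of an incident rate (incidents/period).
    Posterior after data: Gamma(alpha + total_incidents, beta + n_periods).
    Prior hyperparameters here are illustrative, not from the article.
    """
    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        self.alpha, self.beta = alpha, beta

    def update(self, incidents: int) -> None:
        """Fold in one observation period with `incidents` events."""
        self.alpha += incidents
        self.beta += 1.0

    @property
    def mean_rate(self) -> float:
        return self.alpha / self.beta

model = GammaPoissonRate()
for observed in [0, 2, 1, 0, 3]:      # incidents per shift, streamed in
    model.update(observed)
    print(f"posterior mean rate: {model.mean_rate:.2f}")
```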
arXiv Detail & Related papers (2022-05-02T13:04:25Z) - Sample-Based Bounds for Coherent Risk Measures: Applications to Policy Synthesis and Verification [32.9142708692264]
This paper aims to address a few problems regarding risk-aware verification and policy synthesis.
First, we develop a sample-based method to evaluate a subset of a random variable distribution.
Second, we develop a robotics-oriented method to determine solutions to such problems that outperform a large fraction of the decision space.
arXiv Detail & Related papers (2022-04-21T01:06:10Z) - SAMBA: Safe Model-Based & Active Reinforcement Learning [59.01424351231993]
SAMBA is a framework for safe reinforcement learning that combines aspects from probabilistic modelling, information theory, and statistics.
We evaluate our algorithm on a variety of safe dynamical system benchmarks involving both low and high-dimensional state representations.
We provide intuition as to the effectiveness of the framework by a detailed analysis of our active metrics and safety constraints.
arXiv Detail & Related papers (2020-06-12T10:40:46Z) - Dirichlet uncertainty wrappers for actionable algorithm accuracy accountability and auditability [0.5156484100374058]
We propose a wrapper that enriches a model's output prediction with a measure of its uncertainty.
Based on the resulting uncertainty measure, we advocate for a rejection system that selects the more confident predictions.
Results demonstrate the effectiveness of the uncertainty computed by the wrapper.
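A rejection system of this kind keeps only the predictions whose uncertainty falls below a threshold and abstains on the rest; a minimal sketch using normalised entropy as the uncertainty measure (generic, not tied to the Dirichlet construction):

```python
import numpy as np

def predict_or_reject(probs: np.ndarray, max_uncertainty: float):
    """probs: (n, k) predicted class probabilities per input.
    Uses normalised entropy as the uncertainty measure; returns the
    predicted class, or -1 (abstain) when uncertainty is too high.
    """
    eps = 1e-12
    entropy = -(probs * np.log(probs + eps)).sum(axis=1)
    uncertainty = entropy / np.log(probs.shape[1])     # scale to [0, 1]
    preds = probs.argmax(axis=1)
    return np.where(uncertainty <= max_uncertainty, preds, -1)

probs = np.array([[0.97, 0.02, 0.01],    # confident -> keep
                  [0.40, 0.35, 0.25]])   # ambiguous -> reject
print(predict_or_reject(probs, max_uncertainty=0.5))   # -> [ 0 -1]
```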
arXiv Detail & Related papers (2019-12-29T11:05:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.