Dynamic Risk Management in Cyber Physical Systems
- URL: http://arxiv.org/abs/2401.13539v2
- Date: Mon, 6 May 2024 14:53:47 GMT
- Title: Dynamic Risk Management in Cyber Physical Systems
- Authors: Daniel Schneider, Jan Reich, Rasmus Adler, Peter Liggesmeyer
- Abstract summary: This paper structures the safety assurance challenges of cooperative automated CPS.
It provides an overview of our vision of dynamic risk management and describes already existing building blocks.
- Score: 1.7687476868741456
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cyber Physical Systems (CPS) enable new kinds of applications as well as significant improvements of existing ones across numerous application domains. A major trait of upcoming CPS is an increasing degree of automation, up to the point of autonomy, as there is huge potential for economic success as well as for ecological and societal improvements. However, to unlock the full potential of such (cooperative and automated) CPS, we first need to overcome several significant engineering challenges, of which safety assurance is a particularly important one. Unfortunately, established safety assurance methods and standards are not up to this task, as they were designed with closed and less complex systems in mind. This paper structures the safety assurance challenges of cooperative automated CPS, provides an overview of our vision of dynamic risk management, and describes already existing building blocks.
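To make the vision concrete: dynamic risk management shifts parts of the risk assessment from design time to runtime, so the system can adapt its behavior to the currently assessed risk. The sketch below is a minimal illustration of that loop; the situation model, the risk metric, and the mode thresholds are all invented for the example, not taken from the paper.

```python
from dataclasses import dataclass
from enum import Enum


class OperatingMode(Enum):
    NOMINAL = "nominal"        # full automation permitted
    DEGRADED = "degraded"      # reduced speed / functionality
    MINIMAL_RISK = "minimal"   # safe stop or handover


@dataclass
class Situation:
    """Hypothetical runtime snapshot of the CPS and its environment."""
    ego_speed_mps: float
    obstacle_distance_m: float
    sensor_confidence: float   # 0.0 (blind) .. 1.0 (fully reliable)


def runtime_risk(s: Situation) -> float:
    """Toy risk metric: grows with speed, shrinks with distance and confidence."""
    exposure = s.ego_speed_mps / max(s.obstacle_distance_m, 0.1)
    return exposure / max(s.sensor_confidence, 0.05)


def select_mode(risk: float, low: float = 1.0, high: float = 5.0) -> OperatingMode:
    """Map the current risk estimate onto the permitted operating mode."""
    if risk < low:
        return OperatingMode.NOMINAL
    if risk < high:
        return OperatingMode.DEGRADED
    return OperatingMode.MINIMAL_RISK


snapshot = Situation(ego_speed_mps=12.0, obstacle_distance_m=8.0, sensor_confidence=0.6)
print(select_mode(runtime_risk(snapshot)))  # OperatingMode.DEGRADED
```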
Related papers
- EAIRiskBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [47.69642609574771]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models serving as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EAIRiskBench, a novel framework for automated physical risk assessment in EAI scenarios.
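The kind of check such a framework automates can be pictured as screening a task plan against hazard rules before execution; the rules, plan format, and names below are invented for illustration and are not EAIRiskBench's actual interface.

```python
# Hypothetical hazard rules: action -> objects/contexts that make it unsafe.
HAZARD_RULES = {
    "heat": {"aerosol_can", "sealed_container"},
    "pour": {"power_outlet", "laptop"},
}


def screen_plan(steps: list[tuple[str, str, str]]) -> list[str]:
    """Each step is (action, object, context); return human-readable risk flags."""
    flags = []
    for action, obj, context in steps:
        risky = HAZARD_RULES.get(action, set())
        if obj in risky or context in risky:
            flags.append(f"risky step: {action} {obj} near {context}")
    return flags


plan = [("pick", "cup", "table"), ("pour", "water", "power_outlet")]
print(screen_plan(plan))  # ['risky step: pour water near power_outlet']
```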
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Cross-Modality Safety Alignment [73.8765529028288]
We introduce a novel safety alignment challenge called Safe Inputs but Unsafe Output (SIUO) to evaluate cross-modality safety alignment.
To empirically investigate this problem, we developed SIUO, a cross-modality benchmark encompassing 9 critical safety domains, such as self-harm, illegal activities, and privacy violations.
Our findings reveal substantial safety vulnerabilities in both closed- and open-source LVLMs, underscoring the inadequacy of current models to reliably interpret and respond to complex, real-world scenarios.
arXiv Detail & Related papers (2024-06-21T16:14:15Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We introduce and define a family of approaches to AI safety, which we refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of the framework's three core components (a world model, a safety specification, and a verifier), describe the main technical challenges, and suggest a number of potential solutions to them.
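The three components can be pictured as cooperating interfaces; the sketch below is one schematic reading of the GS AI framing, with every class and method name invented for illustration.

```python
from typing import Protocol


class WorldModel(Protocol):
    """Predicts how the environment evolves under a proposed action."""
    def predict(self, state: dict, action: str) -> dict: ...


class SafetySpecification(Protocol):
    """Encodes which states count as acceptable."""
    def is_acceptable(self, state: dict) -> bool: ...


class OneStepVerifier:
    """Stands in for a verifier: certifies an action by checking that the
    predicted next state satisfies the specification (a real verifier would
    produce a much stronger, auditable guarantee)."""

    def certify(self, model: WorldModel, spec: SafetySpecification,
                state: dict, action: str) -> bool:
        return spec.is_acceptable(model.predict(state, action))
```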
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- ACCESS: Assurance Case Centric Engineering of Safety-critical Systems [9.388301205192082]
Assurance cases are used to communicate and assess confidence in critical system properties such as safety and security.
In recent years, model-based system assurance approaches have gained popularity to improve the efficiency and quality of system assurance activities.
We show how model-based system assurance cases can trace to heterogeneous engineering artifacts.
arXiv Detail & Related papers (2024-03-22T14:29:50Z)
- I came, I saw, I certified: some perspectives on the safety assurance of cyber-physical systems [5.9395940943056384]
Execution failure of cyber-physical systems could result in loss of life, severe injuries, large-scale environmental damage, property destruction, and major economic loss.
It is often mandatory to develop compelling assurance cases to justify the safety of such systems and allow regulatory bodies to certify them.
We explore challenges related to such assurance enablers and outline some potential directions that could be explored to tackle them.
arXiv Detail & Related papers (2024-01-30T00:06:16Z)
- Future Vision of Dynamic Certification Schemes for Autonomous Systems [3.151005833357807]
We identify several issues with the current certification strategies that could pose serious safety risks.
We highlight the inadequate reflection of software changes in constantly evolving systems and the lack of support for systems' cooperation.
Other shortcomings include the narrow focus of awarded certifications, which neglects aspects such as the ethical behavior of autonomous software systems.
arXiv Detail & Related papers (2023-08-20T19:06:57Z)
- Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
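The underlying mechanism is a trace graph from system artifacts to safety analysis elements that can be traversed for change impact analysis; the artifact names and link structure below are made up for the example.

```python
from collections import deque

# Hypothetical trace links: artifact -> safety artifacts that depend on it.
TRACE_LINKS = {
    "brake_controller.c": ["FTA:loss_of_braking", "SAC:claim_braking_safe"],
    "FTA:loss_of_braking": ["SAC:claim_braking_safe"],
    "speed_sensor_driver.c": ["FTA:wrong_speed_reported"],
}


def impacted_safety_artifacts(changed: str) -> set[str]:
    """Breadth-first traversal: everything reachable from the changed artifact
    is a candidate for safety re-analysis."""
    seen: set[str] = set()
    queue = deque([changed])
    while queue:
        node = queue.popleft()
        for successor in TRACE_LINKS.get(node, []):
            if successor not in seen:
                seen.add(successor)
                queue.append(successor)
    return seen


print(impacted_safety_artifacts("brake_controller.c"))
# {'FTA:loss_of_braking', 'SAC:claim_braking_safe'}
```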
arXiv Detail & Related papers (2023-07-14T16:03:27Z)
- ThreatKG: An AI-Powered System for Automated Open-Source Cyber Threat Intelligence Gathering and Management [65.0114141380651]
ThreatKG is an automated system for OSCTI gathering and management.
It efficiently collects a large number of OSCTI reports from multiple sources.
It uses specialized AI-based techniques to extract high-quality knowledge about various threat entities.
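Such extracted knowledge is naturally stored as subject-relation-object triples; a toy version (the regexes, relation names, and report text are all invented, and real systems use learned extractors rather than regexes) might look like this:

```python
import re

# Toy triple store for threat intelligence knowledge.
triples: set[tuple[str, str, str]] = set()


def extract_indicators(report_text: str, source: str) -> None:
    """Naive regex extraction of indicators of compromise from a report."""
    for ip in re.findall(r"\b\d{1,3}(?:\.\d{1,3}){3}\b", report_text):
        triples.add((f"report:{source}", "mentions_ip", ip))
    for sha in re.findall(r"\b[a-f0-9]{64}\b", report_text):
        triples.add((f"report:{source}", "mentions_sha256", sha))


extract_indicators("Beacon traffic to 203.0.113.7 was observed.", "blog-42")
print(sorted(triples))
# [('report:blog-42', 'mentions_ip', '203.0.113.7')]
```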
arXiv Detail & Related papers (2022-12-20T16:13:59Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
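State-wise safe RL requires the constraint to hold at every step rather than only in expectation over an episode. The snippet below illustrates just the "safety projection" idea (not USL itself): a proposed action is projected onto the closest action whose one-step cost stays within budget, for an invented linear cost.

```python
import numpy as np


def one_step_cost(state: np.ndarray, action: np.ndarray) -> float:
    """Toy state-wise cost: positive once x-position plus x-move passes 1.0."""
    return float(state[0] + action[0] - 1.0)


def project_to_safe(state: np.ndarray, action: np.ndarray,
                    budget: float = 0.0) -> np.ndarray:
    """Analytic projection for this linear cost: clip the x-component so the
    one-step cost cannot exceed the budget."""
    max_dx = budget + 1.0 - state[0]
    safe = action.copy()
    safe[0] = min(safe[0], max_dx)
    return safe


state = np.array([0.9, 0.0])
proposed = np.array([0.5, 0.2])          # would overshoot the limit at x = 1.0
print(project_to_safe(state, proposed))  # [0.1 0.2]
```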
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Collaborative AI Needs Stronger Assurances Driven by Risks [5.657409854809804]
Collaborative AI systems (CAISs) aim to work together with humans in a shared space to achieve a common goal.
Building such systems with strong assurances of compliance with requirements, domain-specific standards, and regulations is of the utmost importance.
arXiv Detail & Related papers (2021-12-01T15:24:21Z)
- Constraints Satisfiability Driven Reinforcement Learning for Autonomous Cyber Defense [7.321728608775741]
We present a new hybrid autonomous agent architecture that aims to optimize and verify the defense policies of reinforcement learning (RL) agents.
We use constraints verification (using satisfiability modulo theory (SMT)) to steer the RL decision-making toward safe and effective actions.
Our evaluation of the presented approach in a simulated CPS environment shows that the agent learns the optimal policy quickly and defeats diversified attack strategies in 99% of cases.
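The steering mechanism, checking each candidate action against logical constraints before it reaches the environment, can be sketched with the Z3 SMT solver (choosing Z3, and the particular constraints and action encoding, is an assumption for this example).

```python
# pip install z3-solver
from z3 import Solver, Int, BoolVal, sat


def is_permitted(n_blocked_after: int, critical_path_alive: bool) -> bool:
    """SMT check: would the network still satisfy the safety constraints
    after this defense action is applied?"""
    s = Solver()
    blocked = Int("blocked")
    s.add(blocked == n_blocked_after)
    s.add(blocked <= 3)                  # at most 3 services isolated at once
    s.add(BoolVal(critical_path_alive))  # the critical control path must survive
    return s.check() == sat


# Only SMT-approved actions are passed on to the environment step.
candidates = [("isolate_host_7", 2, True), ("isolate_subnet_b", 5, True)]
print([name for name, n, alive in candidates if is_permitted(n, alive)])
# ['isolate_host_7']
```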
arXiv Detail & Related papers (2021-04-19T01:08:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.