Sustainable Adaptive Security
- URL: http://arxiv.org/abs/2306.04481v1
- Date: Mon, 5 Jun 2023 08:48:36 GMT
- Title: Sustainable Adaptive Security
- Authors: Liliana Pasquale, Kushal Ramkumar, Wanling Cai, John McCarthy, Gavin
Doherty, and Bashar Nuseibeh
- Abstract summary: We propose the notion of Sustainable Adaptive Security (SAS) which reflects enduring protection by augmenting adaptive security systems with the capability of mitigating newly discovered threats.
We use a smart home example to showcase how we can engineer the activities of the MAPE (Monitor, Analysis, Planning, and Execution) loop of systems satisfying sustainable adaptive security.
- Score: 11.574868434725117
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With software systems permeating our lives, we are entitled to expect that
such systems are secure by design, and that such security endures throughout
the use of these systems and their subsequent evolution. Although adaptive
security systems have been proposed to continuously protect assets from harm,
they can only mitigate threats arising from changes foreseen at design time. In
this paper, we propose the notion of Sustainable Adaptive Security (SAS) which
reflects such enduring protection by augmenting adaptive security systems with
the capability of mitigating newly discovered threats. To achieve this
objective, a SAS system should be designed by combining automation (e.g., to
discover and mitigate security threats) and human intervention (e.g., to
resolve uncertainties during threat discovery and mitigation). In this paper,
we use a smart home example to showcase how we can engineer the activities of
the MAPE (Monitor, Analysis, Planning, and Execution) loop of systems
satisfying sustainable adaptive security. We suggest that using anomaly
detection together with abductive reasoning can help discover new threats and
guide the evolution of security requirements and controls. We also exemplify
situations when humans can be involved in the execution of the activities of
the MAPE loop and discuss the requirements to engineer human interventions.
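The MAPE activities described above can be sketched as a minimal loop. This is an illustrative Python skeleton, not the authors' implementation: the sensor event schema, the `KNOWN_THREATS` mapping, and the z-score detector are assumptions standing in for the paper's anomaly detection, and the `for_review` queue stands in for the human intervention and abductive threat-explanation step.

```python
from statistics import mean, stdev

# Assumed mapping from known threat types to security controls (illustrative).
KNOWN_THREATS = {"door_forced": "lock_down", "camera_offline": "notify_owner"}

def monitor(sensor_log):
    """Monitor: collect a sliding window of recent smart-home sensor events."""
    return sensor_log[-20:]

def analyze(window, threshold=3.0):
    """Analyze: flag events whose value's z-score exceeds the threshold."""
    values = [e["value"] for e in window]
    mu, sigma = mean(values), stdev(values)
    return [e for e in window if sigma and abs(e["value"] - mu) / sigma > threshold]

def plan(anomalies):
    """Plan: map known anomaly types to controls; defer unknown ones to a human,
    who would abductively explain them and evolve the threat model."""
    actions, for_review = [], []
    for a in anomalies:
        if a["type"] in KNOWN_THREATS:
            actions.append(KNOWN_THREATS[a["type"]])
        else:
            for_review.append(a)  # candidate newly discovered threat
    return actions, for_review

def execute(actions):
    """Execute: apply the selected security controls."""
    return [f"applied:{a}" for a in actions]
```

In this sketch, anomalies that match no known threat are not silently dropped: they are queued for human review, which is where the paper's combination of automation and human intervention would operate.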
Related papers
- Cross-Modality Safety Alignment
We introduce a novel safety alignment challenge called Safe Inputs but Unsafe Output (SIUO) to evaluate cross-modality safety alignment.
To investigate this problem empirically, we developed SIUO, a cross-modality benchmark encompassing 9 critical safety domains, such as self-harm, illegal activities, and privacy violations.
Our findings reveal substantial safety vulnerabilities in both closed- and open-source LVLMs, underscoring the inadequacy of current models to reliably interpret and respond to complex, real-world scenarios.
arXiv Detail & Related papers (2024-06-21T16:14:15Z)
- The MESA Security Model 2.0: A Dynamic Framework for Mitigating Stealth Data Exfiltration
Stealth Data Exfiltration is a significant cyber threat characterized by covert infiltration, extended undetectability, and unauthorized dissemination of confidential data.
Our findings reveal that conventional defense-in-depth strategies often fall short in combating these sophisticated threats.
As we navigate this complex landscape, it is crucial to anticipate potential threats and continually update our defenses.
arXiv Detail & Related papers (2024-05-17T16:14:45Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Redefining Safety for Autonomous Vehicles
Existing definitions and associated conceptual frameworks for computer-based system safety should be revisited.
Operation without a human driver dramatically increases the scope of safety concerns.
We propose updated definitions for core system safety concepts.
arXiv Detail & Related papers (2024-04-25T17:22:43Z)
- Towards Model Co-evolution Across Self-Adaptation Steps for Combined Safety and Security Analysis
We present several models that describe different aspects of a self-adaptive system.
We outline our idea of how these models can then be combined into an Attack-Fault Tree.
arXiv Detail & Related papers (2023-09-18T10:35:40Z)
- Safety Margins for Reinforcement Learning
We show how to leverage proxy criticality metrics to generate safety margins.
We evaluate our approach on learned policies from APE-X and A3C within an Atari environment.
arXiv Detail & Related papers (2023-07-25T16:49:54Z)
- Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z)
- Towards Safer Generative Language Models: A Survey on Safety Risks, Evaluations, and Improvements
This survey presents a framework for safety research pertaining to large models.
We begin by introducing safety issues of wide concern, then delve into safety evaluation methods for large models.
We explore the strategies for enhancing large model safety from training to deployment.
arXiv Detail & Related papers (2023-02-18T09:32:55Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions
This paper introduces a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We study the feasibility of the resulting robust safety-critical controller.
We then use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Constraints Satisfiability Driven Reinforcement Learning for Autonomous Cyber Defense
We present a new hybrid autonomous agent architecture that aims to optimize and verify defense policies learned through reinforcement learning (RL).
We use constraints verification (using satisfiability modulo theory (SMT)) to steer the RL decision-making toward safe and effective actions.
Our evaluation of the presented approach in a simulated CPS environment shows that the agent learns the optimal policy quickly and defeats diversified attack strategies in 99% of cases.
arXiv Detail & Related papers (2021-04-19T01:08:30Z)
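The constraint-steered decision making described in the last entry above can be sketched as follows. This is an illustrative Python stand-in, not the authors' implementation: a real system would discharge the constraints with an SMT solver (e.g., via Z3), whereas here the constraints, the action schema, and the `steer` function are simple assumed predicates over a ranked list of RL actions.

```python
# Safety constraints over a candidate defense action's parameters.
# An SMT solver would verify these formulas in the original setting;
# here they are plain Python predicates (illustrative assumptions).
CONSTRAINTS = [
    lambda a: a["isolation_level"] <= 2,  # never fully isolate critical CPS nodes
    lambda a: a["downtime_s"] <= 30,      # keep service interruption bounded
]

def satisfies_constraints(action):
    """Return True iff every safety constraint holds for the action."""
    return all(c(action) for c in CONSTRAINTS)

def steer(policy_ranking):
    """Pick the highest-ranked RL action that passes constraint verification.

    policy_ranking is assumed to be ordered best-first by the RL policy.
    """
    for action in policy_ranking:
        if satisfies_constraints(action):
            return action
    return None  # no safe action found; a real agent would fall back or escalate
```

The design point this illustrates is that verification filters, rather than replaces, the learned policy: the RL agent still proposes actions, and the constraint check only vetoes unsafe ones.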
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.