Sustainable Adaptive Security
- URL: http://arxiv.org/abs/2306.04481v1
- Date: Mon, 5 Jun 2023 08:48:36 GMT
- Title: Sustainable Adaptive Security
- Authors: Liliana Pasquale, Kushal Ramkumar, Wanling Cai, John McCarthy, Gavin Doherty, and Bashar Nuseibeh
- Abstract summary: We propose the notion of Sustainable Adaptive Security (SAS) which reflects enduring protection by augmenting adaptive security systems with the capability of mitigating newly discovered threats.
We use a smart home example to showcase how we can engineer the activities of the MAPE (Monitoring, Analysis, Planning, and Execution) loop of systems satisfying sustainable adaptive security.
- Score: 11.574868434725117
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With software systems permeating our lives, we are entitled to expect that
such systems are secure by design, and that such security endures throughout
the use of these systems and their subsequent evolution. Although adaptive
security systems have been proposed to continuously protect assets from harm,
they can only mitigate threats arising from changes foreseen at design time. In
this paper, we propose the notion of Sustainable Adaptive Security (SAS) which
reflects such enduring protection by augmenting adaptive security systems with
the capability of mitigating newly discovered threats. To achieve this
objective, a SAS system should be designed by combining automation (e.g., to
discover and mitigate security threats) and human intervention (e.g., to
resolve uncertainties during threat discovery and mitigation). In this paper,
we use a smart home example to showcase how we can engineer the activities of
the MAPE (Monitoring, Analysis, Planning, and Execution) loop of systems
satisfying sustainable adaptive security. We suggest that using anomaly
detection together with abductive reasoning can help discover new threats and
guide the evolution of security requirements and controls. We also exemplify
situations when humans can be involved in the execution of the activities of
the MAPE loop and discuss the requirements to engineer human interventions.
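To make the loop concrete, below is a minimal sketch in Python of one MAPE iteration for the smart home scenario, combining an anomaly detector, an abductive step that proposes threat hypotheses, and a human checkpoint for low-confidence hypotheses. All names, thresholds, and rules are illustrative assumptions for this sketch, not artifacts from the paper.

```python
from dataclasses import dataclass
from typing import Callable, List

# Illustrative sketch only: the event model, thresholds, and function
# names below are invented for this example, not taken from the paper.

@dataclass
class Event:
    source: str     # e.g. "door_lock", "camera"
    score: float    # normalized anomaly score in [0, 1]

@dataclass
class ThreatHypothesis:
    description: str
    confidence: float  # abductive plausibility estimate in [0, 1]

def monitor_and_analyze(events: List[Event], threshold: float = 0.9) -> List[Event]:
    """Monitoring/Analysis: flag events whose anomaly score is unusually
    high (a stand-in for a learned anomaly detector)."""
    return [e for e in events if e.score > threshold]

def abduce_threats(anomalies: List[Event]) -> List[ThreatHypothesis]:
    """Analysis: abductively propose candidate threats that would explain
    the anomalies (a real system would reason over a domain model)."""
    return [ThreatHypothesis(f"tampering with {a.source}", confidence=0.6)
            for a in anomalies]

def plan(threat: ThreatHypothesis) -> str:
    """Planning: select a mitigating security control for the threat."""
    return f"lock down asset involved in '{threat.description}' and notify resident"

def mape_iteration(events: List[Event],
                   ask_human: Callable[[ThreatHypothesis], bool]) -> None:
    """One loop iteration; human intervention resolves uncertainty when a
    hypothesis is not confident enough to act on automatically."""
    for threat in abduce_threats(monitor_and_analyze(events)):
        if threat.confidence < 0.8 and not ask_human(threat):
            continue  # hypothesis rejected by the human
        print("EXECUTE:", plan(threat))

if __name__ == "__main__":
    events = [Event("door_lock", 0.95), Event("camera", 0.20)]
    mape_iteration(events, ask_human=lambda t: True)  # auto-approve for demo
```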
Related papers
- Defining and Evaluating Physical Safety for Large Language Models [62.4971588282174]
Large Language Models (LLMs) are increasingly used to control robotic systems such as drones.
The physical threats and harm they can cause in real-world applications remain unexplored.
We classify the physical safety risks of drones into four categories: (1) human-targeted threats, (2) object-targeted threats, (3) infrastructure attacks, and (4) regulatory violations.
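As a purely illustrative sketch of how that four-way taxonomy could be encoded when labeling LLM-generated drone commands (the names and keyword rules are invented here, not from the paper):

```python
from enum import Enum, auto
from typing import Optional

# Hypothetical encoding of the four physical-safety risk categories.
class DroneRisk(Enum):
    HUMAN_TARGETED = auto()         # (1) threats aimed at people
    OBJECT_TARGETED = auto()        # (2) threats aimed at property
    INFRASTRUCTURE_ATTACK = auto()  # (3) attacks on infrastructure
    REGULATORY_VIOLATION = auto()   # (4) e.g. entering restricted airspace

def classify(command: str) -> Optional[DroneRisk]:
    """Toy keyword labeler; a real evaluation would use a stronger judge."""
    rules = {"crowd": DroneRisk.HUMAN_TARGETED,
             "window": DroneRisk.OBJECT_TARGETED,
             "power line": DroneRisk.INFRASTRUCTURE_ATTACK,
             "no-fly": DroneRisk.REGULATORY_VIOLATION}
    return next((r for k, r in rules.items() if k in command.lower()), None)
```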
arXiv Detail & Related papers (2024-11-04T17:41:25Z)
- Realizable Continuous-Space Shields for Safe Reinforcement Learning [13.728961635717134]
Deep Reinforcement Learning (DRL) remains vulnerable to occasional catastrophic failures without additional safeguards.
One effective solution is to use a shield that validates and adjusts the agent's actions to ensure compliance with a provided set of safety specifications.
We propose the first shielding approach to automatically guarantee the realizability of safety requirements for continuous state and action spaces.
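A rough sketch of the shielding idea (not the paper's algorithm): the shield intercepts the policy's continuous action and projects it onto the safe set before it reaches the environment. Here the safety specification is a simple per-dimension box, an assumption made for brevity:

```python
import numpy as np

class BoxShield:
    """Illustrative shield: corrects actions violating a box-shaped safety
    specification. Realizable shields handle richer, state-dependent sets."""
    def __init__(self, low: np.ndarray, high: np.ndarray):
        self.low, self.high = low, high

    def correct(self, state: np.ndarray, action: np.ndarray) -> np.ndarray:
        # For a box constraint, the Euclidean projection is a simple clip.
        return np.clip(action, self.low, self.high)

shield = BoxShield(low=np.array([-1.0, -1.0]), high=np.array([1.0, 1.0]))
state, proposed = np.zeros(2), np.array([1.7, -0.3])  # raw policy output
print(shield.correct(state, proposed))  # -> [ 1.  -0.3]
```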
arXiv Detail & Related papers (2024-10-02T21:08:11Z)
- Automated Cybersecurity Compliance and Threat Response Using AI, Blockchain & Smart Contracts [0.36832029288386137]
We present a novel framework that integrates artificial intelligence (AI), blockchain, and smart contracts.
We propose a system that automates the enforcement of security policies, reducing manual effort and potential human error.
arXiv Detail & Related papers (2024-09-12T20:38:14Z)
- SafeEmbodAI: a Safety Framework for Mobile Robots in Embodied AI Systems [5.055705635181593]
Embodied AI systems, including AI-powered robots that autonomously interact with the physical world, stand to advance significantly.
Improper safety management can lead to failures in complex environments and make the system vulnerable to malicious command injections.
We propose SafeEmbodAI, a safety framework for integrating mobile robots into embodied AI systems.
arXiv Detail & Related papers (2024-09-03T05:56:50Z)
- Cross-Modality Safety Alignment [73.8765529028288]
We introduce a novel safety alignment challenge called Safe Inputs but Unsafe Output (SIUO) to evaluate cross-modality safety alignment.
To empirically investigate this problem, we developed SIUO, a cross-modality benchmark encompassing 9 critical safety domains, such as self-harm, illegal activities, and privacy violations.
Our findings reveal substantial safety vulnerabilities in both closed- and open-source LVLMs, underscoring the inadequacy of current models to reliably interpret and respond to complex, real-world scenarios.
arXiv Detail & Related papers (2024-06-21T16:14:15Z)
- The MESA Security Model 2.0: A Dynamic Framework for Mitigating Stealth Data Exfiltration [0.0]
Stealth Data Exfiltration is a significant cyber threat characterized by covert infiltration, extended undetectability, and unauthorized dissemination of confidential data.
Our findings reveal that conventional defense-in-depth strategies often fall short in combating these sophisticated threats.
As we navigate this complex landscape, it is crucial to anticipate potential threats and continually update our defenses.
arXiv Detail & Related papers (2024-05-17T16:14:45Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We introduce and define a family of approaches to AI safety, which we refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems equipped with high-assurance quantitative safety guarantees, built from three core components: a world model, a safety specification, and a verifier.
We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them.
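A schematic of how the three components might fit together, as purely illustrative interfaces (the paper defines the components conceptually, not as code):

```python
from typing import Protocol

class WorldModel(Protocol):
    def risk_of(self, action: str) -> float:
        """Estimated probability that the action causes harm."""
        ...

class SafetySpec(Protocol):
    def max_risk(self) -> float:
        """Quantitative bound the system must satisfy."""
        ...

def verify(model: WorldModel, spec: SafetySpec, action: str) -> bool:
    """Verifier: certify that the world model's risk estimate for the
    action stays within the safety specification's quantitative bound."""
    return model.risk_of(action) <= spec.max_risk()

# Toy instantiation to show the composition end to end.
class TableModel:
    def risk_of(self, action: str) -> float:
        return {"open_door": 0.001}.get(action, 1.0)

class Spec:
    def max_risk(self) -> float:
        return 0.01

print(verify(TableModel(), Spec(), "open_door"))  # -> True
```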
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
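A minimal sketch of the idea, with invented artifact identifiers: trace links map system artifacts to the safety analysis artifacts that depend on them, so a proposed change can be propagated to the safety artifacts it may invalidate:

```python
# Hypothetical trace links: system artifact -> dependent safety artifacts.
trace_links = {
    "src/brake_controller.c": {"FMEA-12", "SAC-goal-3"},
    "design/brake_timing.md": {"SAC-goal-3"},
}

def impacted_safety_artifacts(changed: set) -> set:
    """Safety artifacts potentially invalidated by a system change,
    flagged for stakeholder review along with the design rationale."""
    return set().union(*(trace_links.get(f, set()) for f in changed))

print(impacted_safety_artifacts({"src/brake_controller.c"}))
# -> {'FMEA-12', 'SAC-goal-3'}
```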
arXiv Detail & Related papers (2023-07-14T16:03:27Z)
- Towards Safer Generative Language Models: A Survey on Safety Risks, Evaluations, and Improvements [76.80453043969209]
This survey presents a framework for safety research pertaining to large models.
We begin by introducing safety issues of wide concern, then delve into safety evaluation methods for large models.
We explore the strategies for enhancing large model safety from training to deployment.
arXiv Detail & Related papers (2023-02-18T09:32:55Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
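For context, the standard CBF condition that such controllers enforce is given below; the paper's uncertainty-aware reformulation tightens it, so this is the textbook form, not the paper's exact constraint:

```latex
% Control-affine dynamics \dot{x} = f(x) + g(x)u, with safe set
% C = \{x : h(x) \ge 0\}. h is a control barrier function if, for an
% extended class-\mathcal{K} function \alpha, some admissible input
% keeps the state in C:
\[
  \sup_{u \in U} \left[ L_f h(x) + L_g h(x)\,u \right] \;\ge\; -\alpha\big(h(x)\big),
\]
% where L_f h and L_g h denote Lie derivatives of h along f and g.
% Under model uncertainty, the left-hand side is tightened by a bound
% on the unknown dynamics before checking pointwise feasibility.
```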
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.