Sustainable Adaptive Security
- URL: http://arxiv.org/abs/2306.04481v1
- Date: Mon, 5 Jun 2023 08:48:36 GMT
- Title: Sustainable Adaptive Security
- Authors: Liliana Pasquale, Kushal Ramkumar, Wanling Cai, John McCarthy, Gavin
Doherty, and Bashar Nuseibeh
- Abstract summary: We propose the notion of Sustainable Adaptive Security (SAS) which reflects enduring protection by augmenting adaptive security systems with the capability of mitigating newly discovered threats.
We use a smart home example to showcase how we can engineer the activities of the MAPE (Monitor, Analysis, Planning, and Execution) loop of systems satisfying sustainable adaptive security.
- Score: 11.574868434725117
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With software systems permeating our lives, we are entitled to expect that
such systems are secure by design, and that such security endures throughout
the use of these systems and their subsequent evolution. Although adaptive
security systems have been proposed to continuously protect assets from harm,
they can only mitigate threats arising from changes foreseen at design time. In
this paper, we propose the notion of Sustainable Adaptive Security (SAS) which
reflects such enduring protection by augmenting adaptive security systems with
the capability of mitigating newly discovered threats. To achieve this
objective, a SAS system should be designed by combining automation (e.g., to
discover and mitigate security threats) and human intervention (e.g., to
resolve uncertainties during threat discovery and mitigation). In this paper,
we use a smart home example to showcase how we can engineer the activities of
the MAPE (Monitor, Analysis, Planning, and Execution) loop of systems
satisfying sustainable adaptive security. We suggest that using anomaly
detection together with abductive reasoning can help discover new threats and
guide the evolution of security requirements and controls. We also exemplify
situations when humans can be involved in the execution of the activities of
the MAPE loop and discuss the requirements to engineer human interventions.
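To make the loop concrete, the sketch below shows one plausible way to wire anomaly detection, abductive-style threat hypothesis generation, and human intervention into the MAPE activities. It is a minimal illustration under our own assumptions, not the paper's implementation: all names (Anomaly, ThreatHypothesis, ask_human) and the confidence threshold are hypothetical.

```python
# Minimal sketch of a MAPE loop for sustainable adaptive security.
# All names and thresholds here are illustrative, not from the paper.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Anomaly:
    description: str
    evidence: dict


@dataclass
class ThreatHypothesis:
    explanation: str        # abduced explanation of the observed anomaly
    confidence: float       # residual uncertainty after automated analysis
    proposed_control: str   # candidate evolution of the security controls


def monitor(events: List[dict], baseline: Callable[[dict], bool]) -> List[Anomaly]:
    """Monitor: flag events that deviate from the learned baseline (anomaly detection)."""
    return [Anomaly(f"unexpected event: {e}", e) for e in events if not baseline(e)]


def analyse(anomalies: List[Anomaly]) -> List[ThreatHypothesis]:
    """Analysis: abductive step - hypothesise threats that would explain each anomaly."""
    return [
        ThreatHypothesis(
            explanation=f"possible new threat explaining {a.description}",
            confidence=0.6,
            proposed_control="isolate the affected device and require re-authentication",
        )
        for a in anomalies
    ]


def plan(hypotheses: List[ThreatHypothesis],
         ask_human: Callable[[ThreatHypothesis], bool]) -> List[str]:
    """Planning: select controls automatically, deferring to a human under uncertainty."""
    selected = []
    for h in hypotheses:
        if h.confidence >= 0.8 or ask_human(h):  # human resolves residual uncertainty
            selected.append(h.proposed_control)
    return selected


def execute(controls: List[str]) -> None:
    """Execution: apply the selected security controls (stubbed as logging here)."""
    for control in controls:
        print(f"applying control: {control}")


def mape_iteration(events, baseline, ask_human) -> None:
    execute(plan(analyse(monitor(events, baseline)), ask_human))
```

In the smart home setting, baseline could be a one-class model trained on normal device-event logs, and ask_human a prompt to the resident or a security administrator, reflecting the combination of automation and human intervention the paper argues for.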
Related papers
- AGrail: A Lifelong Agent Guardrail with Effective and Adaptive Safety Detection [47.83354878065321]
We propose AGrail, a lifelong guardrail to enhance agent safety.
AGrail features adaptive safety check generation, effective safety check optimization, and tool compatibility and flexibility.
arXiv Detail & Related papers (2025-02-17T05:12:33Z)
- Dynamic safety cases for frontier AI [0.7538606213726908]
This paper proposes a Dynamic Safety Case Management System (DSCMS) to support both the initial creation of a safety case and its systematic, semi-automated revision over time.
We demonstrate this approach on a safety case template for offensive cyber capabilities and suggest ways it can be integrated into governance structures for safety-critical decision-making.
arXiv Detail & Related papers (2024-12-23T14:43:41Z)
- ACTISM: Threat-informed Dynamic Security Modelling for Automotive Systems [7.3347982474177185]
ACTISM (Automotive Consequence-Driven and Threat-Informed Security Modelling) is an integrated security modelling framework.
It enhances the resilience of automotive systems by dynamically updating their cybersecurity posture.
We demonstrate the effectiveness of ACTISM by applying it to a real-world example of the Tesla Electric Vehicle's In-Vehicle Infotainment system.
We report the results of a practitioners' survey on the usefulness of ACTISM and its future directions.
arXiv Detail & Related papers (2024-11-30T09:58:48Z)
- Defining and Evaluating Physical Safety for Large Language Models [62.4971588282174]
Large Language Models (LLMs) are increasingly used to control robotic systems such as drones.
However, their risks of causing physical threats and harm in real-world applications remain unexplored.
We classify the physical safety risks of drones into four categories: (1) human-targeted threats, (2) object-targeted threats, (3) infrastructure attacks, and (4) regulatory violations.
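As a small illustration only, this taxonomy could be encoded as an enumeration; the names and the toy keyword matcher below are our own assumptions, not the paper's benchmark.

```python
# Hypothetical encoding of the four risk categories listed above.
from enum import Enum, auto


class DronePhysicalRisk(Enum):
    HUMAN_TARGETED = auto()          # (1) human-targeted threats
    OBJECT_TARGETED = auto()         # (2) object-targeted threats
    INFRASTRUCTURE_ATTACK = auto()   # (3) infrastructure attacks
    REGULATORY_VIOLATION = auto()    # (4) regulatory violations


def flag_risks(instruction: str) -> set:
    """Toy keyword matcher; a real evaluation would use labelled prompts or an LLM judge."""
    text = instruction.lower()
    risks = set()
    if any(w in text for w in ("person", "crowd", "bystander")):
        risks.add(DronePhysicalRisk.HUMAN_TARGETED)
    if any(w in text for w in ("vehicle", "window", "property")):
        risks.add(DronePhysicalRisk.OBJECT_TARGETED)
    if any(w in text for w in ("power line", "airport", "bridge")):
        risks.add(DronePhysicalRisk.INFRASTRUCTURE_ATTACK)
    if any(w in text for w in ("no-fly", "restricted airspace", "altitude limit")):
        risks.add(DronePhysicalRisk.REGULATORY_VIOLATION)
    return risks
```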
arXiv Detail & Related papers (2024-11-04T17:41:25Z)
- Automated Cybersecurity Compliance and Threat Response Using AI, Blockchain & Smart Contracts [0.36832029288386137]
We present a novel framework that integrates artificial intelligence (AI), blockchain, and smart contracts.
We propose a system that automates the enforcement of security policies, reducing manual effort and potential human error.
arXiv Detail & Related papers (2024-09-12T20:38:14Z)
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models, serving as the "brain" of EAI agents for high-level task planning, have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- The MESA Security Model 2.0: A Dynamic Framework for Mitigating Stealth Data Exfiltration [0.0]
Stealth Data Exfiltration is a significant cyber threat characterized by covert infiltration, extended undetectability, and unauthorized dissemination of confidential data.
Our findings reveal that conventional defense-in-depth strategies often fall short in combating these sophisticated threats.
As we navigate this complex landscape, it is crucial to anticipate potential threats and continually update our defenses.
arXiv Detail & Related papers (2024-05-17T16:14:45Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We introduce and define a family of approaches to AI safety, which we refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of the framework's three core components (a world model, a safety specification, and a verifier), describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
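For context, the condition below is the standard (nominal-model) control barrier function constraint that such safety-critical controllers enforce; the paper's contribution is a reformulation that additionally accounts for model uncertainty, which this sketch does not capture.

```latex
% Standard CBF condition for a control-affine system \dot{x} = f(x) + g(x)u,
% with safe set {x : h(x) >= 0} and extended class-K function \alpha:
\[
  \sup_{u \in \mathcal{U}} \big[ L_f h(x) + L_g h(x)\, u \big] \;\ge\; -\alpha\big(h(x)\big)
\]
% Enforcing this pointwise in x renders the safe set forward invariant
% under the resulting controller.
```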
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.