A Formal Model of Security Controls' Capabilities and Its Applications to Policy Refinement and Incident Management
- URL: http://arxiv.org/abs/2405.03544v1
- Date: Mon, 6 May 2024 15:06:56 GMT
- Title: A Formal Model of Security Controls' Capabilities and Its Applications to Policy Refinement and Incident Management
- Authors: Cataldo Basile, Gabriele Gatti, Francesco Settanni
- Abstract summary: This paper presents the Security Capability Model (SCM), a formal model that abstracts the features that security controls offer for enforcing security policies.
By validating its effectiveness in real-world scenarios, we show that SCM enables the automation of different and complex security tasks.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Enforcing security requirements in networked information systems relies on security controls to mitigate the risks from increasingly dangerous threats. Configuring security controls is challenging; even nowadays, administrators must perform it without adequate tool support. Hence, this process is plagued by errors that translate to insecure postures, security incidents, and a lack of promptness in answering threats. This paper presents the Security Capability Model (SCM), a formal model that abstracts the features that security controls offer for enforcing security policies, which includes an Information Model that depicts the basic concepts related to rules (i.e., conditions, actions, events) and policies (i.e., conditions' evaluation, resolution strategies, default actions), and a Data Model that covers the capabilities needed to describe different types of filtering and channel protection controls. Following state-of-the-art design patterns, the model allows for generating abstract versions of the security controls' languages and a model-driven approach for translating abstract policies into device-specific configuration settings. By validating its effectiveness in real-world scenarios, we show that SCM enables the automation of different and complex security tasks, i.e., accurate and granular security control comparison, policy refinement, and incident response. Lastly, we present opportunities for extensions and integration with other frameworks and models.
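To make the abstract's rule and policy concepts concrete, here is a minimal, hypothetical sketch in Python of an SCM-style information model and one refinement step. All class names, the condition field vocabulary, and the iptables mapping are illustrative assumptions, not the paper's actual model, metamodel, or generated languages.

```python
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    """Hypothetical action capabilities (the paper's Data Model covers more)."""
    ALLOW = "allow"
    DENY = "deny"


@dataclass
class Condition:
    """A condition evaluates one traffic field, e.g. src_ip or dst_port."""
    field_name: str
    value: str


@dataclass
class Rule:
    """A rule: all conditions must match for the action to apply."""
    conditions: list[Condition]
    action: Action


@dataclass
class Policy:
    """A policy: rules plus a resolution strategy and a default action."""
    rules: list[Rule]
    resolution_strategy: str = "first-match"
    default_action: Action = Action.DENY


def to_iptables(policy: Policy, chain: str = "INPUT") -> list[str]:
    """Illustrative model-driven refinement: abstract policy -> iptables-like
    commands. Real refinement would be driven by the target control's
    declared capabilities, not a hard-coded mapping like this one."""
    flags = {"src_ip": "-s", "dst_ip": "-d", "dst_port": "-p tcp --dport"}
    target = {Action.ALLOW: "ACCEPT", Action.DENY: "DROP"}
    lines = []
    for rule in policy.rules:  # append order preserves first-match semantics
        match = " ".join(f"{flags[c.field_name]} {c.value}" for c in rule.conditions)
        lines.append(f"iptables -A {chain} {match} -j {target[rule.action]}")
    lines.append(f"iptables -P {chain} {target[policy.default_action]}")
    return lines


if __name__ == "__main__":
    policy = Policy(rules=[
        Rule([Condition("src_ip", "10.0.0.0/8"), Condition("dst_port", "443")],
             Action.ALLOW),
    ])
    print("\n".join(to_iptables(policy)))
    # iptables -A INPUT -s 10.0.0.0/8 -p tcp --dport 443 -j ACCEPT
    # iptables -P INPUT DROP
```

The point of the sketch is the separation the abstract describes: policies are authored against abstract capabilities (conditions, actions, resolution strategies, default actions), and a translator emits device-specific configuration settings.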
Related papers
- Enhancing Security Control Production With Generative AI [2.869818284825133]
Security controls are mechanisms or policies designed for cloud based services to reduce risk, protect information, and ensure compliance with security regulations.
This paper explores the use of Generative AI to accelerate the generation of security controls.
By leveraging large language models and in-context learning, we propose a structured framework that reduces the time required for developing security controls from 2-3 days to less than one minute.
arXiv Detail & Related papers (2024-11-06T22:10:18Z)
- Controllable Safety Alignment: Inference-Time Adaptation to Diverse Safety Requirements [46.79887158348167]
The current paradigm for safety alignment of large language models (LLMs) follows a one-size-fits-all approach.
We propose Controllable Safety Alignment (CoSA), a framework designed to adapt models to diverse safety requirements without re-training.
arXiv Detail & Related papers (2024-10-11T16:38:01Z)
- A Survey and Comparative Analysis of Security Properties of CAN Authentication Protocols [92.81385447582882]
The design of the Controller Area Network (CAN) bus leaves in-vehicle communications inherently insecure.
This paper reviews and compares the 15 most prominent authentication protocols for the CAN bus.
We evaluate protocols based on essential operational criteria that contribute to ease of implementation.
arXiv Detail & Related papers (2024-01-19T14:52:04Z)
- Value Functions are Control Barrier Functions: Verification of Safe Policies using Control Theory [46.85103495283037]
We propose a new approach to apply verification methods from control theory to learned value functions.
We formalize original theorems that establish links between value functions and control barrier functions.
Our work marks a significant step towards a formal framework for the general, scalable, and verifiable design of RL-based control systems (the standard control barrier function condition underlying this entry is sketched after this list).
arXiv Detail & Related papers (2023-06-06T21:41:31Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Safe Reinforcement Learning via Confidence-Based Filters [78.39359694273575]
We develop a control-theoretic approach for certifying state safety constraints for nominal policies learned via standard reinforcement learning techniques.
We provide formal safety guarantees, and empirically demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2022-07-04T11:43:23Z)
- Safe RAN control: A Symbolic Reinforcement Learning Approach [62.997667081978825]
We present a Symbolic Reinforcement Learning (SRL) based architecture for safety control of Radio Access Network (RAN) applications.
We provide a purely automated procedure in which a user can specify high-level logical safety specifications for a given cellular network topology.
We introduce a user interface (UI) developed to help a user provide intent specifications to the system and inspect the differences in agent-proposed actions.
arXiv Detail & Related papers (2021-06-03T16:45:40Z)
- Constraints Satisfiability Driven Reinforcement Learning for Autonomous Cyber Defense [7.321728608775741]
We present a new hybrid autonomous agent architecture that aims to optimize and verify defense policies learned via reinforcement learning (RL).
We use constraints verification (using satisfiability modulo theory (SMT)) to steer the RL decision-making toward safe and effective actions.
Our evaluation of the presented approach in a simulated CPS environment shows that the agent quickly learns the optimal policy and defeats diversified attack strategies in 99% of cases.
arXiv Detail & Related papers (2021-04-19T01:08:30Z)
- Towards Safe Continuing Task Reinforcement Learning [21.390201009230246]
We propose an algorithm capable of operating in the continuing task setting without the need for restarts.
A numerical example shows the capabilities of the proposed approach in learning safe policies via safe exploration.
arXiv Detail & Related papers (2021-02-24T22:12:25Z)
- Runtime-Safety-Guided Policy Repair [13.038017178545728]
We study the problem of policy repair for learning-based control policies in safety-critical settings.
We propose to reduce or even eliminate control switching by "repairing" the trained policy based on runtime data produced by the safety controller.
arXiv Detail & Related papers (2020-08-17T23:31:48Z)
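As referenced in the value-functions entry above, the control barrier function (CBF) condition that this line of safe-control work builds on can be stated compactly. This is standard control-theoretic background for a control-affine system, not a theorem taken from the listed papers.

```latex
% Safe set C = {x : h(x) >= 0}. For a control-affine system
% dx/dt = f(x) + g(x) u, the function h is a control barrier
% function if, for some class-K function \alpha, a safe input
% exists at every state; C can then be rendered forward invariant:
\[
  \sup_{u \in U} \Big[ L_f h(x) + L_g h(x)\,u \Big] \;\ge\; -\alpha\big(h(x)\big)
\]
% The "value functions are CBFs" result identifies a (safety) value
% function with such an h, so verifying the value function certifies
% that the learned policy keeps the system inside C.
```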
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.