Automated SELinux RBAC Policy Verification Using SMT
- URL: http://arxiv.org/abs/2312.04586v1
- Date: Mon, 4 Dec 2023 11:10:10 GMT
- Title: Automated SELinux RBAC Policy Verification Using SMT
- Authors: Divyam Pahuja, Alvin Tang, Klim Tsoutsman
- Abstract summary: Security-Enhanced Linux (SELinux) is a Linux kernel module that provides a role-based access control mechanism.
We present a tool to automate the conversion of SELinux policies into satisfiability modulo theories (SMT).
Our tool is capable of flagging common policy misconfigurations by asserting consistency between supplied RBAC policies and the intended specification.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Security-Enhanced Linux (SELinux) is a Linux kernel module that allows for a role-based access control (RBAC) mechanism. It provides a fine-grained security framework enabling system administrators to define security policies at the system and application level. Whilst SELinux offers robust security features through a customisable, powerful RBAC model, its manual policy management is prone to error, leaving the system vulnerable to accidental misconfigurations or loopholes. We present a tool to automate the conversion of SELinux policies into satisfiability modulo theories (SMT), enabling the verification of the intended security configurations using automated theorem proving. Our tool is capable of flagging common policy misconfigurations by asserting consistency between supplied RBAC policies and the intended specification by the user in SMT. RBAC policies are inherently complicated to verify entirely. We envision that the automated tool presented here can be further extended to identify an even broader range of policy misconfigurations, relieving the burden of managing convoluted policies on system administrators.
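The abstract describes encoding supplied RBAC policies as SMT constraints and checking them against a user-supplied specification with automated theorem proving. Below is a minimal illustrative sketch of that general idea using the Z3 Python bindings (pip install z3-solver); it is not the authors' tool, and the role, type, and permission names (webadm_r, shadow_t, the Boolean allow variables) and the encoding itself are hypothetical assumptions for illustration only.
```python
# Sketch: flag an RBAC misconfiguration by checking policy-vs-specification
# consistency with an SMT solver. All names below are hypothetical examples.
from z3 import Bools, Not, Solver, unsat

# Boolean variables meaning "role may perform this access on this type".
webadm_read_httpd, webadm_write_shadow = Bools(
    "webadm_read_httpd_config webadm_write_shadow")

solver = Solver()

# Encoding of the supplied policy: webadm_r may read httpd configuration
# files and (by a suspect rule) may also write to shadow_t.
solver.add(webadm_read_httpd)    # allow webadm_r httpd_config_t:file read
solver.add(webadm_write_shadow)  # allow webadm_r shadow_t:file write (suspect)

# Intended specification: webadm_r must never write to shadow_t.
spec = Not(webadm_write_shadow)

# The policy satisfies the specification iff (policy AND NOT spec) is
# unsatisfiable; a satisfying model is a counterexample (a misconfiguration).
solver.push()
solver.add(Not(spec))
if solver.check() == unsat:
    print("Policy satisfies the specification.")
else:
    print("Misconfiguration found; counterexample:", solver.model())
solver.pop()
```
Run as-is, the sketch reports a misconfiguration, since the encoded policy grants the write access the specification forbids.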
Related papers
- A Formal Model of Security Controls' Capabilities and Its Applications to Policy Refinement and Incident Management [0.2621730497733947]
This paper presents the Security Capability Model (SCM), a formal model that abstracts the features that security controls offer for enforcing security policies.
By validating its effectiveness in real-world scenarios, we show that SCM enables the automation of different and complex security tasks.
arXiv Detail & Related papers (2024-05-06T15:06:56Z)
- Securing the Open RAN Infrastructure: Exploring Vulnerabilities in Kubernetes Deployments [60.51751612363882]
We investigate the security implications of open-source and software-based Open Radio Access Network (RAN) systems.
We highlight the presence of potential vulnerabilities and misconfigurations in the infrastructure supporting the Near Real-Time RAN Intelligent Controller (RIC) cluster.
arXiv Detail & Related papers (2024-05-03T07:18:45Z)
- Compositional Policy Learning in Stochastic Control Systems with Formal Guarantees [0.0]
Reinforcement learning has shown promising results in learning neural network policies for complicated control tasks.
We propose a novel method for learning a composition of neural network policies in stochastic environments.
A formal certificate guarantees that a specification over the policy's behavior is satisfied with the desired probability.
arXiv Detail & Related papers (2023-12-03T17:04:18Z)
- SoK: Access Control Policy Generation from High-level Natural Language Requirements [1.3505077405741583]
Administrator-centered access control failures can cause data breaches, putting organizations at risk of financial loss and reputation damage.
Existing graphical policy configuration tools and automated policy generation frameworks attempt to help administrators configure and generate access control policies by avoiding such failures.
However, graphical policy configuration tools are prone to human errors, making them unusable.
On the other hand, automated policy generation frameworks are prone to erroneous predictions, making them unreliable.
arXiv Detail & Related papers (2023-10-05T03:45:20Z)
- Value Functions are Control Barrier Functions: Verification of Safe Policies using Control Theory [46.85103495283037]
We propose a new approach to apply verification methods from control theory to learned value functions.
We formalize original theorems that establish links between value functions and control barrier functions.
Our work marks a significant step towards a formal framework for the general, scalable, and verifiable design of RL-based control systems.
arXiv Detail & Related papers (2023-06-06T21:41:31Z)
- Safety Correction from Baseline: Towards the Risk-aware Policy in Robotics via Dual-agent Reinforcement Learning [64.11013095004786]
We propose a dual-agent safe reinforcement learning strategy consisting of a baseline and a safe agent.
Such a decoupled framework enables high flexibility, data efficiency and risk-awareness for RL-based control.
The proposed method outperforms the state-of-the-art safe RL algorithms on difficult robot locomotion and manipulation tasks.
arXiv Detail & Related papers (2022-12-14T03:11:25Z)
- Bounded Robustness in Reinforcement Learning via Lexicographic Objectives [54.00072722686121]
Policy robustness in Reinforcement Learning may not be desirable at any cost.
We study how policies can be maximally robust to arbitrary observational noise.
We propose a robustness-inducing scheme, applicable to any policy algorithm, that trades off expected policy utility for robustness.
arXiv Detail & Related papers (2022-09-30T08:53:18Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- A System for Interactive Examination of Learned Security Policies [0.0]
We present a system for interactive examination of learned security policies.
It allows a user to traverse episodes of Markov decision processes in a controlled manner.
arXiv Detail & Related papers (2022-04-03T17:55:32Z)
- Safe RAN control: A Symbolic Reinforcement Learning Approach [62.997667081978825]
We present a Symbolic Reinforcement Learning (SRL) based architecture for safety control of Radio Access Network (RAN) applications.
We provide a purely automated procedure in which a user can specify high-level logical safety specifications for a given cellular network topology.
We introduce a user interface (UI) developed to help a user set intent specifications to the system, and inspect the difference in agent proposed actions.
arXiv Detail & Related papers (2021-06-03T16:45:40Z)
- Runtime-Safety-Guided Policy Repair [13.038017178545728]
We study the problem of policy repair for learning-based control policies in safety-critical settings.
We propose to reduce or even eliminate control switching by 'repairing' the trained policy based on runtime data produced by the safety controller.
arXiv Detail & Related papers (2020-08-17T23:31:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.