Quantifying Policy Administration Cost in an Active Learning Framework
- URL: http://arxiv.org/abs/2401.00086v1
- Date: Fri, 29 Dec 2023 22:12:53 GMT
- Title: Quantifying Policy Administration Cost in an Active Learning Framework
- Authors: Si Zhang and Philip W. L. Fong
- Abstract summary: This paper proposes a computational model for policy administration. A well-designed access control model must anticipate the addition of new users and resources so that the administration cost does not become prohibitive when the organization scales up.
- Score: 4.106460421493345
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a computational model for policy administration. As an organization evolves, new users and resources are gradually placed under the mediation of the access control model. Each time such new entities are added, the policy administrator must deliberate on how the access control policy shall be revised to reflect the new reality. A well-designed access control model must anticipate such changes so that the administration cost does not become prohibitive when the organization scales up. Unfortunately, past Access Control research does not offer a formal way to quantify the cost of policy administration. In this work, we propose to model ongoing policy administration in an active learning framework. Administration cost can be quantified in terms of query complexity. We demonstrate the utility of this approach by applying it to the evolution of protection domains. We also modelled different policy administration strategies in our framework. This allowed us to formally demonstrate that domain-based policies have a cost advantage over access control matrices because of the use of heuristic reasoning when the policy evolves. To the best of our knowledge, this is the first work to employ an active learning framework to study the cost of policy deliberation and demonstrate the cost advantage of heuristic policy administration.
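The framework above can be pictured, loosely, as an interaction between a policy administration strategy (the learner) and the human administrator (the teacher), who answers queries of the form "should subject s be allowed to access object o?"; administration cost is then the number of such queries. The following Python sketch is only a toy illustration of that idea under assumed names and data, not the paper's formal model: the AdminOracle class, the should_access query, and the domain-grouping heuristic are all hypothetical, and the paper analyses query complexity formally rather than by simulation.

```python
from itertools import product


class AdminOracle:
    """Stand-in for the human policy administrator: answers queries of the
    form 'should subject s be allowed to access object o?'. The ground-truth
    policy used here is purely illustrative."""

    def __init__(self, ground_truth):
        self.ground_truth = ground_truth  # set of permitted (subject, object) pairs
        self.queries = 0                  # administration cost = number of queries asked

    def should_access(self, subject, obj):
        self.queries += 1
        return (subject, obj) in self.ground_truth


def acm_strategy(oracle, subjects, objects):
    """Naive access-control-matrix strategy: ask the administrator about every cell."""
    return {(s, o): oracle.should_access(s, o) for s, o in product(subjects, objects)}


def domain_strategy(oracle, subjects, objects, probe_objects):
    """Hypothetical domain-based heuristic: probe each subject on a few
    representative objects, group subjects with identical answers into one
    protection domain, and ask the remaining questions only once per domain."""
    domains = {}        # probe signature -> subjects in that protection domain
    domain_policy = {}  # probe signature -> answers for the non-probe objects
    for s in subjects:
        signature = tuple(oracle.should_access(s, o) for o in probe_objects)
        if signature not in domains:
            domains[signature] = []
            domain_policy[signature] = {
                o: oracle.should_access(s, o)
                for o in objects if o not in probe_objects
            }
        domains[signature].append(s)
    return domains, domain_policy


if __name__ == "__main__":
    subjects = [f"user{i}" for i in range(20)]
    objects = [f"file{j}" for j in range(10)]
    # Illustrative ground truth: even-numbered users may access the first five
    # files, odd-numbered users the last five.
    truth = {(s, o) for s in subjects for o in objects
             if (int(s[4:]) % 2 == 0) == (int(o[4:]) < 5)}

    oracle = AdminOracle(truth)
    acm_strategy(oracle, subjects, objects)
    print("access control matrix strategy:", oracle.queries, "queries")  # 20 * 10 = 200

    oracle = AdminOracle(truth)
    domain_strategy(oracle, subjects, objects, probe_objects=objects[:2])
    print("domain-based heuristic strategy:", oracle.queries, "queries")  # 56 in this toy case
```

In this toy run the matrix strategy asks one question per cell (200 queries), while the domain heuristic probes each user on two representative files, groups users with identical answers into a shared protection domain, and fills in the remaining answers only once per domain. That is the flavour of the cost advantage of heuristic, domain-based administration that the paper formalizes.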
Related papers
- IOB: Integrating Optimization Transfer and Behavior Transfer for Multi-Policy Reuse [50.90781542323258]
Reinforcement learning (RL) agents can transfer knowledge from source policies to a related target task.
Previous methods introduce additional components, such as hierarchical policies or estimations of source policies' value functions.
We propose a novel transfer RL method that selects the source policy without training extra components.
arXiv Detail & Related papers (2023-08-14T09:22:35Z)
- Enabling Efficient, Reliable Real-World Reinforcement Learning with Approximate Physics-Based Models [10.472792899267365]
We focus on developing efficient and reliable policy optimization strategies for robot learning with real-world data.
In this paper we introduce a novel policy gradient-based policy optimization framework.
We show that our approach can learn precise control strategies reliably and with only minutes of real-world data.
arXiv Detail & Related papers (2023-07-16T22:36:36Z)
- Constructing a Good Behavior Basis for Transfer using Generalized Policy Updates [63.58053355357644]
We study the problem of learning a good set of policies, so that, when combined, they can solve a wide variety of unseen reinforcement learning tasks.
We show theoretically that having access to a specific set of diverse policies, which we call a set of independent policies, can allow for instantaneously achieving high-level performance.
arXiv Detail & Related papers (2021-12-30T12:20:46Z)
- Policy Search for Model Predictive Control with Application to Agile Drone Flight [56.24908013905407]
We propose a policy-search-for-model-predictive-control framework.
Specifically, we formulate the MPC as a parameterized controller, where the hard-to-optimize decision variables are represented as high-level policies.
Experiments show that our controller achieves robust and real-time control performance in both simulation and the real world.
arXiv Detail & Related papers (2021-12-07T17:39:24Z)
- PAMMELA: Policy Administration Methodology using Machine Learning [1.1744028458220428]
PAMMELA is a policy administration methodology using Machine Learning.
It generates a new policy by learning the rules of a policy currently enforced in a similar organization.
For policy augmentation, PAMMELA can infer new rules based on the knowledge gathered from the existing rules.
arXiv Detail & Related papers (2021-11-13T07:05:22Z)
- Goal-Conditioned Reinforcement Learning with Imagined Subgoals [89.67840168694259]
We propose to incorporate imagined subgoals into policy learning to facilitate learning of complex tasks.
Imagined subgoals are predicted by a separate high-level policy, which is trained simultaneously with the policy and its critic.
We evaluate our approach on complex robotic navigation and manipulation tasks and show that it outperforms existing methods by a large margin.
arXiv Detail & Related papers (2021-07-01T15:30:59Z)
- Reinforcement Learning [36.664136621546575]
Reinforcement learning (RL) is a general framework for adaptive control, which has proven to be efficient in many domains.
In this chapter, we present the basic framework of RL and recall the two main families of approaches that have been developed to learn a good policy.
arXiv Detail & Related papers (2020-05-29T06:53:29Z)
- An Automatic Attribute Based Access Control Policy Extraction from Access Logs [5.142415132534397]
An attribute-based access control (ABAC) model provides a more flexible approach for addressing the authorization needs of complex and dynamic systems.
We present a methodology for automatically learning ABAC policy rules from the access logs of a system, to simplify the policy development process. (A minimal illustrative sketch of this general idea appears after this list.)
arXiv Detail & Related papers (2020-03-16T15:08:54Z)
- Policy Evaluation Networks [50.53250641051648]
We introduce a scalable, differentiable fingerprinting mechanism that retains essential policy information in a concise embedding.
Our empirical results demonstrate that combining these three elements can produce policies that outperform those that generated the training data.
arXiv Detail & Related papers (2020-02-26T23:00:27Z)
- Efficient Deep Reinforcement Learning via Adaptive Policy Transfer [50.51637231309424]
A Policy Transfer Framework (PTF) is proposed to accelerate Reinforcement Learning (RL).
Our framework learns when and which source policy is the best to reuse for the target policy and when to terminate it.
Experimental results show it significantly accelerates the learning process and surpasses state-of-the-art policy transfer methods.
arXiv Detail & Related papers (2020-02-19T07:30:57Z)
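As a companion to the entry above on extracting ABAC policies from access logs, the sketch below illustrates, in the most naive way possible, how permitted requests in a log might be generalized into attribute-based candidate rules. It is not the algorithm of that paper (or of PAMMELA); the log format, attribute names, grouping key, and the mine_candidate_rules helper are all assumptions made purely for this example.

```python
from collections import defaultdict

# Hypothetical access-log entries: request attributes plus the decision that
# was actually made. The attribute names are invented for this example.
log = [
    {"dept": "eng",   "role": "dev",  "res_type": "repo", "sensitivity": "low",  "decision": "permit"},
    {"dept": "eng",   "role": "dev",  "res_type": "repo", "sensitivity": "high", "decision": "deny"},
    {"dept": "eng",   "role": "lead", "res_type": "repo", "sensitivity": "low",  "decision": "permit"},
    {"dept": "sales", "role": "rep",  "res_type": "crm",  "sensitivity": "low",  "decision": "permit"},
    {"dept": "sales", "role": "rep",  "res_type": "repo", "sensitivity": "low",  "decision": "deny"},
]


def mine_candidate_rules(entries, group_by=("res_type",)):
    """Very naive rule mining: group permitted requests, then keep only the
    attribute values that every permitted request in the group agrees on.
    A real ABAC-mining algorithm would also check the mined rules against
    the denied requests and search over different levels of rule generality."""
    groups = defaultdict(list)
    for e in entries:
        if e["decision"] == "permit":
            key = tuple(e[a] for a in group_by)
            groups[key].append(e)

    rules = []
    for perms in groups.values():
        rule = {}
        for attr in perms[0]:
            if attr == "decision":
                continue
            values = {p[attr] for p in perms}
            if len(values) == 1:  # attribute is constant across the whole group
                rule[attr] = values.pop()
        rules.append(rule)
    return rules


if __name__ == "__main__":
    for rule in mine_candidate_rules(log):
        print("candidate permit rule:", rule)
    # e.g. {'dept': 'eng', 'res_type': 'repo', 'sensitivity': 'low'} and
    #      {'dept': 'sales', 'role': 'rep', 'res_type': 'crm', 'sensitivity': 'low'}
```

A realistic policy-mining approach would additionally validate the mined rules against denied requests and involve the administrator in confirming them, which is exactly the kind of ongoing deliberation whose cost the main paper seeks to quantify.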
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.