Risk-Aware Fine-Grained Access Control in Cyber-Physical Contexts
- URL: http://arxiv.org/abs/2108.12739v1
- Date: Sun, 29 Aug 2021 03:38:45 GMT
- Title: Risk-Aware Fine-Grained Access Control in Cyber-Physical Contexts
- Authors: Jinxin Liu, Murat Simsek, Burak Kantarci, Melike Erol-Kantarci, Andrew
Malton, Andrew Walenstein
- Abstract summary: RASA is a context-sensitive access authorization approach and mechanism leveraging unsupervised machine learning to automatically infer risk-based authorization decision boundaries.
We explore RASA in a healthcare usage environment, wherein cyber and physical conditions create context-specific risks for protecting private health information.
- Score: 12.138525287184061
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Access to resources by users may need to be granted only upon certain
conditions and contexts, perhaps particularly in cyber-physical settings.
Unfortunately, creating and modifying context-sensitive access control
solutions in dynamic environments creates ongoing challenges to manage the
authorization contexts. This paper proposes RASA, a context-sensitive access
authorization approach and mechanism leveraging unsupervised machine learning
to automatically infer risk-based authorization decision boundaries. We explore
RASA in a healthcare usage environment, wherein cyber and physical conditions
create context-specific risks for protecting private health information. The
risk levels are associated with access control decisions recommended by a
security policy. A coupling method is introduced to track coexistence of the
objects within context using frequency and duration of coexistence, and these
are clustered to reveal sets of actions with common risk levels; these are used
to create authorization decision boundaries. In addition, we propose a method
for assessing the risk level and labelling the clusters with respect to their
corresponding risk levels. We evaluate the promise of RASA-generated policies
against a heuristic rule-based policy. By employing three different coupling
features (frequency-based, duration-based, and combined features), the
decisions of the unsupervised method and those of the policy are more than 99%
consistent.
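The abstract's coupling idea — tracking how often and how long objects coexist in a context, then grouping those features into risk levels — can be illustrated with a minimal sketch. All names, thresholds, and the toy one-dimensional split below are assumptions for illustration, not the paper's actual clustering method:

```python
# Hypothetical sketch (not the paper's implementation): build frequency- and
# duration-based coupling features for object pairs observed in context, then
# group them into risk levels to suggest authorization decision boundaries.
from collections import defaultdict

def coupling_features(observations):
    """observations: iterable of (object_a, object_b, overlap_seconds)."""
    freq = defaultdict(int)    # how often a pair co-occurs
    dur = defaultdict(float)   # total co-occurrence duration in seconds
    for a, b, seconds in observations:
        pair = tuple(sorted((a, b)))
        freq[pair] += 1
        dur[pair] += seconds
    # Combined feature vector per pair: (frequency, duration)
    return {pair: (freq[pair], dur[pair]) for pair in freq}

def cluster_by_risk(features, duration_threshold=60.0):
    """Toy stand-in for unsupervised clustering: pairs with long
    co-occurrence are labelled lower risk (familiar context), brief
    co-occurrences higher risk. The threshold is invented here."""
    clusters = {"low_risk": [], "high_risk": []}
    for pair, (_, duration) in features.items():
        label = "low_risk" if duration >= duration_threshold else "high_risk"
        clusters[label].append(pair)
    return clusters

# Invented healthcare-flavored example data
obs = [
    ("nurse", "patient_record", 120.0),
    ("nurse", "patient_record", 90.0),
    ("visitor", "patient_record", 5.0),
]
feats = coupling_features(obs)
clusters = cluster_by_risk(feats)
print(clusters)
```

A real pipeline would replace the threshold split with a proper unsupervised method (e.g. k-means over the combined feature vectors) and then label each cluster with a risk level, as the abstract describes.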
Related papers
- Contextual Safety Reasoning and Grounding for Open-World Robots [79.98924225712668]
CORE is a safety framework that enables online contextual reasoning, grounding, and enforcement without prior knowledge of the environment.
We provide probabilistic safety guarantees for CORE that account for perceptual uncertainty.
We demonstrate through simulation and real-world experiments that CORE enforces contextually appropriate behavior in unseen environments.
arXiv Detail & Related papers (2026-02-23T15:51:23Z)
- Beyond Static Alignment: Hierarchical Policy Control for LLM Safety via Risk-Aware Chain-of-Thought [5.251527748612469]
Large Language Models (LLMs) face a fundamental safety-helpfulness trade-off due to static, one-size-fits-all safety policies.
We present PACT (Prompt-Thought Action via Chain-of-Thought), a framework for dynamic safety control through explicit, risk-aware reasoning.
arXiv Detail & Related papers (2026-02-06T12:20:01Z)
- An Ontology-Based Approach to Security Risk Identification of Container Deployments in OT Contexts [1.826848871278733]
Security risk identification for OT container deployments is challenged by hybrid IT/OT architectures, fragmented stakeholder knowledge, and continuous system changes.
We propose a model-based approach, implemented as the Container Security Risk Ontology (CSRO).
CSRO integrates five key domains: adversarial behaviour, contextual assumptions, attack scenarios, risk assessment rules, and container security artefacts.
arXiv Detail & Related papers (2026-01-07T15:20:19Z)
- Lost in Vagueness: Towards Context-Sensitive Standards for Robustness Assessment under the EU AI Act [2.740981829798319]
Robustness is a key requirement for high-risk AI systems under the EU Artificial Intelligence Act (AI Act).
This paper investigates what it means for AI systems to be robust and illustrates the need for context-sensitive standardisation.
arXiv Detail & Related papers (2025-11-19T17:06:36Z)
- Uncertainty-Aware, Risk-Adaptive Access Control for Agentic Systems using an LLM-Judged TBAC Model [11.50995963023462]
This paper introduces an advanced security framework that extends the Task-Based Access Control (TBAC) model by using a Large Language Model (LLM) as an autonomous, risk-aware judge.
This model makes access control decisions not only based on an agent's intent but also by explicitly considering the inherent risk associated with target resources.
arXiv Detail & Related papers (2025-10-13T13:52:33Z)
- RADAR: A Risk-Aware Dynamic Multi-Agent Framework for LLM Safety Evaluation via Role-Specialized Collaboration [81.38705556267917]
Existing safety evaluation methods for large language models (LLMs) suffer from inherent limitations.
We introduce a theoretical framework that reconstructs the underlying risk concept space.
We propose RADAR, a multi-agent collaborative evaluation framework.
arXiv Detail & Related papers (2025-09-28T09:35:32Z)
- Towards Safety and Security Testing of Cyberphysical Power Systems by Shape Validation [42.350737545269105]
The complexity of cyberphysical power systems leads to larger attack surfaces that can be exploited by malicious actors.
We propose to meet those risks with a declarative approach to describe cyber power systems and automatically evaluate security and safety controls.
arXiv Detail & Related papers (2025-06-14T12:07:44Z)
- DRIFT: Dynamic Rule-Based Defense with Injection Isolation for Securing LLM Agents [52.92354372596197]
Large Language Models (LLMs) are increasingly central to agentic systems due to their strong reasoning and planning capabilities.
This interaction also introduces the risk of prompt injection attacks, where malicious inputs from external sources can mislead the agent's behavior.
We propose a Dynamic Rule-based Isolation Framework for Trustworthy agentic systems, which enforces both control and data-level constraints.
arXiv Detail & Related papers (2025-06-13T05:01:09Z)
- Learning Deterministic Policies with Policy Gradients in Constrained Markov Decision Processes [59.27926064817273]
We introduce an exploration-agnostic algorithm, called C-PG, which enjoys global last-iterate convergence guarantees under domination assumptions.
We empirically validate both the action-based (C-PGAE) and parameter-based (C-PGPE) variants of C-PG on constrained control tasks.
arXiv Detail & Related papers (2025-06-06T10:29:05Z)
- SPoRt -- Safe Policy Ratio: Certified Training and Deployment of Task Policies in Model-Free RL [54.022106606140774]
We present theoretical results that provide a bound on the probability of violating a safety property for a new task-specific policy in a model-free, episodic setup.
We also present SPoRt, which enables the user to trade off safety guarantees in exchange for task-specific performance.
arXiv Detail & Related papers (2025-04-08T19:09:07Z)
- Free Energy Risk Metrics for Systemically Safe AI: Gatekeeping Multi-Agent Study [0.4166512373146748]
We investigate the Free Energy Principle as a foundation for measuring risk in agentic and multi-agent systems.
We introduce a Cumulative Risk Exposure metric that is flexible to differing contexts and needs.
We show that the introduction of gatekeepers in an AV fleet, even at low penetration, can generate significant positive externalities in terms of increased system safety.
arXiv Detail & Related papers (2025-02-06T17:38:45Z)
- Trustworthy AI: Securing Sensitive Data in Large Language Models [0.0]
Large Language Models (LLMs) have transformed natural language processing (NLP) by enabling robust text generation and understanding.
This paper proposes a comprehensive framework for embedding trust mechanisms into LLMs to dynamically control the disclosure of sensitive information.
arXiv Detail & Related papers (2024-09-26T19:02:33Z)
- Last-Iterate Global Convergence of Policy Gradients for Constrained Reinforcement Learning [62.81324245896717]
We introduce an exploration-agnostic algorithm, called C-PG, which exhibits global last-iterate convergence guarantees under (weak) gradient domination assumptions.
We numerically validate our algorithms on constrained control problems, and compare them with state-of-the-art baselines.
arXiv Detail & Related papers (2024-07-15T14:54:57Z)
- Safe Multi-agent Learning via Trapping Regions [89.24858306636816]
We apply the concept of trapping regions, known from qualitative theory of dynamical systems, to create safety sets in the joint strategy space for decentralized learning.
We propose a binary partitioning algorithm for verification that candidate sets form trapping regions in systems with known learning dynamics, and a sampling algorithm for scenarios where learning dynamics are not known.
arXiv Detail & Related papers (2023-02-27T14:47:52Z)
- Safety-Constrained Policy Transfer with Successor Features [19.754549649781644]
We propose a Constrained Markov Decision Process (CMDP) formulation that enables the transfer of policies and adherence to safety constraints.
Our approach relies on a novel extension of generalized policy improvement to constrained settings via a Lagrangian formulation.
Our experiments in simulated domains show that our approach is effective; it visits unsafe states less frequently and outperforms alternative state-of-the-art methods when taking safety constraints into account.
arXiv Detail & Related papers (2022-11-10T06:06:36Z)
- Enforcing Hard Constraints with Soft Barriers: Safe Reinforcement Learning in Unknown Stochastic Environments [84.3830478851369]
We propose a safe reinforcement learning approach that can jointly learn the environment and optimize the control policy.
Our approach can effectively enforce hard safety constraints and significantly outperform CMDP-based baseline methods in system safe rate measured via simulations.
arXiv Detail & Related papers (2022-09-29T20:49:25Z)
- Constrained Policy Optimization for Controlled Self-Learning in Conversational AI Systems [18.546197100318693]
We introduce a scalable framework for supporting fine-grained exploration targets for individual domains via user-defined constraints.
We present a novel meta-gradient learning approach that is scalable and practical to address this problem.
We conduct extensive experiments using data from a real-world conversational AI on a set of realistic constraint benchmarks.
arXiv Detail & Related papers (2022-09-17T23:44:13Z)
- Inference and dynamic decision-making for deteriorating systems with probabilistic dependencies through Bayesian networks and deep reinforcement learning [0.0]
We propose an efficient algorithmic framework for inference and decision-making under uncertainty for engineering systems exposed to deteriorating environments.
In terms of policy optimization, we adopt a deep decentralized multi-agent actor-critic (DDMAC) reinforcement learning approach.
Results demonstrate that DDMAC policies offer substantial benefits when compared to state-of-the-art approaches.
arXiv Detail & Related papers (2022-09-02T14:45:40Z)
- Sample-Based Bounds for Coherent Risk Measures: Applications to Policy Synthesis and Verification [32.9142708692264]
This paper aims to address a few problems regarding risk-aware verification and policy synthesis.
First, we develop a sample-based method to evaluate a subset of a random variable distribution.
Second, we develop a robotic-based method to determine solutions to problems that outperform a large fraction of the decision space.
arXiv Detail & Related papers (2022-04-21T01:06:10Z)
- Multi-Objective SPIBB: Seldonian Offline Policy Improvement with Safety Constraints in Finite MDPs [71.47895794305883]
We study the problem of Safe Policy Improvement (SPI) under constraints in the offline Reinforcement Learning setting.
We present an SPI for this RL setting that takes into account the preferences of the algorithm's user for handling the trade-offs for different reward signals.
arXiv Detail & Related papers (2021-05-31T21:04:21Z)
- Closing the Closed-Loop Distribution Shift in Safe Imitation Learning [80.05727171757454]
We treat safe optimization-based control strategies as experts in an imitation learning problem.
We train a learned policy that can be cheaply evaluated at run-time and that provably satisfies the same safety guarantees as the expert.
arXiv Detail & Related papers (2021-02-18T05:11:41Z)
- Privacy-Constrained Policies via Mutual Information Regularized Policy Gradients [54.98496284653234]
We consider the task of training a policy that maximizes reward while minimizing disclosure of certain sensitive state variables through the actions.
We solve this problem by introducing a regularizer based on the mutual information between the sensitive state and the actions.
We develop a model-based estimator for optimization of privacy-constrained policies.
arXiv Detail & Related papers (2020-12-30T03:22:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.