LMN: A Tool for Generating Machine Enforceable Policies from Natural Language Access Control Rules using LLMs
- URL: http://arxiv.org/abs/2502.12460v1
- Date: Tue, 18 Feb 2025 02:45:46 GMT
- Title: LMN: A Tool for Generating Machine Enforceable Policies from Natural Language Access Control Rules using LLMs
- Authors: Pratik Sonune, Ritwik Rai, Shamik Sural, Vijayalakshmi Atluri, Ashish Kundu,
- Abstract summary: Rules or guidelines called Natural Language Access Control Policies (NLACPs) cannot be directly used in a target access control model like Attribute-based Access Control (ABAC). Manually translating the NLACP rules into Machine Enforceable Security Policies (MESPs) is both time consuming and resource intensive. We have developed a free web-based tool called LMN (LLMs for generating MESPs from NLACPs) that takes an NLACP as input and converts it into a corresponding MESP.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Organizations often lay down rules or guidelines called Natural Language Access Control Policies (NLACPs) for specifying who gets access to which information and when. However, these cannot be directly used in a target access control model like Attribute-based Access Control (ABAC). Manually translating the NLACP rules into Machine Enforceable Security Policies (MESPs) is both time consuming and resource intensive, rendering it infeasible especially for large organizations. Automated machine translation workflows, on the other hand, require information security officers to be adept at using such processes. To effectively address this problem, we have developed a free, publicly accessible web-based tool called LMN (LLMs for generating MESPs from NLACPs) that takes an NLACP as input and converts it into a corresponding MESP. Internally, LMN uses GPT-3.5 API calls and an appropriately chosen prompt. Extensive experiments with different prompts and performance metrics firmly establish the usefulness of LMN.
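The pipeline the abstract describes (an NLACP goes in, a prompted LLM call comes back, and the reply is parsed into an MESP) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the prompt wording, the ABAC JSON schema, and the example policy are all assumptions, and the model reply is stubbed locally rather than fetched from the GPT-3.5 API.

```python
import json

# Hypothetical prompt in the spirit of LMN's "appropriately chosen prompt";
# the exact wording and output schema are illustrative assumptions.
ABAC_PROMPT_TEMPLATE = (
    "Convert the following natural language access control policy into a "
    "machine-enforceable ABAC rule, returned as JSON with keys "
    "'subject_attrs', 'object_attrs', 'action', and 'condition'.\n\n"
    "Policy: {nlacp}"
)

def build_prompt(nlacp: str) -> str:
    """Fill the prompt template with the NLACP text."""
    return ABAC_PROMPT_TEMPLATE.format(nlacp=nlacp)

def parse_mesp(llm_output: str) -> dict:
    """Parse the model's JSON reply into an ABAC rule, checking required keys."""
    rule = json.loads(llm_output)
    missing = {"subject_attrs", "object_attrs", "action", "condition"} - rule.keys()
    if missing:
        raise ValueError(f"incomplete MESP, missing keys: {missing}")
    return rule

# A reply the model might plausibly return for the (invented) policy
# "Doctors may read patient records of their own department."
reply = json.dumps({
    "subject_attrs": {"role": "doctor"},
    "object_attrs": {"type": "patient_record"},
    "action": "read",
    "condition": "subject.department == object.department",
})
mesp = parse_mesp(reply)
```

In a real deployment the stubbed `reply` would be the LLM's response, and the key check in `parse_mesp` is one plausible way to catch malformed generations before a policy is enforced.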
Related papers
- Agentic Privacy-Preserving Machine Learning [5.695349155812586]
Privacy-preserving machine learning (PPML) is critical to ensure data privacy in AI. We propose a novel framework named Agentic-PPML to make PPML in LLMs practical.
arXiv Detail & Related papers (2025-07-30T08:20:45Z) - Beyond Syntax: Action Semantics Learning for App Agents [60.56331102288794]
Action Semantics Learning (ASL) is a learning framework where the learning objective is capturing the semantics of the ground truth actions. ASL significantly improves the accuracy and generalisation of App agents over existing methods.
arXiv Detail & Related papers (2025-06-21T12:08:19Z) - How Good LLM-Generated Password Policies Are? [0.1747820331822631]
We study the application of Large Language Models within the context of Cybersecurity Access Control Systems. Specifically, we investigate the consistency and accuracy of LLM-generated password policies, translating natural language prompts into executable pwquality.conf configuration files. Our findings underscore significant challenges in the current generation of LLMs and contribute valuable insights into refining the deployment of LLMs in Access Control Systems.
arXiv Detail & Related papers (2025-06-10T01:12:31Z) - Permissioned LLMs: Enforcing Access Control in Large Language Models [14.935672762016972]
Permissioned LLMs (PermLLM) superimpose organizational data access control structures on query responses. PermLLM mechanisms build on efficient fine-tuning to achieve the desired access control. We demonstrate the efficacy of our PermLLM mechanisms through extensive experiments on four public datasets.
arXiv Detail & Related papers (2025-05-28T20:47:02Z) - Say What You Mean: Natural Language Access Control with Large Language Models for Internet of Things [29.816322400339228]
We present LACE, a framework that bridges the gap between human intent and machine-enforceable logic. It combines prompt-guided policy generation, retrieval-augmented reasoning, and formal validation to support expressive, interpretable, and verifiable access control. LACE achieves 100% correctness in verified policy generation and up to 88% decision accuracy with a 0.79 F1-score.
arXiv Detail & Related papers (2025-05-28T10:59:00Z) - Porting an LLM based Application from ChatGPT to an On-Premise Environment [2.4742581572364126]
We study the porting process of a real-life application using ChatGPT to an on-premise environment.
The main considerations in the porting process include transparency of open source models and cost of hardware.
arXiv Detail & Related papers (2025-04-10T16:29:26Z) - MoRE-LLM: Mixture of Rule Experts Guided by a Large Language Model [54.14155564592936]
We propose a Mixture of Rule Experts guided by a Large Language Model (MoRE-LLM)
MoRE-LLM steers the discovery of local rule-based surrogates during training and their utilization for the classification task.
LLM is responsible for enhancing the domain knowledge alignment of the rules by correcting and contextualizing them.
arXiv Detail & Related papers (2025-03-26T11:09:21Z) - Defeating Prompt Injections by Design [79.00910871948787]
CaMeL is a robust defense that creates a protective system layer around the Large Language Model. To operate, CaMeL explicitly extracts the control and data flows from the (trusted) query. To further improve security, CaMeL uses a notion of a capability to prevent the exfiltration of private data over unauthorized data flows.
arXiv Detail & Related papers (2025-03-24T15:54:10Z) - Synthesizing Access Control Policies using Large Language Models [0.5762345156477738]
Cloud compute systems allow administrators to write access control policies that govern access to private data.
While policies are written in convenient languages, such as AWS Identity and Access Management Policy Language, manually written policies often become complex and error prone.
In this paper, we investigate whether and how well Large Language Models (LLMs) can be used to synthesize access control policies.
arXiv Detail & Related papers (2025-03-14T16:40:25Z) - FlowAgent: Achieving Compliance and Flexibility for Workflow Agents [31.088578094151178]
FlowAgent is a novel agent framework designed to maintain both compliance and flexibility.
Building on PDL, we develop a comprehensive framework that empowers LLMs to manage OOW queries effectively.
We present a new evaluation methodology to rigorously assess an LLM agent's ability to handle OOW scenarios.
arXiv Detail & Related papers (2025-02-20T07:59:31Z) - Rule-ATT&CK Mapper (RAM): Mapping SIEM Rules to TTPs Using LLMs [22.791057694472634]
Rule-ATT&CK Mapper (RAM) is a framework that automates the mapping of structured SIEM rules to MITRE ATT&CK techniques. RAM's multi-stage pipeline, which was inspired by the prompt chaining technique, enhances mapping accuracy without requiring LLM pre-training or fine-tuning.
arXiv Detail & Related papers (2025-02-04T14:16:02Z) - Learning to Ask: When LLM Agents Meet Unclear Instruction [55.65312637965779]
Large language models (LLMs) can leverage external tools for addressing a range of tasks unattainable through language skills alone.
We evaluate the performance of LLMs tool-use under imperfect instructions, analyze the error patterns, and build a challenging tool-use benchmark called Noisy ToolBench.
We propose a novel framework, Ask-when-Needed (AwN), which prompts LLMs to ask questions to users whenever they encounter obstacles due to unclear instructions.
arXiv Detail & Related papers (2024-08-31T23:06:12Z) - Open-domain Implicit Format Control for Large Language Model Generation [52.83173553689678]
We introduce a novel framework for controlled generation in large language models (LLMs)
This study investigates LLMs' capabilities to follow open-domain, one-shot constraints and replicate the format of the example answers.
We also develop a dataset collection methodology for supervised fine-tuning that enhances the open-domain format control of LLMs without degrading output quality.
arXiv Detail & Related papers (2024-08-08T11:51:45Z) - AutoGuide: Automated Generation and Selection of Context-Aware Guidelines for Large Language Model Agents [74.17623527375241]
We introduce a novel framework, called AutoGuide, which automatically generates context-aware guidelines from offline experiences. As a result, our guidelines facilitate the provision of relevant knowledge for the agent's current decision-making process. Our evaluation demonstrates that AutoGuide significantly outperforms competitive baselines in complex benchmark domains.
arXiv Detail & Related papers (2024-03-13T22:06:03Z) - DECIDER: A Dual-System Rule-Controllable Decoding Framework for Language Generation [57.07295906718989]
Constrained decoding approaches aim to control the meaning or style of text generated by pre-trained language models (PLMs) for various tasks at inference time. These methods often guide plausible continuations by greedily and explicitly selecting targets. Inspired by cognitive dual-process theory, we propose a novel decoding framework, DECIDER.
arXiv Detail & Related papers (2024-03-04T11:49:08Z) - Beyond Natural Language: LLMs Leveraging Alternative Formats for Enhanced Reasoning and Communication [79.79948834910579]
Natural language (NL) has long been the predominant format for human cognition and communication.
In this work, we challenge the default use of NL by exploring the utility of non-NL formats in different contexts.
arXiv Detail & Related papers (2024-02-28T16:07:54Z) - The potential of LLMs for coding with low-resource and domain-specific programming languages [0.0]
This study focuses on the econometric scripting language named hansl of the open-source software gretl.
Our findings suggest that LLMs can be a useful tool for writing, understanding, improving, and documenting gretl code.
arXiv Detail & Related papers (2023-07-24T17:17:13Z) - Augmented Large Language Models with Parametric Knowledge Guiding [72.71468058502228]
Large Language Models (LLMs) have significantly advanced natural language processing (NLP) with their impressive language understanding and generation capabilities.
Their performance may be suboptimal for domain-specific tasks that require specialized knowledge due to limited exposure to the related data.
We propose the novel Parametric Knowledge Guiding (PKG) framework, which equips LLMs with a knowledge-guiding module to access relevant knowledge.
arXiv Detail & Related papers (2023-05-08T15:05:16Z) - Augmented Language Models: a Survey [55.965967655575454]
This survey reviews works in which language models (LMs) are augmented with reasoning skills and the ability to use tools.
We refer to them as Augmented Language Models (ALMs).
The missing token objective allows ALMs to learn to reason, use tools, and even act, while still performing standard natural language tasks.
arXiv Detail & Related papers (2023-02-15T18:25:52Z)
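Several of the papers above target concrete, enforceable policy formats. For instance, the password-policy study translates natural language prompts into pwquality.conf files; the following sketch shows the kind of mapping involved, with a hypothetical helper standing in for the LLM step and purely illustrative thresholds (pwquality's `minlen` sets the minimum length, while negative `dcredit`/`ucredit` values require at least that many digits or uppercase letters).

```python
# Illustrative sketch: render a simple structured password policy as
# pwquality.conf directives. The helper and its rule set are assumptions,
# not the paper's pipeline; in the paper this translation is done by an LLM.

def to_pwquality(min_length: int, require_digit: bool, require_upper: bool) -> str:
    """Emit pwquality.conf lines for a minimal structured policy."""
    lines = [f"minlen = {min_length}"]   # minimum password length
    if require_digit:
        lines.append("dcredit = -1")     # require at least one digit
    if require_upper:
        lines.append("ucredit = -1")     # require at least one uppercase letter
    return "\n".join(lines)

# e.g. "passwords must be at least 12 characters with a digit and a capital"
conf = to_pwquality(12, require_digit=True, require_upper=True)
```

Checking that LLM-generated configurations of this shape are consistent with the natural language prompt is exactly the evaluation question that paper raises.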
This list is automatically generated from the titles and abstracts of the papers in this site.