LMN: A Tool for Generating Machine Enforceable Policies from Natural Language Access Control Rules using LLMs
- URL: http://arxiv.org/abs/2502.12460v1
- Date: Tue, 18 Feb 2025 02:45:46 GMT
- Title: LMN: A Tool for Generating Machine Enforceable Policies from Natural Language Access Control Rules using LLMs
- Authors: Pratik Sonune, Ritwik Rai, Shamik Sural, Vijayalakshmi Atluri, Ashish Kundu
- Abstract summary: Rules or guidelines called Natural Language Access Control Policies (NLACPs) cannot be directly used in a target access control model like Attribute-based Access Control (ABAC). Manually translating NLACP rules into Machine Enforceable Security Policies (MESPs) is both time consuming and resource intensive. We have developed a free web-based tool called LMN (LLMs for generating MESPs from NLACPs) that takes an NLACP as input and converts it into a corresponding MESP.
- Score: 0.435105239054559
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Organizations often lay down rules or guidelines called Natural Language Access Control Policies (NLACPs) for specifying who gets access to which information and when. However, these cannot be directly used in a target access control model like Attribute-based Access Control (ABAC). Manually translating the NLACP rules into Machine Enforceable Security Policies (MESPs) is both time consuming and resource intensive, rendering it infeasible especially for large organizations. Automated machine translation workflows, on the other hand, require information security officers to be adept at using such processes. To effectively address this problem, we have developed a free, publicly accessible web-based tool called LMN (LLMs for generating MESPs from NLACPs) that takes an NLACP as input and converts it into a corresponding MESP. Internally, LMN uses GPT-3.5 API calls with an appropriately chosen prompt. Extensive experiments with different prompts and performance metrics firmly establish the usefulness of LMN.
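The abstract describes the core LMN workflow: an NLACP sentence is embedded in a prompt, sent to GPT-3.5, and the reply is treated as a structured MESP. A minimal sketch of that pattern is below; the prompt wording, the JSON field names (`subject`, `action`, `resource`, `condition`), and the helper names are illustrative assumptions, not the paper's actual prompt or schema.

```python
import json

# Hypothetical conversion prompt; the paper's actual prompt is not given here.
PROMPT_TEMPLATE = (
    "Convert the following natural-language access control policy into an "
    "ABAC rule expressed as JSON with keys 'subject', 'action', 'resource', "
    "and 'condition'.\nPolicy: {nlacp}"
)

def build_prompt(nlacp: str) -> str:
    """Embed one NLACP sentence in the conversion prompt."""
    return PROMPT_TEMPLATE.format(nlacp=nlacp)

def parse_mesp(llm_output: str) -> dict:
    """Parse the model's JSON reply into a machine-enforceable policy dict,
    rejecting replies that omit a required field."""
    rule = json.loads(llm_output)
    missing = {"subject", "action", "resource", "condition"} - rule.keys()
    if missing:
        raise ValueError(f"incomplete MESP, missing fields: {sorted(missing)}")
    return rule

# The GPT-3.5 call itself (requires an API key, so shown as a comment only):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-3.5-turbo",
#     messages=[{"role": "user", "content": build_prompt(policy_text)}],
# ).choices[0].message.content
# mesp = parse_mesp(reply)
```

Validating the reply before accepting it matters here: an LLM reply that drops a field would otherwise silently yield an unenforceable policy.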
Related papers
- Porting an LLM based Application from ChatGPT to an On-Premise Environment [2.4742581572364126]
We study the porting process of a real-life application using ChatGPT to an on-premise environment.
The main considerations in the porting process include transparency of open source models and cost of hardware.
arXiv Detail & Related papers (2025-04-10T16:29:26Z) - MoRE-LLM: Mixture of Rule Experts Guided by a Large Language Model [54.14155564592936]
We propose a Mixture of Rule Experts guided by a Large Language Model (MoRE-LLM)
MoRE-LLM steers the discovery of local rule-based surrogates during training and their utilization for the classification task.
LLM is responsible for enhancing the domain knowledge alignment of the rules by correcting and contextualizing them.
arXiv Detail & Related papers (2025-03-26T11:09:21Z) - Synthesizing Access Control Policies using Large Language Models [0.5762345156477738]
Cloud compute systems allow administrators to write access control policies that govern access to private data.
While policies are written in convenient languages, such as AWS Identity and Access Management Policy Language, manually written policies often become complex and error prone.
In this paper, we investigate whether and how well Large Language Models (LLMs) can be used to synthesize access control policies.
arXiv Detail & Related papers (2025-03-14T16:40:25Z) - FlowAgent: Achieving Compliance and Flexibility for Workflow Agents [31.088578094151178]
FlowAgent is a novel agent framework designed to maintain both compliance and flexibility.
Building on PDL, we develop a comprehensive framework that empowers LLMs to manage OOW queries effectively.
We present a new evaluation methodology to rigorously assess an LLM agent's ability to handle OOW scenarios.
arXiv Detail & Related papers (2025-02-20T07:59:31Z) - Rule-ATT&CK Mapper (RAM): Mapping SIEM Rules to TTPs Using LLMs [22.791057694472634]
Rule-ATT&CK Mapper (RAM) is a framework that automates the mapping of structured SIEM rules to MITRE ATT&CK techniques. RAM's multi-stage pipeline, which was inspired by the prompt chaining technique, enhances mapping accuracy without requiring LLM pre-training or fine-tuning.
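The prompt chaining technique the RAM summary refers to feeds each stage's LLM output into the next stage's prompt. A generic sketch of that control flow, assuming a hypothetical three-stage pipeline (the stage prompts below are invented for illustration, not RAM's actual stages):

```python
from typing import Callable, Iterable

def chain_prompts(stages: Iterable[str],
                  llm: Callable[[str], str],
                  siem_rule: str) -> str:
    """Run a SIEM rule through successive prompt templates; each stage's
    LLM output becomes the {input} of the next stage (prompt chaining)."""
    context = siem_rule
    for template in stages:
        context = llm(template.format(input=context))
    return context

# Illustrative stages: summarize, extract behaviours, then map to ATT&CK.
STAGES = [
    "Summarize what this SIEM detection rule looks for: {input}",
    "List the attacker behaviours implied by: {input}",
    "Name the most likely MITRE ATT&CK technique for: {input}",
]
```

Because each stage sees only the previous stage's output, the pipeline narrows a verbose rule down step by step, which is the property the abstract credits for improved mapping accuracy without any fine-tuning.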
arXiv Detail & Related papers (2025-02-04T14:16:02Z) - Learning to Ask: When LLM Agents Meet Unclear Instruction [55.65312637965779]
Large language models (LLMs) can leverage external tools for addressing a range of tasks unattainable through language skills alone.
We evaluate the performance of LLMs tool-use under imperfect instructions, analyze the error patterns, and build a challenging tool-use benchmark called Noisy ToolBench.
We propose a novel framework, Ask-when-Needed (AwN), which prompts LLMs to ask questions to users whenever they encounter obstacles due to unclear instructions.
arXiv Detail & Related papers (2024-08-31T23:06:12Z) - Open-domain Implicit Format Control for Large Language Model Generation [52.83173553689678]
We introduce a novel framework for controlled generation in large language models (LLMs).
This study investigates LLMs' capabilities to follow open-domain, one-shot constraints and replicate the format of the example answers.
We also develop a dataset collection methodology for supervised fine-tuning that enhances the open-domain format control of LLMs without degrading output quality.
arXiv Detail & Related papers (2024-08-08T11:51:45Z) - AutoGuide: Automated Generation and Selection of Context-Aware Guidelines for Large Language Model Agents [74.17623527375241]
We introduce a novel framework, called AutoGuide, which automatically generates context-aware guidelines from offline experiences. As a result, our guidelines facilitate the provision of relevant knowledge for the agent's current decision-making process. Our evaluation demonstrates that AutoGuide significantly outperforms competitive baselines in complex benchmark domains.
arXiv Detail & Related papers (2024-03-13T22:06:03Z) - Beyond Natural Language: LLMs Leveraging Alternative Formats for Enhanced Reasoning and Communication [79.79948834910579]
Natural language (NL) has long been the predominant format for human cognition and communication.
In this work, we challenge the default use of NL by exploring the utility of non-NL formats in different contexts.
arXiv Detail & Related papers (2024-02-28T16:07:54Z) - The potential of LLMs for coding with low-resource and domain-specific programming languages [0.0]
This study focuses on the econometric scripting language named hansl of the open-source software gretl.
Our findings suggest that LLMs can be a useful tool for writing, understanding, improving, and documenting gretl code.
arXiv Detail & Related papers (2023-07-24T17:17:13Z) - Augmented Large Language Models with Parametric Knowledge Guiding [72.71468058502228]
Large Language Models (LLMs) have significantly advanced natural language processing (NLP) with their impressive language understanding and generation capabilities.
Their performance may be suboptimal for domain-specific tasks that require specialized knowledge due to limited exposure to the related data.
We propose the novel Parametric Knowledge Guiding (PKG) framework, which equips LLMs with a knowledge-guiding module to access relevant knowledge.
arXiv Detail & Related papers (2023-05-08T15:05:16Z) - Augmented Language Models: a Survey [55.965967655575454]
This survey reviews works in which language models (LMs) are augmented with reasoning skills and the ability to use tools.
We refer to them as Augmented Language Models (ALMs).
The missing token objective allows ALMs to learn to reason, use tools, and even act, while still performing standard natural language tasks.
arXiv Detail & Related papers (2023-02-15T18:25:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.