Role-Aware Language Models for Secure and Contextualized Access Control in Organizations
- URL: http://arxiv.org/abs/2507.23465v1
- Date: Thu, 31 Jul 2025 11:41:04 GMT
- Title: Role-Aware Language Models for Secure and Contextualized Access Control in Organizations
- Authors: Saeed Almheiri, Yerulan Kongrat, Adrian Santosh, Ruslan Tasmukhanov, Josemaria Vera, Muhammad Dehan Al Kautsar, Fajri Koto
- Abstract summary: Large language models (LLMs) are increasingly deployed in enterprise settings. We investigate whether LLMs can be fine-tuned to generate responses that reflect the access privileges associated with different organizational roles.
- Score: 4.122315998598296
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As large language models (LLMs) are increasingly deployed in enterprise settings, controlling model behavior based on user roles becomes an essential requirement. Existing safety methods typically assume uniform access and focus on preventing harmful or toxic outputs, without addressing role-specific access constraints. In this work, we investigate whether LLMs can be fine-tuned to generate responses that reflect the access privileges associated with different organizational roles. We explore three modeling strategies: a BERT-based classifier, an LLM-based classifier, and role-conditioned generation. To evaluate these approaches, we construct two complementary datasets. The first is adapted from existing instruction-tuning corpora through clustering and role labeling, while the second is synthetically generated to reflect realistic, role-sensitive enterprise scenarios. We assess model performance across varying organizational structures and analyze robustness to prompt injection, role mismatch, and jailbreak attempts.
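To make the role-conditioned generation strategy concrete, the following is a minimal sketch of how a request could be wrapped with role metadata before being handed to a fine-tuned model, along with a simple guard against injected role claims. The role names, permission table, prompt template, and guard heuristic are illustrative assumptions, not the authors' actual data or code.

```python
# Minimal sketch of role-conditioned generation; roles, permissions, and the
# prompt template are illustrative assumptions, not the paper's actual setup.

ROLE_PERMISSIONS = {
    "hr_manager": {"employee_records", "salary_bands", "public_docs"},
    "engineer": {"codebase", "incident_reports", "public_docs"},
    "intern": {"public_docs"},
}

def build_role_conditioned_prompt(role: str, query: str) -> str:
    """Prepend the requesting role and its permitted resources to the query,
    so a model fine-tuned on role-labeled data can condition its answer on
    the caller's access privileges."""
    if role not in ROLE_PERMISSIONS:
        raise ValueError(f"unknown role: {role}")
    allowed = ", ".join(sorted(ROLE_PERMISSIONS[role]))
    return (
        f"### Role: {role}\n"
        f"### Permitted resources: {allowed}\n"
        f"### Query: {query}\n"
        "### Response:"
    )

def role_claim_is_consistent(declared_role: str, query: str) -> bool:
    """Cheap guard against prompt-injection-style role escalation: reject
    queries that mention a role name other than the authenticated one
    (a crude proxy for injected role claims)."""
    lowered = query.lower()
    return not any(
        other in lowered
        for other in ROLE_PERMISSIONS
        if other != declared_role
    )

if __name__ == "__main__":
    query = "Summarize last quarter's incident reports."
    if role_claim_is_consistent("intern", query):
        prompt = build_role_conditioned_prompt("intern", query)
        print(prompt)
        # A fine-tuned model (e.g., loaded with Hugging Face transformers)
        # would now generate a response that withholds anything outside the
        # intern's permitted resources.
```

In the classifier-based strategies described in the abstract, a similar permission table would instead drive a BERT- or LLM-based gate that decides whether a query is answerable for the declared role before any text is generated.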
Related papers
- PRvL: Quantifying the Capabilities and Risks of Large Language Models for PII Redaction [0.7421845364041001]
Redaction of Personally Identifiable Information (PII) from unstructured text is critical for ensuring data privacy in regulated domains. Recent advances in Large Language Models (LLMs) offer a promising alternative. We present a comprehensive analysis of LLMs as privacy-preserving PII Redaction systems. We release PRvL, an open-source suite of fine-tuned models and evaluation tools for general-purpose PII Redaction.
arXiv Detail & Related papers (2025-08-07T16:22:49Z)
- OrgAccess: A Benchmark for Role Based Access Control in Organization Scale LLMs [7.999158988904784]
Large Language Models (LLMs) serve as unified knowledge repositories and intelligent assistants in enterprise settings. Evaluating this crucial capability is inherently difficult due to the proprietary and sensitive nature of real-world corporate data and access control policies. We introduce OrgAccess, a synthetic yet representative benchmark consisting of 40 distinct types of permissions commonly relevant across different organizational roles and levels.
arXiv Detail & Related papers (2025-05-25T14:30:15Z)
- Single LLM, Multiple Roles: A Unified Retrieval-Augmented Generation Framework Using Role-Specific Token Optimization [64.33914369424494]
RoleRAG is a unified RAG framework that achieves efficient multi-task processing through role-specific token optimization. RoleRAG comprises six modules, each handling a specific sub-task within the RAG process. We introduce a query graph to represent the decomposition of the query, which can be dynamically resolved according to the decomposing state.
arXiv Detail & Related papers (2025-05-21T12:25:12Z)
- Say It Another Way: Auditing LLMs with a User-Grounded Automated Paraphrasing Framework [9.162876771766513]
We introduce AUGMENT, a framework for generating controlled, realistic prompt paraphrases based on linguistic structure and user demographics. AUGMENT ensures paraphrase quality through a combination of semantic, stylistic, and instruction-following criteria. Our findings highlight the need for more representative and structured approaches to prompt variation in large language models.
arXiv Detail & Related papers (2025-05-06T14:17:30Z)
- New Dataset and Methods for Fine-Grained Compositional Referring Expression Comprehension via Specialist-MLLM Collaboration [49.180693704510006]
Referring Expression Comprehension (REC) is a cross-modal task that evaluates the interplay of language understanding, image comprehension, and language-to-image grounding. It serves as an essential testing ground for Multimodal Large Language Models (MLLMs).
arXiv Detail & Related papers (2025-02-27T13:58:44Z)
- A Cooperative Multi-Agent Framework for Zero-Shot Named Entity Recognition [71.61103962200666]
Zero-shot named entity recognition (NER) aims to develop entity recognition systems from unannotated text corpora. Recent work has adapted large language models (LLMs) for zero-shot NER by crafting specialized prompt templates. We introduce the cooperative multi-agent system (CMAS), a novel framework for zero-shot NER.
arXiv Detail & Related papers (2025-02-25T23:30:43Z)
- RNR: Teaching Large Language Models to Follow Roles and Rules [153.6596303205894]
We propose RNR, an automated data generation pipeline that generates diverse roles and rules from existing IFT instructions.
This data can then be used to train models that follow complex system prompts.
Our framework significantly improves role and rule following capability in large language models.
arXiv Detail & Related papers (2024-09-10T06:07:32Z)
- Enhancing Role-playing Systems through Aggressive Queries: Evaluation and Improvement [17.5855800570993]
Large Language Models (LLMs) have propelled dialogue generation into new realms, particularly in the field of role-playing systems (RPSs).
Existing LLM-based RPSs still struggle to align with roles when handling intricate and trapped queries in boundary scenarios.
We design the Modular ORchestrated Trap-setting Interaction SystEm (MORTISE) to benchmark and improve the role-playing LLMs' performance.
arXiv Detail & Related papers (2024-02-16T12:12:05Z)
- RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models [107.00832724504752]
We introduce RoleLLM, a framework to benchmark, elicit, and enhance role-playing abilities in Large Language Models (LLMs).
Using Context-Instruct and RoleGPT, we create RoleBench, the first systematic and fine-grained character-level benchmark dataset for role-playing, with 168,093 samples.
arXiv Detail & Related papers (2023-10-01T17:52:59Z)
- Intuitive or Dependent? Investigating LLMs' Behavior Style to Conflicting Prompts [9.399159332152013]
This study investigates the behaviors of Large Language Models (LLMs) when faced with conflicting prompts versus their internal memory.
This will help to explain LLMs' decision mechanisms and also benefit real-world applications such as retrieval-augmented generation (RAG).
arXiv Detail & Related papers (2023-09-29T17:26:03Z)
- Improving Open Information Extraction with Large Language Models: A Study on Demonstration Uncertainty [52.72790059506241]
The Open Information Extraction (OIE) task aims to extract structured facts from unstructured text.
Despite the potential of large language models (LLMs) like ChatGPT as general task solvers, they lag behind state-of-the-art (supervised) methods on OIE tasks.
arXiv Detail & Related papers (2023-09-07T01:35:24Z)
- RODE: Learning Roles to Decompose Multi-Agent Tasks [69.56458960841165]
Role-based learning holds the promise of achieving scalable multi-agent learning by decomposing complex tasks using roles.
We propose to first decompose joint action spaces into restricted role action spaces by clustering actions according to their effects on the environment and other agents.
By virtue of these advances, our method outperforms the current state-of-the-art MARL algorithms on 10 of the 14 scenarios that comprise the challenging StarCraft II micromanagement benchmark.
arXiv Detail & Related papers (2020-10-04T09:20:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.