Can LLMs Make (Personalized) Access Control Decisions?
- URL: http://arxiv.org/abs/2511.20284v1
- Date: Tue, 25 Nov 2025 13:11:23 GMT
- Title: Can LLMs Make (Personalized) Access Control Decisions?
- Authors: Friederike Groschupp, Daniele Lain, Aritra Dhar, Lara Magdalena Lazier, Srdjan Čapkun,
- Abstract summary: We propose to leverage the processing and reasoning capabilities of large language models to make dynamic, context-aware access control decisions. We conducted a user study, which resulted in a dataset of 307 natural-language privacy statements and 14,682 access control decisions made by users. Our results show that, in general, LLMs can reflect users' preferences well, achieving up to 86% accuracy when compared to the decision made by the majority of users.
- Score: 2.854451361373021
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Precise access control decisions are crucial to the security of both traditional applications and emerging agent-based systems. Typically, these decisions are made by users during app installation or at runtime. Due to the increasing complexity and automation of systems, making these access control decisions can add a significant cognitive load on users, often overloading them and leading to suboptimal or even arbitrary access control decisions. To address this problem, we propose to leverage the processing and reasoning capabilities of large language models (LLMs) to make dynamic, context-aware decisions aligned with the user's security preferences. For this purpose, we conducted a user study, which resulted in a dataset of 307 natural-language privacy statements and 14,682 access control decisions made by users. We then compare these decisions against those made by two versions of LLMs: a general and a personalized one, for which we also gathered user feedback on 1,446 of its decisions. Our results show that in general, LLMs can reflect users' preferences well, achieving up to 86% accuracy when compared to the decision made by the majority of users. Our study also reveals a crucial trade-off in personalizing such a system: while providing user-specific privacy preferences to the LLM generally improves agreement with individual user decisions, adhering to those preferences can also violate some security best practices. Based on our findings, we discuss design and risk considerations for implementing a practical natural-language-based access control system that balances personalization, security, and utility.
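The pipeline the abstract describes (natural-language privacy statement in, allow/deny decision out) can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the prompt format, field names, and the stub model are all hypothetical, and any real deployment would call an actual LLM API in place of `stub_model`.

```python
def build_prompt(privacy_statement: str, request: dict) -> str:
    """Compose a decision prompt from the user's natural-language
    privacy statement and a structured access request."""
    return (
        "You are an access control assistant.\n"
        f"User privacy preference: {privacy_statement}\n"
        f"App '{request['app']}' requests access to: {request['resource']} "
        f"for purpose: {request['purpose']}\n"
        "Answer with exactly ALLOW or DENY."
    )

def parse_decision(model_output: str) -> bool:
    """Map the model's free-form answer to a boolean (True = allow).
    Ambiguous answers deny by default (fail-closed)."""
    return model_output.strip().upper().startswith("ALLOW")

def decide(privacy_statement: str, request: dict, ask_model) -> bool:
    """ask_model is any callable that sends a prompt string to an LLM
    and returns its text response."""
    return parse_decision(ask_model(build_prompt(privacy_statement, request)))

# Hypothetical stub standing in for an LLM: denies anything
# mentioning "location", allows the rest.
def stub_model(prompt: str) -> str:
    return "DENY" if "location" in prompt else "ALLOW"

print(decide("Never share my location with third parties.",
             {"app": "WeatherNow", "resource": "location",
              "purpose": "local forecast"}, stub_model))  # prints False
```

The fail-closed default in `parse_decision` reflects the security-versus-personalization trade-off the paper raises: when the model's answer is ambiguous, denying is the conservative choice.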
Related papers
- Personalizing Agent Privacy Decisions via Logical Entailment [21.171501108831034]
We focus on personalizing language models' privacy decisions, grounding their judgments directly in prior user privacy decisions. Our findings suggest that general privacy norms are insufficient for effective personalization of privacy decisions. We propose ARIEL, a framework that jointly leverages a language model and rule-based logic for structured data-sharing reasoning.
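The core idea of grounding new decisions in prior ones via logical entailment can be sketched as a simple subsumption check. Everything here is illustrative: the `Decision` fields, the category hierarchy, and the deferral behavior are assumptions, not ARIEL's actual rule system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Decision:
    data_type: str   # e.g. "location"
    recipient: str   # e.g. "third_party"
    allowed: bool

# Hypothetical category hierarchy: a prior decision about a broader
# category entails decisions about its subtypes.
SUBTYPES = {
    "location": {"gps_coordinates", "city"},
    "health": {"heart_rate", "sleep_data"},
}

def entails(prior: Decision, data_type: str, recipient: str) -> bool:
    """A prior decision covers a new request if it names the same
    recipient and the same data type or a supertype of it."""
    same_type = (prior.data_type == data_type
                 or data_type in SUBTYPES.get(prior.data_type, set()))
    return same_type and prior.recipient == recipient

def entailed_decision(priors, data_type, recipient):
    """Return the entailed decision, or None to defer to the model."""
    for p in priors:
        if entails(p, data_type, recipient):
            return p.allowed
    return None  # no prior decision applies; fall back to the LLM
```

Returning `None` when no prior decision applies is one way to combine the two components: the rule-based layer answers only requests it can justify from prior decisions, deferring everything else to the language model.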
arXiv Detail & Related papers (2025-12-04T18:24:56Z)
- SteerX: Disentangled Steering for LLM Personalization [75.89038195784701]
Large language models (LLMs) have shown remarkable success in recent years, enabling a wide range of applications. A critical factor in building such assistants is personalizing LLMs, as user preferences and needs vary widely. We propose SteerX, a method that isolates preference-driven components from preference-agnostic components.
arXiv Detail & Related papers (2025-10-25T11:26:20Z)
- Personalized Reasoning: Just-In-Time Personalization and Why LLMs Fail At It [81.50711040539566]
Current large language model (LLM) development treats task-solving and preference alignment as separate challenges. We introduce PREFDISCO, an evaluation methodology that transforms static benchmarks into interactive personalization tasks. Our framework creates scenarios where identical questions require different reasoning chains depending on user context.
arXiv Detail & Related papers (2025-09-30T18:55:28Z)
- Implementing Rational Choice Functions with LLMs and Measuring their Alignment with User Preferences [15.72977233489024]
We put forward design principles for using large language models to implement rational choice functions. We demonstrate the applicability of our approach through an empirical study in a practical application of an IUI in the automotive domain.
arXiv Detail & Related papers (2025-04-22T09:08:21Z)
- iAgent: LLM Agent as a Shield between User and Recommender Systems [33.289547118795674]
Recommender systems usually take the user-platform paradigm, where users are directly exposed under the control of the platform's recommendation algorithms. We propose a new user-agent-platform paradigm, where the agent serves as a protective shield between the user and the recommender system.
arXiv Detail & Related papers (2025-02-20T15:58:25Z)
- SudoLM: Learning Access Control of Parametric Knowledge with Authorization Alignment [51.287157951953226]
We propose SudoLM, a framework that lets LLMs learn access control over specific parametric knowledge. Experiments on two application scenarios demonstrate that SudoLM effectively controls the user's access to the parametric knowledge and maintains its general utility.
arXiv Detail & Related papers (2024-10-18T17:59:51Z)
- Embodied Agent Interface: Benchmarking LLMs for Embodied Decision Making [85.24399869971236]
We aim to evaluate Large Language Models (LLMs) for embodied decision making. Existing evaluations tend to rely solely on a final success rate. We propose a generalized interface (Embodied Agent Interface) that supports the formalization of various types of tasks.
arXiv Detail & Related papers (2024-10-09T17:59:00Z)
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
arXiv Detail & Related papers (2024-06-20T13:54:32Z)
- Automating privacy decisions -- where to draw the line? [0.0]
Users are often overwhelmed by the privacy decisions required to manage their personal data.
In this paper, we provide an overview of the main challenges raised by the automation of privacy decisions.
arXiv Detail & Related papers (2023-05-15T15:58:02Z)
- Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.