Policy as Code, Policy as Type
- URL: http://arxiv.org/abs/2506.01446v1
- Date: Mon, 02 Jun 2025 09:04:48 GMT
- Title: Policy as Code, Policy as Type
- Authors: Matthew D. Fuchs
- Abstract summary: We show how complex ABAC policies can be expressed as types in languages such as Agda and Lean. We then go head-to-head with Rego, the popular and powerful open-source ABAC policy language.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Policies are designed to distinguish between correct and incorrect actions; they are types. But badly typed actions may cause not compile errors but financial and reputational harm. We demonstrate how even the most complex ABAC policies can be expressed as types in dependently typed languages such as Agda and Lean, providing a single framework to express, analyze, and implement policies. We then go head-to-head with Rego, the popular and powerful open-source ABAC policy language. We show the superior safety that comes with a powerful type system and built-in proof assistant. In passing, we discuss various access control models, sketch how to integrate into a future in which attributes are distributed and signed (as discussed at the W3C), and show how policies can be communicated using just the syntax of the language. Our examples are in Agda.
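
The abstract's central idea is that an action is well-typed only when it carries evidence that the policy allows it. The sketch below is a minimal illustration of that idea in Lean 4 (the paper's own examples are in Agda); the attribute names and the `CanRead`/`readDoc` definitions are hypothetical and are not taken from the paper.

```lean
-- A minimal "policy as type" sketch in Lean 4 (illustrative only; the
-- paper's examples are in Agda and its definitions differ).

structure User where
  name      : String
  dept      : String
  clearance : Nat

structure Doc where
  title : String
  dept  : String
  level : Nat

-- An ABAC-style policy as a proposition over attributes: a user may read a
-- document if it belongs to their department and their clearance is at
-- least the document's sensitivity level.
def CanRead (u : User) (d : Doc) : Prop :=
  u.dept = d.dept ∧ d.level ≤ u.clearance

-- The action demands a proof of the policy, so an unauthorized call is a
-- type error at development time rather than a run-time (or real-world) failure.
def readDoc (u : User) (d : Doc) (_ : CanRead u d) : String :=
  s!"{u.name} reads {d.title}"

def alice  : User := ⟨"alice", "finance", 3⟩
def report : Doc  := ⟨"Q3 report", "finance", 2⟩

-- The proof obligation is discharged explicitly: department equality by
-- `rfl`, the clearance check (2 ≤ 3) by `decide`.
theorem aliceCanRead : CanRead alice report := ⟨rfl, by decide⟩

#eval readDoc alice report aliceCanRead
```

In a rule language such as Rego the corresponding check happens only when a request is evaluated; the paper's argument is that encoding the rule as a type moves the check to development time, where the same proof assistant can also be used to analyze the policy itself.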
Related papers
- ARPaCCino: An Agentic-RAG for Policy as Code Compliance [0.18472148461613155]
ARPaCCino is an agentic system that combines Large Language Models, Retrieval-Augmented Generation, and tool-based validation. It generates formal Rego rules, assesses IaC compliance, and iteratively refines the IaC configurations to ensure conformance. Our results highlight the potential of agentic RAG architectures to enhance the automation, reliability, and accessibility of PaC.
arXiv Detail & Related papers (2025-07-11T12:36:33Z) - OMNIGUARD: An Efficient Approach for AI Safety Moderation Across Modalities [54.152681077418805]
Current detection approaches are fallible, and are particularly susceptible to attacks that exploit mismatched generalizations of model capabilities. We propose OMNIGUARD, an approach for detecting harmful prompts across languages and modalities. Our approach improves harmful prompt classification accuracy by 11.57% over the strongest baseline in a multilingual setting.
arXiv Detail & Related papers (2025-05-29T05:25:27Z) - Say What You Mean: Natural Language Access Control with Large Language Models for Internet of Things [29.816322400339228]
We present LACE, a framework that bridges the gap between human intent and machine-enforceable logic. It combines prompt-guided policy generation, retrieval-augmented reasoning, and formal validation to support expressive, interpretable, and verifiable access control. LACE achieves 100% correctness in verified policy generation and up to 88% decision accuracy with 0.79 F1-score.
arXiv Detail & Related papers (2025-05-28T10:59:00Z) - Type-Constrained Code Generation with Language Models [51.03439021895432]
We introduce a type-constrained decoding approach that leverages type systems to guide code generation. For this purpose, we develop novel prefix automata and a search over inhabitable types, forming a sound approach to enforce well-typedness on LLM-generated code. Our approach reduces compilation errors by more than half and significantly increases functional correctness in code synthesis, translation, and repair tasks.
arXiv Detail & Related papers (2025-04-12T15:03:00Z) - Synthesizing Access Control Policies using Large Language Models [0.5762345156477738]
Cloud compute systems allow administrators to write access control policies that govern access to private data. While policies are written in convenient languages, such as AWS Identity and Access Management Policy Language, manually written policies often become complex and error prone. In this paper, we investigate whether and how well Large Language Models (LLMs) can be used to synthesize access control policies.
arXiv Detail & Related papers (2025-03-14T16:40:25Z) - Few-shot Policy (de)composition in Conversational Question Answering [54.259440408606515]
We propose a neuro-symbolic framework to detect policy compliance using large language models (LLMs) in a few-shot setting. We show that our approach soundly reasons about policy compliance conversations by extracting sub-questions to be answered, assigning truth values from contextual information, and explicitly producing a set of logic statements from the given policies. We apply this approach to the popular PCD and conversational machine reading benchmark, ShARC, and show competitive performance with no task-specific finetuning.
arXiv Detail & Related papers (2025-01-20T08:40:15Z) - Fundamental Risks in the Current Deployment of General-Purpose AI Models: What Have We (Not) Learnt From Cybersecurity? [60.629883024152576]
Large Language Models (LLMs) have seen rapid deployment in a wide range of use cases. Systems such as OpenAI's Altera are just a few examples of increased autonomy, data access, and execution capabilities. These methods come with a range of cybersecurity challenges.
arXiv Detail & Related papers (2024-12-19T14:44:41Z) - On Policy Reuse: An Expressive Language for Representing and Executing General Policies that Call Other Policies [14.591568801450496]
A simple but powerful language has been introduced in terms of rules defined over a set of numerical features.
We consider three extensions to this language aimed at making policies and sketches more flexible and reusable.
The expressive power of the resulting language for policies and sketches is illustrated through a number of examples.
arXiv Detail & Related papers (2024-03-25T14:48:54Z) - Query-Based Adversarial Prompt Generation [72.06860443442429]
We build adversarial examples that cause an aligned language model to emit harmful strings. We validate our attack on GPT-3.5 and OpenAI's safety classifier.
arXiv Detail & Related papers (2024-02-19T18:01:36Z) - ControlCap: Controllable Region-level Captioning [57.57406480228619]
Region-level captioning is challenged by the caption degeneration issue.
Pre-trained multimodal models tend to predict the most frequent captions but miss the less frequent ones.
We propose a controllable region-level captioning approach, which introduces control words to a multimodal model.
arXiv Detail & Related papers (2024-01-31T15:15:41Z) - Goal Representations for Instruction Following: A Semi-Supervised Language Interface to Control [58.06223121654735]
We show a method that taps into joint image- and goal- conditioned policies with language using only a small amount of language data.
Our method achieves robust performance in the real world by learning an embedding from the labeled data that aligns language not to the goal image, but to the desired change between the start and goal images that the instruction corresponds to.
We show instruction following across a variety of manipulation tasks in different scenes, with generalization to language instructions outside of the labeled data.
arXiv Detail & Related papers (2023-06-30T20:09:39Z) - Caption Anything: Interactive Image Description with Diverse Multimodal Controls [14.628597750669275]
Controllable image captioning aims to describe the image with natural language following human purpose.
We present Caption AnyThing, a foundation model augmented image captioning framework.
Powered by Segment Anything Model (SAM) and ChatGPT, we unify the visual and language prompts into a modularized framework.
arXiv Detail & Related papers (2023-05-04T09:48:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.