Harpocrates: A Statically Typed Privacy Conscious Programming Framework
- URL: http://arxiv.org/abs/2411.06317v2
- Date: Wed, 20 Nov 2024 16:02:55 GMT
- Title: Harpocrates: A Statically Typed Privacy Conscious Programming Framework
- Authors: Sinan Pehlivanoglu, Malte Schwarzkopf
- Abstract summary: Harpocrates eliminates raw data for a policy-protected type from the application, ensuring it can only exist in protected form.
Harpocrates allows data to flow freely throughout the application inside the policy membranes, but enforces the policies when the data is accessed, mutated, declassified, or passed through the application boundary.
- Abstract: In this paper, we introduce Harpocrates, a compiler plugin and framework pair for Scala that binds privacy policies to data at creation time in the form of oblivious membranes. Harpocrates eliminates raw data for a policy-protected type from the application, ensuring it can only exist in protected form, and centralizes policy checking at the policy declaration site, making the privacy logic easy to maintain and verify. Instead of approaching privacy from an information-flow verification perspective, Harpocrates allows data to flow freely throughout the application inside the policy membranes, but enforces the policies when the data is accessed, mutated, declassified, or passed through the application boundary. The centralization of the policies allows maintainers to change the enforced logic simply by updating a single function while keeping the rest of the application oblivious to the change. Especially in a setting where the data definition is shared by multiple applications, the publisher can update the policies without requiring the dependent applications to make any changes beyond updating the dependency version.
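The membrane idea in the abstract can be sketched in a few lines of Scala. This is a hypothetical illustration, not Harpocrates' actual API: the names `Policy`, `Membrane`, and `declassify` are assumptions made for the example. The key property it demonstrates is that the raw value is private to the wrapper, the policy is bound at creation, and the single policy function is consulted only at the declassification boundary.

```scala
// Hypothetical sketch of a policy membrane; names are illustrative,
// not the Harpocrates API described in the paper.
object MembraneSketch {

  // A policy is a single function centralizing the checking logic;
  // changing the enforced rules means updating only this function.
  final case class Policy[A](allow: A => Boolean)

  // A membrane wraps a protected value; the raw value is private and
  // can never be observed except through the policy check below.
  final class Membrane[A] private (private val value: A, policy: Policy[A]) {
    // Enforcement happens only at the membrane boundary.
    def declassify: Option[A] =
      if (policy.allow(value)) Some(value) else None
  }

  object Membrane {
    // The policy is bound to the data at creation time.
    def apply[A](value: A, policy: Policy[A]): Membrane[A] =
      new Membrane(value, policy)
  }

  def main(args: Array[String]): Unit = {
    val nonEmpty = Policy[String](_.nonEmpty)
    val email = Membrane("user@example.com", nonEmpty)
    // The wrapped value can flow freely through the program;
    // the check fires only when declassification is requested.
    println(email.declassify)
  }
}
```

In the real framework a compiler plugin would additionally reject any code path that constructs or extracts the raw value outside the membrane; the sketch above shows only the runtime half of that contract.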
Related papers
- Extracting Database Access-control Policies From Web Applications [5.193592261722995]
It is difficult to divine what policy is embedded in application code and what data the application may access.
This paper tackles policy extraction: the task of extracting the access-control policy.
Ote is a policy extractor for Ruby-on-Rails web applications.
arXiv Detail & Related papers (2024-11-18T08:58:11Z)
- PolicyLR: A Logic Representation For Privacy Policies [34.73520882451813]
We propose PolicyLR, a new paradigm that offers a comprehensive machine-readable representation of privacy policies.
PolicyLR converts privacy policies into a machine-readable format using valuations of atomic formulae.
We demonstrate PolicyLR in three privacy tasks: Policy Compliance, Inconsistency Detection and Privacy Comparison Shopping.
arXiv Detail & Related papers (2024-08-27T07:27:16Z)
- The Privacy Policy Permission Model: A Unified View of Privacy Policies [0.5371337604556311]
A privacy policy is a set of statements that specifies how an organization gathers, uses, discloses, and maintains a client's data.
Most privacy policies lack a clear, complete explanation of how data providers' information is used.
We propose a modeling methodology, called the Privacy Policy Permission Model (PPPM), that provides a uniform, easy-to-understand representation of privacy policies.
arXiv Detail & Related papers (2024-03-26T06:12:38Z)
- Conformal Off-Policy Evaluation in Markov Decision Processes [53.786439742572995]
Reinforcement Learning aims at identifying and evaluating efficient control policies from data.
Most methods for this learning task, referred to as Off-Policy Evaluation (OPE), do not come with accuracy and certainty guarantees.
We present a novel OPE method based on Conformal Prediction that outputs an interval containing the true reward of the target policy with a prescribed level of certainty.
arXiv Detail & Related papers (2023-04-05T16:45:11Z)
- Exploring Consequences of Privacy Policies with Narrative Generation via Answer Set Programming [0.0]
We present a framework that uses Answer Set Programming (ASP) to formalize privacy policies.
ASP allows end-users to forward-simulate possible consequences of the policy in terms of actors.
We demonstrate through the example of the Health Insurance Portability and Accountability Act how to use the system in various ways.
arXiv Detail & Related papers (2022-12-13T16:44:46Z)
- Offline RL With Realistic Datasets: Heteroskedasticity and Support Constraints [82.43359506154117]
We show that typical offline reinforcement learning methods fail to learn from data with non-uniform variability.
Our method is simple, theoretically motivated, and improves performance across a wide range of offline RL problems in Atari games, navigation, and pixel-based manipulation.
arXiv Detail & Related papers (2022-11-02T11:36:06Z)
- Memory-Constrained Policy Optimization [59.63021433336966]
We introduce a new constrained optimization method for policy gradient reinforcement learning.
We form a second trust region through the construction of another virtual policy that represents a wide range of past policies.
We then constrain the new policy to stay close to the virtual policy, which is beneficial when the old policy performs badly.
arXiv Detail & Related papers (2022-04-20T08:50:23Z)
- Detecting Compliance of Privacy Policies with Data Protection Laws [0.0]
Privacy policies are often written in extensive legal jargon that is difficult to understand.
We aim to bridge that gap by providing a framework that analyzes privacy policies in light of various data protection laws.
By using such a tool, users would be better equipped to understand how their personal data is managed.
arXiv Detail & Related papers (2021-02-21T09:15:15Z)
- Privacy-Constrained Policies via Mutual Information Regularized Policy Gradients [54.98496284653234]
We consider the task of training a policy that maximizes reward while minimizing disclosure of certain sensitive state variables through the actions.
We solve this problem by introducing a regularizer based on the mutual information between the sensitive state and the actions.
We develop a model-based estimator for optimization of privacy-constrained policies.
arXiv Detail & Related papers (2020-12-30T03:22:35Z)
- Policy Evaluation Networks [50.53250641051648]
We introduce a scalable, differentiable fingerprinting mechanism that retains essential policy information in a concise embedding.
Our empirical results demonstrate that combining these three elements can produce policies that outperform those that generated the training data.
arXiv Detail & Related papers (2020-02-26T23:00:27Z)
- Preventing Imitation Learning with Adversarial Policy Ensembles [79.81807680370677]
Imitation learning can reproduce policies by observing experts, which poses a problem regarding policy privacy.
How can we protect against external observers cloning our proprietary policies?
We introduce a new reinforcement learning framework, where we train an ensemble of near-optimal policies.
arXiv Detail & Related papers (2020-01-31T01:57:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.