Exploring Consequences of Privacy Policies with Narrative Generation via
Answer Set Programming
- URL: http://arxiv.org/abs/2212.06719v1
- Date: Tue, 13 Dec 2022 16:44:46 GMT
- Title: Exploring Consequences of Privacy Policies with Narrative Generation via
Answer Set Programming
- Authors: Chinmaya Dabral, Emma Tosch, Chris Martens
- Abstract summary: We present a framework that uses Answer Set Programming (ASP) to formalize privacy policies.
ASP allows end-users to forward-simulate possible consequences of the policy in terms of actors having roles and taking actions in a domain.
We demonstrate through the example of the Health Insurance Portability and Accountability Act how to use the system in various ways.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Informed consent has become increasingly salient for data privacy and its
regulation. Entities from governments to for-profit companies have addressed
concerns about data privacy with policies that enumerate the conditions for
personal data storage and transfer. However, increased enumeration of and
transparency in data privacy policies have not improved end-users' comprehension
of how their data might be used: not only are privacy policies written in legal
language that users may struggle to understand, but elements of these policies
may compose in such a way that the consequences of the policy are not
immediately apparent.
We present a framework that uses Answer Set Programming (ASP) -- a type of
logic programming -- to formalize privacy policies. Privacy policies thus
become constraints on a narrative planning space, allowing end-users to
forward-simulate possible consequences of the policy in terms of actors having
roles and taking actions in a domain. We demonstrate through the example of the
Health Insurance Portability and Accountability Act (HIPAA) how to use the
system in various ways, including asking questions about possibilities and
identifying which clauses of the law are broken by a given sequence of events.
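To make the idea concrete, below is a minimal sketch of how such an encoding might look, assuming the clingo ASP solver and its Python API. The actor names, fluents, action names, and the clause label are invented purely for illustration and do not reproduce the authors' actual HIPAA encoding; the sketch only shows the general pattern of a policy as constraints on a narrative planning space.

```python
# A minimal sketch (not the paper's actual encoding) of a HIPAA-style
# disclosure rule expressed as ASP constraints on a narrative planning
# space, driven through the clingo Python API (pip install clingo).
# All names and the clause label below are hypothetical.
import clingo

POLICY_DOMAIN = r"""
% Narrative horizon: three time steps.
time(1..3).

% Actors and their roles.
actor(alice, patient).
actor(mercy, hospital).
actor(acme,  insurer).

% Possible actions in the domain.
action(authorize(P, R)) :- actor(P, patient), actor(R, insurer).
action(disclose(S, R, P)) :- actor(S, hospital), actor(R, insurer),
                             actor(P, patient).

% The planner chooses at most one action per time step.
{ occurs(A, T) : action(A) } 1 :- time(T).

% Initial state: only the hospital holds the patient's record.
holds(has_phi(mercy, alice), 1).

% Action effects, plus inertia for fluents.
holds(consent(P, R), T+1) :- occurs(authorize(P, R), T), time(T+1).
holds(has_phi(R, P), T+1) :- occurs(disclose(_, R, P), T), time(T+1).
holds(F, T+1)             :- holds(F, T), time(T+1).

% Precondition: an actor can only disclose a record it actually holds.
:- occurs(disclose(S, _, P), T), not holds(has_phi(S, P), T).

% HIPAA-like clause (label is illustrative only): disclosing PHI to an
% insurer without the patient's prior consent breaks the clause.
violates(clause_164_508, T) :- occurs(disclose(_, R, P), T),
                               not holds(consent(P, R), T).

% Query: find stories in which the insurer ends up holding the PHI...
goal :- holds(has_phi(acme, alice), T), time(T).
:- not goal.
% ...and no clause is ever broken. Commenting out the next constraint
% instead surfaces which clauses a generated story violates.
:- violates(_, _).

#show occurs/2.
#show violates/2.
"""

def main() -> None:
    ctl = clingo.Control(["0"])          # "0": enumerate all answer sets
    ctl.add("base", [], POLICY_DOMAIN)
    ctl.ground([("base", [])])
    ctl.solve(on_model=lambda m: print("Story:", m))

if __name__ == "__main__":
    main()
```

Running the script enumerates every admissible "story" (answer set), e.g. an authorization followed by a lawful disclosure; toggling the final integrity constraint switches between asking what is possible under the policy and inspecting which clauses a particular sequence of events breaks, mirroring the two usage modes described above.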
Related papers
- PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action [54.11479432110771]
PrivacyLens is a novel framework designed to extend privacy-sensitive seeds into expressive vignettes and further into agent trajectories.
We instantiate PrivacyLens with a collection of privacy norms grounded in privacy literature and crowdsourced seeds.
State-of-the-art LMs, like GPT-4 and Llama-3-70B, leak sensitive information in 25.68% and 38.69% of cases, even when prompted with privacy-enhancing instructions.
arXiv Detail & Related papers (2024-08-29T17:58:38Z) - PrivComp-KG : Leveraging Knowledge Graph and Large Language Models for Privacy Policy Compliance Verification [0.0]
We propose a Large Language Model (LLM) and Semantic Web based approach for privacy compliance.
PrivComp-KG is designed to efficiently store and retrieve comprehensive information concerning privacy policies.
It can be queried to check for compliance with privacy policies by each vendor against relevant policy regulations.
arXiv Detail & Related papers (2024-04-30T17:44:44Z) - The Privacy Policy Permission Model: A Unified View of Privacy Policies [0.5371337604556311]
A privacy policy is a set of statements that specifies how an organization gathers, uses, discloses, and maintains a client's data.
Most privacy policies lack a clear, complete explanation of how data providers' information is used.
We propose a modeling methodology, called the Privacy Policy Permission Model (PPPM), that provides a uniform, easy-to-understand representation of privacy policies.
arXiv Detail & Related papers (2024-03-26T06:12:38Z) - OPPO: An Ontology for Describing Fine-Grained Data Practices in Privacy Policies of Online Social Networks [0.8287206589886879]
Data practices of Online Social Networks (OSNSs) must comply with privacy regulations such as the EU GDPR and CCPA.
This paper presents OPPO, an Ontology for Privacy Policies of OSNSs, which aims to fill existing gaps by formalizing detailed data practices from OSNS privacy policies.
arXiv Detail & Related papers (2023-09-27T19:42:05Z) - PLUE: Language Understanding Evaluation Benchmark for Privacy Policies
in English [77.79102359580702]
We introduce the Privacy Policy Language Understanding Evaluation (PLUE) benchmark, a multi-task benchmark for evaluating privacy policy language understanding.
We also collect a large corpus of privacy policies to enable privacy policy domain-specific language model pre-training.
We demonstrate that domain-specific continual pre-training offers performance improvements across all tasks.
arXiv Detail & Related papers (2022-12-20T05:58:32Z) - A Fine-grained Chinese Software Privacy Policy Dataset for Sequence
Labeling and Regulation Compliant Identification [23.14031861460124]
We construct the first Chinese privacy policy dataset, CA4P-483, to facilitate the sequence labeling tasks and regulation compliance identification.
Our dataset includes 483 Chinese Android application privacy policies, over 11K sentences, and 52K fine-grained annotations.
arXiv Detail & Related papers (2022-12-04T05:59:59Z) - Detecting Compliance of Privacy Policies with Data Protection Laws [0.0]
Privacy policies are often written in extensive legal jargon that is difficult to understand.
We aim to bridge that gap by providing a framework that analyzes privacy policies in light of various data protection laws.
By using such a tool, users would be better equipped to understand how their personal data is managed.
arXiv Detail & Related papers (2021-02-21T09:15:15Z) - Privacy-Constrained Policies via Mutual Information Regularized Policy Gradients [54.98496284653234]
We consider the task of training a policy that maximizes reward while minimizing disclosure of certain sensitive state variables through the actions.
We solve this problem by introducing a regularizer based on the mutual information between the sensitive state and the actions.
We develop a model-based estimator for optimization of privacy-constrained policies.
arXiv Detail & Related papers (2020-12-30T03:22:35Z) - Second layer data governance for permissioned blockchains: the privacy
management challenge [58.720142291102135]
In pandemic situations, such as the COVID-19 and Ebola outbreaks, sharing health data is crucial to avoid massive infection and decrease the number of deaths.
In this sense, permissioned blockchain technology emerges to empower users with their rights, providing data ownership, transparency, and security through an immutable, unified, and distributed database ruled by smart contracts.
arXiv Detail & Related papers (2020-10-22T13:19:38Z) - PGLP: Customizable and Rigorous Location Privacy through Policy Graph [68.3736286350014]
We propose a new location privacy notion called PGLP, which provides a rich interface to release private locations with customizable and rigorous privacy guarantees.
Specifically, we formalize a user's location privacy requirements using a location policy graph, which is expressive and customizable.
We also design a private location trace release framework that pipelines the detection of location exposure, policy graph repair, and private trajectory release with customizable and rigorous location privacy.
arXiv Detail & Related papers (2020-05-04T04:25:59Z) - Preventing Imitation Learning with Adversarial Policy Ensembles [79.81807680370677]
Imitation learning can reproduce policies by observing experts, which poses a problem regarding policy privacy.
How can we protect against external observers cloning our proprietary policies?
We introduce a new reinforcement learning framework, where we train an ensemble of near-optimal policies.
arXiv Detail & Related papers (2020-01-31T01:57:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.