Fast Compliance Checking with General Vocabularies
- URL: http://arxiv.org/abs/2001.06322v1
- Date: Thu, 16 Jan 2020 09:08:00 GMT
- Title: Fast Compliance Checking with General Vocabularies
- Authors: P. A. Bonatti, L. Ioffredo, I. M. Petrova, L. Sauro
- Abstract summary: We introduce an OWL2 profile for representing data protection policies.
With this language, a company's data usage policy can be checked for compliance with data subjects' consent.
We exploit IBQ reasoning to integrate specialized reasoners for the policy language and the vocabulary's language.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We address the problem of complying with the GDPR while processing and
transferring personal data on the web. For this purpose we introduce an
extensible profile of OWL2 for representing data protection policies. With this
language, a company's data usage policy can be checked for compliance with data
subjects' consent and with a formalized fragment of the GDPR by means of
subsumption queries. The outer structure of the policies is restricted in order
to make compliance checking highly scalable, as required when processing
high-frequency data streams or large data volumes. However, the vocabularies
for specifying policy properties can be chosen rather freely from expressive
Horn fragments of OWL2. We exploit IBQ reasoning to integrate specialized
reasoners for the policy language and the vocabulary's language. Our
experiments show that this approach significantly improves performance.
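To illustrate the core idea, the following is a minimal Python sketch of subsumption-based compliance checking over a toy vocabulary. The vocabulary, the attribute names, and the structural-subsumption rule below are illustrative assumptions; the paper itself encodes policies in an extensible OWL2 profile and delegates vocabulary reasoning to specialized reasoners via IBQ, which this sketch does not model.

```python
# Minimal sketch of subsumption-based compliance checking over a toy vocabulary.
# The vocabulary, attribute names, and policy structure are illustrative
# assumptions, not the paper's OWL2 profile or its IBQ integration.

# Toy vocabulary: child term -> parent term.
VOCAB = {
    "Email":            "ContactData",
    "ContactData":      "PersonalData",
    "ServiceAnalytics": "Analytics",
    "Analytics":        "AnyPurpose",
    "EU":               "AnyLocation",
}

def is_subclass(term, ancestor):
    """Reflexive, transitive subclass test over the toy vocabulary.

    In the paper this role is played by a specialized reasoner for the
    vocabulary's (Horn) language; here it is a simple parent-chain walk.
    """
    while term is not None:
        if term == ancestor:
            return True
        term = VOCAB.get(term)
    return False

def simple_subsumed(p, q):
    """Structural subsumption between simple policies (attribute -> term dicts):
    p is subsumed by q if, for every attribute q constrains, p also constrains
    it with a term below q's term in the vocabulary."""
    return all(attr in p and is_subclass(p[attr], q[attr]) for attr in q)

def complies(business_policy, consent_policy):
    """A business policy (a union of simple policies) complies with a consent
    policy if each of its simple policies is subsumed by some simple consent policy."""
    return all(
        any(simple_subsumed(bp, cp) for cp in consent_policy)
        for bp in business_policy
    )

if __name__ == "__main__":
    consent = [  # data subject allows analytics on contact data within the EU
        {"data": "ContactData", "purpose": "Analytics", "location": "EU"},
    ]
    usage = [    # company wants service analytics on email addresses in the EU
        {"data": "Email", "purpose": "ServiceAnalytics", "location": "EU"},
    ]
    print(complies(usage, consent))  # True: every usage policy falls under the consent
```

In this simplified reading, compliance reduces to checking that every simple usage policy is subsumed by some simple consent policy, which reflects the abstract's point that restricting the outer structure of policies is what keeps the check scalable.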
Related papers
- Few-shot Policy (de)composition in Conversational Question Answering [54.259440408606515]
We propose a neuro-symbolic framework to detect policy compliance using large language models (LLMs) in a few-shot setting.
We show that our approach soundly reasons about policy compliance conversations by extracting sub-questions to be answered, assigning truth values from contextual information, and explicitly producing a set of logic statements from the given policies.
We apply this approach to the popular PCD and conversational machine reading benchmark, ShARC, and show competitive performance with no task-specific finetuning.
arXiv Detail & Related papers (2025-01-20T08:40:15Z)
- Context-DPO: Aligning Language Models for Context-Faithfulness [80.62221491884353]
We propose the first alignment method specifically designed to enhance large language models' context-faithfulness.
By leveraging faithful and stubborn responses to questions with provided context from ConFiQA, our Context-DPO aligns LLMs through direct preference optimization.
Extensive experiments demonstrate that our Context-DPO significantly improves context-faithfulness, achieving 35% to 280% improvements on popular open-source models.
arXiv Detail & Related papers (2024-12-18T04:08:18Z)
- ReCaLL: Membership Inference via Relative Conditional Log-Likelihoods [56.073335779595475]
We propose ReCaLL (Relative Conditional Log-Likelihood), a novel membership inference attack (MIA).
ReCaLL examines the relative change in conditional log-likelihoods when prefixing target data points with non-member context; a generic sketch of this kind of scoring appears after this list.
We conduct comprehensive experiments and show that ReCaLL achieves state-of-the-art performance on the WikiMIA dataset.
arXiv Detail & Related papers (2024-06-23T00:23:13Z)
- Controlled Query Evaluation through Epistemic Dependencies [7.502796412126707]
We show the expressive abilities of our framework and study the data complexity of CQE for (unions of) conjunctive queries.
We prove tractability for the case of acyclic dependencies by providing a suitable query algorithm.
arXiv Detail & Related papers (2024-05-03T19:48:07Z)
- Disjunctive Policies for Database-Backed Programs [4.220713004424807]
A formal semantic model of disjunctive dependencies, the Quantale of Information, was recently introduced by Hunt and Sands.
We introduce a new query-based structure which captures the ordering of disjunctive information in databases.
We design a sound enforcement mechanism to check disjunctive policies for database-backed programs.
arXiv Detail & Related papers (2023-12-16T13:16:22Z)
- PrivacyMind: Large Language Models Can Be Contextual Privacy Protection Learners [81.571305826793]
We introduce Contextual Privacy Protection Language Models (PrivacyMind).
Our work offers a theoretical analysis for model design and benchmarks various techniques.
In particular, instruction tuning with both positive and negative examples stands out as a promising method.
arXiv Detail & Related papers (2023-10-03T22:37:01Z)
- Retrieval Enhanced Data Augmentation for Question Answering on Privacy Policies [74.01792675564218]
We develop a data augmentation framework based on ensembling retriever models that captures relevant text segments from unlabeled policy documents.
To improve the diversity and quality of the augmented data, we leverage multiple pre-trained language models (LMs) and cascade them with noise reduction filter models.
Using our augmented data on the PrivacyQA benchmark, we elevate the existing baseline by a large margin (10% F1) and achieve a new state-of-the-art F1 score of 50%.
arXiv Detail & Related papers (2022-04-19T15:45:23Z)
- PolicyQA: A Reading Comprehension Dataset for Privacy Policies [77.79102359580702]
We present PolicyQA, a dataset that contains 25,017 reading comprehension style examples curated from an existing corpus of 115 website privacy policies.
We evaluate two existing neural QA models and perform rigorous analysis to reveal the advantages and challenges offered by PolicyQA.
arXiv Detail & Related papers (2020-10-06T09:04:58Z)
- Real Time Reasoning in OWL2 for GDPR Compliance [0.0]
We show how knowledge representation and reasoning techniques can be used to support organizations in complying with the new European data protection regulation.
This work is carried out in a European H2020 project called SPECIAL.
arXiv Detail & Related papers (2020-01-15T15:50:27Z)
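The ReCaLL entry above describes membership scoring via the relative change in conditional log-likelihood when a non-member prefix is prepended to the target text. Below is a hedged, generic sketch of that kind of score using Hugging Face transformers; the model name, the prefix text, and the exact scoring ratio are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a ReCaLL-style membership score: the ratio of the target's
# conditional log-likelihood given a non-member prefix to its unconditional
# log-likelihood. Model name, prefix, and scoring details are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # any causal LM; chosen here only for illustration
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

@torch.no_grad()
def target_log_likelihood(target: str, prefix: str = "") -> float:
    """Sum of log-probabilities of the target tokens, optionally conditioned on a prefix."""
    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids if prefix else None
    target_ids = tokenizer(target, return_tensors="pt").input_ids
    if prefix_ids is not None:
        input_ids = torch.cat([prefix_ids, target_ids], dim=1)
        n_prefix = prefix_ids.shape[1]
    else:
        input_ids = target_ids
        n_prefix = 0
    logits = model(input_ids).logits
    # log P(token_t | tokens_<t): logits at position t-1 predict token t
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    labels = input_ids[:, 1:]
    token_ll = log_probs.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
    # keep only positions belonging to the target (with no prefix, the first
    # target token has no preceding context and is not scored)
    start = max(n_prefix - 1, 0)
    return token_ll[:, start:].sum().item()

def recall_score(target: str, nonmember_prefix: str) -> float:
    """Relative conditional log-likelihood under a non-member prefix."""
    return target_log_likelihood(target, nonmember_prefix) / target_log_likelihood(target)

if __name__ == "__main__":
    prefix = "Unrelated text the model has presumably not memorized. "  # assumed non-member context
    print(recall_score("The quick brown fox jumps over the lazy dog.", prefix))
```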
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.