Taypsi: Static Enforcement of Privacy Policies for Policy-Agnostic Oblivious Computation
- URL: http://arxiv.org/abs/2311.09393v4
- Date: Wed, 6 Mar 2024 18:49:04 GMT
- Title: Taypsi: Static Enforcement of Privacy Policies for Policy-Agnostic Oblivious Computation
- Authors: Qianchuan Ye, Benjamin Delaware
- Abstract summary: Secure multiparty computation (MPC) techniques enable multiple parties to compute joint functions over their private data without sharing that data with other parties.
Most MPC languages force users to intermix programmatic and privacy concerns in a single application, making it difficult to change or audit a program's underlying privacy policy.
This work proposes to eliminate this overhead by instead transforming programs into semantically equivalent versions that statically enforce user-provided privacy policies.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Secure multiparty computation (MPC) techniques enable multiple parties to compute joint functions over their private data without sharing that data with other parties, typically by employing powerful cryptographic protocols to protect individuals' data. One challenge when writing such functions is that most MPC languages force users to intermix programmatic and privacy concerns in a single application, making it difficult to change or audit a program's underlying privacy policy. Prior policy-agnostic MPC languages relied on dynamic enforcement to decouple privacy requirements from program logic. Unfortunately, the resulting overhead makes it difficult to scale MPC applications that manipulate structured data. This work proposes to eliminate this overhead by instead transforming programs into semantically equivalent versions that statically enforce user-provided privacy policies. We have implemented this approach in a new MPC language, called Taypsi; our experimental evaluation demonstrates that the resulting system features considerable performance improvements on a variety of MPC applications involving structured data and complex privacy policies.
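The core idea can be sketched concretely: program logic is written over plain data, while the privacy policy is declared separately, and the compiler transforms the program so that the policy is enforced statically. The Python sketch below is only an analogy of this separation; it is not Taypsi syntax, and the type names, field annotations, and POLICY format are hypothetical.

```python
from dataclasses import dataclass

# Plain program logic, written as if all data were public.
@dataclass
class Patient:
    age: int        # supplied by party A
    diagnosis: int  # supplied by party B

def count_at_risk(patients, min_age, target_diagnosis):
    """Joint function over both parties' data, with no privacy code mixed in."""
    return sum(1 for p in patients
               if p.age >= min_age and p.diagnosis == target_diagnosis)

# Privacy policy, declared separately from the logic above (hypothetical format).
# A policy-agnostic system in the style of Taypsi would use such annotations to
# derive a semantically equivalent oblivious version of count_at_risk at compile
# time, rather than enforcing the policy with runtime checks.
POLICY = {
    "Patient.age": "private to party A",
    "Patient.diagnosis": "private to party B",
    "count_at_risk.return": "revealed to both parties",
}
```

Keeping the policy out of count_at_risk is what makes it easy to change or audit: swapping in a different POLICY does not require touching the program logic.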
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on the data, allowing non-sensitive temporal regions to be defined with no DP applied, or differential privacy to be combined with other privacy techniques within data samples.
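As a rough illustration of applying noise only where the data is sensitive (a generic sketch, not the paper's actual mechanism; the mask, sensitivity, and epsilon below are hypothetical), the snippet perturbs only the masked entries of an array with Laplace noise:

```python
import numpy as np

def masked_laplace(data, sensitive_mask, sensitivity, epsilon, rng=None):
    """Add Laplace noise only to entries marked as sensitive.

    data: 1-D float array; sensitive_mask: boolean array of the same shape.
    Entries outside the mask are returned unchanged.
    """
    rng = rng or np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=data.shape)
    return np.where(sensitive_mask, data + noise, data)

# Example: only the middle region of the signal is treated as sensitive.
x = np.arange(10, dtype=float)
mask = np.zeros(10, dtype=bool)
mask[3:7] = True
print(masked_laplace(x, mask, sensitivity=1.0, epsilon=0.5))
```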
arXiv Detail & Related papers (2024-10-22T15:22:53Z) - Interactive GDPR-Compliant Privacy Policy Generation for Software Applications [6.189770781546807]
To use software applications, users are sometimes asked to provide their personal information.
As privacy has become a significant concern, many protection regulations exist worldwide.
We propose an approach that generates comprehensive and compliant privacy policies.
arXiv Detail & Related papers (2024-10-04T01:22:16Z) - PolicyLR: A Logic Representation For Privacy Policies [34.73520882451813]
We propose PolicyLR, a new paradigm that offers a comprehensive machine-readable representation of privacy policies.
PolicyLR converts privacy policies into a machine-readable format using valuations of atomic formulae.
We demonstrate PolicyLR in three privacy tasks: Policy Compliance, Inconsistency Detection and Privacy Comparison Shopping.
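As a rough illustration of the general idea (not PolicyLR's actual representation; the atoms and the compliance rule below are hypothetical), a policy can be encoded as a valuation, i.e. a truth assignment to atomic formulae, so that a compliance check reduces to evaluating a formula over that valuation:

```python
# Hypothetical atomic formulae extracted from a privacy policy, with their valuations.
policy_valuation = {
    "collects_location_data": True,
    "shares_data_with_third_parties": False,
    "obtains_user_consent_before_collection": True,
    "allows_data_deletion_on_request": True,
}

def compliant(valuation):
    # Hypothetical compliance rule: collection requires prior consent, and
    # third-party sharing requires a deletion right.
    consent_ok = (not valuation["collects_location_data"]
                  or valuation["obtains_user_consent_before_collection"])
    sharing_ok = (not valuation["shares_data_with_third_parties"]
                  or valuation["allows_data_deletion_on_request"])
    return consent_ok and sharing_ok

print(compliant(policy_valuation))  # True for this example valuation
```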
arXiv Detail & Related papers (2024-08-27T07:27:16Z) - Low-Latency Privacy-Preserving Deep Learning Design via Secure MPC [31.35072624488929]
Secure multi-party computation (MPC) facilitates privacy-preserving computation between multiple parties without leaking private information.
This work proposes a low-latency secret-sharing-based MPC design that reduces unnecessary communication rounds during the execution of MPC protocols.
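For context, the sketch below shows plain additive secret sharing, the textbook building block behind secret-sharing-based MPC; it is not the low-latency design proposed in the paper. Linear operations such as addition can be computed locally on shares, which is why protocol designs focus on reducing the communication rounds needed for the remaining (non-linear) operations.

```python
import secrets

MOD = 2**64

def share(value, n_parties):
    # Split value into n additive shares that sum to value modulo MOD.
    shares = [secrets.randbelow(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    return sum(shares) % MOD

# Addition of shared values needs no communication: each party simply adds
# its own shares of the two secrets.
a = share(20, 3)
b = share(22, 3)
c = [(x + y) % MOD for x, y in zip(a, b)]
assert reconstruct(c) == 42
```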
arXiv Detail & Related papers (2024-07-24T07:01:21Z) - Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
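A standard ingredient of user-level DP is bounding each user's total contribution before adding noise; the sketch below illustrates that idea on a simple sum query and is a generic example, not the fine-tuning setting studied in the paper (the cap and epsilon are hypothetical).

```python
import numpy as np

def user_level_dp_sum(per_user_values, contribution_cap, epsilon, rng=None):
    """Sum a non-negative quantity across users with user-level DP.

    Each user's total contribution is clipped to contribution_cap, so adding or
    removing one user changes the true sum by at most contribution_cap; Laplace
    noise calibrated to that sensitivity yields epsilon-DP at the user level.
    """
    rng = rng or np.random.default_rng()
    clipped = [float(np.clip(sum(vals), 0.0, contribution_cap))
               for vals in per_user_values]
    return sum(clipped) + rng.laplace(scale=contribution_cap / epsilon)

# Example: three users, each contributing several records.
print(user_level_dp_sum([[1, 2], [5, 5, 5], [0.5]], contribution_cap=4.0, epsilon=1.0))
```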
arXiv Detail & Related papers (2024-06-20T13:54:32Z) - Provable Privacy with Non-Private Pre-Processing [56.770023668379615]
We propose a general framework to evaluate the additional privacy cost incurred by non-private data-dependent pre-processing algorithms.
Our framework establishes upper bounds on the overall privacy guarantees by utilising two new technical notions.
arXiv Detail & Related papers (2024-03-19T17:54:49Z) - PrivacyMind: Large Language Models Can Be Contextual Privacy Protection Learners [81.571305826793]
We introduce Contextual Privacy Protection Language Models (PrivacyMind).
Our work offers a theoretical analysis for model design and benchmarks various techniques.
In particular, instruction tuning with both positive and negative examples stands out as a promising method.
arXiv Detail & Related papers (2023-10-03T22:37:01Z) - Libertas: Privacy-Preserving Computation for Decentralised Personal Data Stores [19.54818218429241]
We propose a modular design for integrating Secure Multi-Party Computation with Solid.
Our architecture, Libertas, requires no protocol level changes in the underlying design of Solid.
We show how this can be combined with existing differential privacy techniques to also ensure output privacy.
arXiv Detail & Related papers (2023-09-28T12:07:40Z) - PLUE: Language Understanding Evaluation Benchmark for Privacy Policies in English [77.79102359580702]
We introduce the Privacy Policy Language Understanding Evaluation benchmark, a multi-task benchmark for evaluating privacy policy language understanding.
We also collect a large corpus of privacy policies to enable privacy policy domain-specific language model pre-training.
We demonstrate that domain-specific continual pre-training offers performance improvements across all tasks.
arXiv Detail & Related papers (2022-12-20T05:58:32Z) - Imitation Learning from MPC for Quadrupedal Multi-Gait Control [63.617157490920505]
We present a learning algorithm for training a single policy that imitates multiple gaits of a walking robot.
We use and extend MPC-Net, which is an Imitation Learning approach guided by Model Predictive Control.
We validate our approach on hardware and show that a single learned policy can replace its teacher to control multiple gaits.
arXiv Detail & Related papers (2021-03-26T08:48:53Z)