Locality Sensitive Hashing with Extended Differential Privacy
- URL: http://arxiv.org/abs/2010.09393v5
- Date: Thu, 12 Aug 2021 10:16:59 GMT
- Title: Locality Sensitive Hashing with Extended Differential Privacy
- Authors: Natasha Fernandes, Yusuke Kawamoto, Takao Murakami
- Abstract summary: Extended differential privacy is a generalization of standard differential privacy using a general metric.
Existing works on extended DP are limited to a few metrics, such as the Euclidean metric.
We propose a couple of mechanisms providing extended DP with angular distance.
- Score: 4.328040096675791
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Extended differential privacy, a generalization of standard differential
privacy (DP) using a general metric, has been widely studied to provide
rigorous privacy guarantees while keeping high utility. However, existing works
on extended DP are limited to a few metrics, such as the Euclidean metric.
Consequently, they have only a small number of applications, such as
location-based services and document processing. In this paper, we propose a
couple of mechanisms providing extended DP with a different metric: angular
distance (or cosine distance). Our mechanisms are based on locality sensitive
hashing (LSH), which can be applied to the angular distance and work well for
personal data in a high-dimensional space. We theoretically analyze the privacy
properties of our mechanisms, and prove extended DP for input data by taking
into account that LSH preserves the original metric only approximately. We
apply our mechanisms to friend matching based on high-dimensional personal data
with angular distance in the local model, and evaluate our mechanisms using two
real datasets. We show that LDP (local differential privacy) requires a very large privacy budget and that
RAPPOR does not work in this application. Then we show that our mechanisms
enable friend matching with high utility and rigorous privacy guarantees based
on extended DP.
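A minimal sketch of the general LSH-then-perturb idea described above (an illustrative reading, not the paper's exact mechanisms or their privacy accounting; all names and parameter values here are assumptions): SimHash, the sign-of-random-projection LSH whose per-bit collision probability is 1 - θ/π for vectors at angle θ, followed by per-bit randomized response on the hash.

```python
import numpy as np

# Illustrative sketch only: SimHash for angular distance, then per-bit
# randomized response. Not the paper's exact construction or accounting.

def simhash(x: np.ndarray, planes: np.ndarray) -> np.ndarray:
    """Sign-of-projection LSH; Pr[bit collision] = 1 - angle(x, y) / pi."""
    return (planes @ x >= 0).astype(np.int8)

def randomized_response(bits: np.ndarray, epsilon: float,
                        rng: np.random.Generator) -> np.ndarray:
    """Keep each hash bit with probability e^eps / (1 + e^eps), else flip it."""
    p_keep = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    keep = rng.random(bits.shape) < p_keep
    return np.where(keep, bits, 1 - bits)

rng = np.random.default_rng(0)
planes = rng.standard_normal((32, 128))   # 32 shared random hyperplanes
profile = rng.standard_normal(128)        # one user's feature vector
noisy_hash = randomized_response(simhash(profile, planes), epsilon=1.0, rng=rng)
```

Hamming distance between two users' noisy hashes then serves as a proxy for their angular distance, which is what makes an LSH-based mechanism a natural fit for friend matching on high-dimensional vectors.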
Related papers
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
arXiv Detail & Related papers (2024-06-20T13:54:32Z)
- Metric Differential Privacy at the User-Level Via the Earth Mover's Distance [34.63551774740707]
Metric differential privacy (DP) provides heterogeneous privacy guarantees based on a distance between the pair of inputs.
In this paper, we initiate the study of one natural definition of metric DP at the user-level.
We design two novel mechanisms under $d_{\textsf{EM}}$-DP to answer linear queries and item-wise queries.
arXiv Detail & Related papers (2024-05-04T13:29:11Z)
- Privacy Amplification for the Gaussian Mechanism via Bounded Support [64.86780616066575]
Data-dependent privacy accounting frameworks such as per-instance differential privacy (pDP) and Fisher information loss (FIL) confer fine-grained privacy guarantees for individuals in a fixed training dataset.
We propose simple modifications of the Gaussian mechanism with bounded support, showing that they amplify privacy guarantees under data-dependent accounting.
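As a rough operational illustration of bounded support (an assumed sketch, not the paper's specific modification or its data-dependent accounting), Gaussian noise can be confined to an interval by rejection sampling:

```python
import numpy as np

# Assumed sketch: Gaussian mechanism whose output is forced into a
# bounded interval around the true value via rejection sampling.

def bounded_gaussian(value: float, sigma: float, bound: float,
                     rng: np.random.Generator) -> float:
    """Resample until the output lands in [value - bound, value + bound]."""
    while True:
        noisy = value + rng.normal(scale=sigma)
        if abs(noisy - value) <= bound:
            return noisy
```

Restricting the support rules out extreme outputs, which is the restriction the summary above credits with amplifying guarantees under data-dependent accounting.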
arXiv Detail & Related papers (2024-03-07T21:22:07Z)
- Unified Mechanism-Specific Amplification by Subsampling and Group Privacy Amplification [54.1447806347273]
Amplification by subsampling is one of the main primitives in machine learning with differential privacy.
We propose the first general framework for deriving mechanism-specific guarantees.
We analyze how subsampling affects the privacy of groups of multiple users.
arXiv Detail & Related papers (2024-03-07T19:36:05Z)
- Conciliating Privacy and Utility in Data Releases via Individual Differential Privacy and Microaggregation [4.287502453001108]
$\epsilon$-Differential privacy (DP) is a well-known privacy model that offers strong privacy guarantees.
We propose $\epsilon$-individual differential privacy (iDP), which causes less data distortion while providing the same protection as DP to subjects.
We report on experiments that show how our approach can provide strong privacy (small $\epsilon$) while yielding protected data that do not significantly degrade the accuracy of secondary data analysis.
arXiv Detail & Related papers (2023-12-21T10:23:18Z)
- The Symmetric alpha-Stable Privacy Mechanism [0.0]
We present novel analysis of the Symmetric alpha-Stable (SaS) mechanism.
We prove that the mechanism is purely differentially private while remaining closed under convolution.
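For intuition only (an assumed sketch, not the paper's analysis), symmetric alpha-stable noise can be drawn with `scipy.stats.levy_stable`; beta = 0 gives the symmetric case, and alpha = 2 recovers a Gaussian:

```python
from scipy.stats import levy_stable

# Assumed sketch: perturb a scalar query output with symmetric
# alpha-stable (SaS) noise; beta=0 makes the distribution symmetric.

def sas_mechanism(value: float, alpha: float, scale: float) -> float:
    return value + levy_stable.rvs(alpha, beta=0.0, scale=scale)
```

Sums of independent SaS variables with the same alpha are again SaS, which is the closure-under-convolution property the summary highlights alongside pure DP.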
arXiv Detail & Related papers (2023-11-29T16:34:39Z)
- Personalized DP-SGD using Sampling Mechanisms [5.50042037663784]
We extend Differentially Private Stochastic Gradient Descent (DP-SGD) to support a recent privacy notion called $(\Phi,\Delta)$-Personalized Differential Privacy ($(\Phi,\Delta)$-PDP).
Our algorithm uses a multi-round personalized sampling mechanism and embeds it within the DP-SGD iteration.
Experiments on real datasets show that our algorithm outperforms DP-SGD and simple combinations of DP-SGD with existing PDP mechanisms.
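One plausible reading of the sampling idea, sketched under assumptions (the proportional-to-budget rule and all names are illustrative, not the paper's algorithm): users with larger privacy budgets are Poisson-sampled into each round's batch at higher rates.

```python
import numpy as np

# Assumed sketch: per-user sampling rates scaled by privacy budgets,
# producing one Poisson-sampled batch of a DP-SGD-style loop.

def personalized_batch(budgets: np.ndarray, base_rate: float,
                       rng: np.random.Generator) -> np.ndarray:
    """Return indices of users sampled this round; rate grows with budget."""
    rates = np.clip(base_rate * budgets / budgets.max(), 0.0, 1.0)
    return np.flatnonzero(rng.random(budgets.shape) < rates)

rng = np.random.default_rng(1)
budgets = np.array([0.5, 1.0, 2.0, 4.0])  # illustrative per-user epsilons
batch = personalized_batch(budgets, base_rate=0.8, rng=rng)
```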
arXiv Detail & Related papers (2023-05-24T13:56:57Z)
- DP2-Pub: Differentially Private High-Dimensional Data Publication with Invariant Post Randomization [58.155151571362914]
We propose a differentially private high-dimensional data publication mechanism (DP2-Pub) that runs in two phases.
Splitting attributes into several low-dimensional clusters with high intra-cluster cohesion and low inter-cluster coupling helps obtain a reasonable privacy budget.
We also extend our DP2-Pub mechanism to the scenario with a semi-honest server which satisfies local differential privacy.
arXiv Detail & Related papers (2022-08-24T17:52:43Z)
- Smoothed Differential Privacy [55.415581832037084]
Differential privacy (DP) is a widely-accepted and widely-applied notion of privacy based on worst-case analysis.
In this paper, we propose a natural extension of DP following the worst average-case idea behind the celebrated smoothed analysis.
We prove that any discrete mechanism with sampling procedures is more private than what DP predicts, while many continuous mechanisms with sampling procedures are still non-private under smoothed DP.
arXiv Detail & Related papers (2021-07-04T06:55:45Z)
- A One-Pass Private Sketch for Most Machine Learning Tasks [48.17461258268463]
Differential privacy (DP) is a compelling privacy definition that explains the privacy-utility tradeoff via formal, provable guarantees.
We propose a private sketch that supports a multitude of machine learning tasks including regression, classification, density estimation, and more.
Our sketch consists of randomized contingency tables that are indexed with locality-sensitive hashing and constructed with an efficient one-pass algorithm.
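A rough sketch of that structure under stated assumptions (the bucket indexing, sensitivity-1 counting, and Laplace noise are illustrative choices, not necessarily the paper's): hash each record once, count buckets in a single pass, and noise the table at the end.

```python
import numpy as np

# Assumed sketch: a one-pass, LSH-indexed count table with Laplace
# noise added once at the end.

def private_lsh_sketch(data: np.ndarray, planes: np.ndarray,
                       epsilon: float, rng: np.random.Generator) -> np.ndarray:
    counts = np.zeros(2 ** planes.shape[0])
    for row in data:                            # single pass over the data
        bits = (planes @ row >= 0).astype(int)
        counts[int("".join(map(str, bits)), 2)] += 1
    # Each record touches exactly one bucket, so L1 sensitivity is 1.
    return counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)

rng = np.random.default_rng(2)
planes = rng.standard_normal((8, 16))           # 8 hash bits, 16-dim data
sketch = private_lsh_sketch(rng.standard_normal((1000, 16)), planes, 1.0, rng)
```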
arXiv Detail & Related papers (2020-06-16T17:47:48Z)