Oneshot Differentially Private Top-k Selection
- URL: http://arxiv.org/abs/2105.08233v1
- Date: Tue, 18 May 2021 02:18:01 GMT
- Title: Oneshot Differentially Private Top-k Selection
- Authors: Gang Qiao, Weijie J. Su, Li Zhang
- Abstract summary: We introduce a fast, low-distortion, and differentially private primitive for the top-$k$ problem.
Compared with existing approaches in the literature, our algorithm adds Laplace noise to the counts and releases the top-$k$ elements together with their noisy count estimates in a oneshot fashion.
- Score: 23.88111547236874
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Being able to efficiently and accurately select the top-$k$ elements without
privacy leakage is an integral component of various data analysis tasks and has
gained significant attention. In this paper, we introduce the \textit{oneshot
mechanism}, a fast, low-distortion, and differentially private primitive for
the top-$k$ problem. Compared with existing approaches in the literature, our
algorithm adds Laplace noise to the counts and releases the top-$k$ elements
together with their noisy count estimates in a oneshot fashion, thereby
substantially reducing the computational cost while maintaining satisfactory
utility. Our proof of
privacy for this mechanism relies on a novel coupling technique that is of
independent theoretical interest. Finally, we apply the oneshot mechanism to
multiple hypothesis testing and ranking from pairwise comparisons and thus
obtain their differentially private counterparts.
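As a rough illustration of the primitive described in the abstract, the sketch below adds one round of Laplace noise to every count and releases the $k$ items with the largest noisy counts together with those noisy values. The noise scale is left as a free parameter and the paper's calibration and privacy proof are not reproduced, so this is a hedged sketch of the general idea rather than the authors' exact algorithm.

```python
import numpy as np

def oneshot_topk(counts, k, noise_scale, rng=None):
    """Illustrative oneshot-style top-k release.

    One round of Laplace noise is added to every count; the k items with the
    largest noisy counts are released together with those noisy counts.
    `noise_scale` must be calibrated to the desired privacy level (the paper
    derives the appropriate calibration, which is not reproduced here).
    """
    rng = np.random.default_rng() if rng is None else rng
    counts = np.asarray(counts, dtype=float)
    noisy = counts + rng.laplace(scale=noise_scale, size=counts.shape)
    top = np.argsort(noisy)[::-1][:k]          # indices of the k largest noisy counts
    return top.tolist(), noisy[top].tolist()   # released items and their noisy estimates

# Example: release a private top-3 from ten item counts (scale chosen arbitrarily).
items, estimates = oneshot_topk([120, 95, 400, 87, 310, 55, 60, 290, 10, 5],
                                k=3, noise_scale=10.0)
```

Because the noise is added only once to all counts, the cost is a single sort of the noisy counts, which is what makes the oneshot approach fast relative to iterative peeling mechanisms.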
Related papers
- Differentially Private Random Feature Model [52.468511541184895]
We produce a differentially private random feature model for privacy-preserving kernel machines.
We show that our method preserves privacy and derive a generalization error bound for the method.
arXiv Detail & Related papers (2024-12-06T05:31:08Z) - Perturb-and-Project: Differentially Private Similarities and Marginals [73.98880839337873]
We revisit the input perturbations framework for differential privacy where noise is added to the input $A \in \mathcal{S}$.
We first design novel efficient algorithms to privately release pair-wise cosine similarities.
We derive a novel algorithm to compute $k$-way marginal queries over $n$ features.
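A minimal sketch of the input-perturbation idea summarized above, assuming Gaussian noise is added directly to the rows of $A$ before computing pairwise cosine similarities; the paper's projection step and privacy calibration are not reproduced, and `sigma` is a placeholder parameter.

```python
import numpy as np

def perturbed_cosine_similarities(A, sigma, rng=None):
    """Add Gaussian noise to the input matrix A (one row per record), then
    release all pairwise cosine similarities of the noisy rows. Calibrating
    `sigma` to a formal DP guarantee is the subject of the paper and is
    deliberately omitted here."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = np.asarray(A, dtype=float) + rng.normal(scale=sigma, size=np.shape(A))
    norms = np.linalg.norm(noisy, axis=1, keepdims=True)
    unit = noisy / np.clip(norms, 1e-12, None)
    return unit @ unit.T   # (n x n) matrix of noisy cosine similarities
```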
arXiv Detail & Related papers (2024-06-07T12:07:16Z) - Privacy Induces Robustness: Information-Computation Gaps and Sparse Mean Estimation [8.9598796481325]
We investigate the consequences of this observation for both algorithms and computational complexity across different statistical problems.
We establish an information-computation gap for private sparse mean estimation.
We also give evidence for privacy-induced information-computation gaps for several other statistics and learning problems.
arXiv Detail & Related papers (2022-11-01T20:03:41Z) - Smooth Anonymity for Sparse Graphs [69.1048938123063]
Differential privacy has emerged as the gold standard of privacy; however, it becomes difficult to apply when it comes to sharing sparse datasets.
In this work, we consider a variation of $k$-anonymity, which we call smooth-$k$-anonymity, and design simple large-scale algorithms that efficiently provide smooth-$k$-anonymity.
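For orientation only, the snippet below shows ordinary $k$-anonymity by suppression (drop every quasi-identifier group with fewer than $k$ records); the paper's smooth-$k$-anonymity variant and its large-scale algorithms are not reproduced here, so this is just the baseline notion the paper builds on.

```python
from collections import Counter

def k_anonymize_by_suppression(records, quasi_ids, k):
    """Plain k-anonymity (not the paper's smooth variant): keep a record only
    if at least k records share its quasi-identifier combination."""
    keys = [tuple(r[q] for q in quasi_ids) for r in records]
    group_sizes = Counter(keys)
    return [r for r, key in zip(records, keys) if group_sizes[key] >= k]

# Example with hypothetical records and quasi-identifiers.
rows = [{"zip": "19104", "age": 31, "deg": 4},
        {"zip": "19104", "age": 31, "deg": 7},
        {"zip": "94043", "age": 52, "deg": 1}]
print(k_anonymize_by_suppression(rows, quasi_ids=("zip", "age"), k=2))
```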
arXiv Detail & Related papers (2022-07-13T17:09:25Z) - Learning Numeric Optimal Differentially Private Truncated Additive Mechanisms [5.079561894598125]
We introduce a tool to learn truncated noise for additive mechanisms with strong utility bounds.
For sensitivity-bounded mechanisms, we show that it is sufficient to consider symmetric noise distributions that fall monotonically away from the mean.
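To make that noise shape concrete, here is a hedged sketch of an additive mechanism with symmetric, monotonically decaying noise truncated to a bounded support; the truncated Laplace distribution sampled below (by rejection) is only an example of that shape, not the learned optimal noise from the paper.

```python
import numpy as np

def truncated_laplace_noise(scale, bound, size, rng=None):
    """Sample symmetric noise that decays monotonically away from 0 and is
    truncated to [-bound, bound], via simple rejection sampling."""
    rng = np.random.default_rng() if rng is None else rng
    out = np.empty(size)
    filled = 0
    while filled < size:
        draw = rng.laplace(scale=scale, size=size)
        keep = draw[np.abs(draw) <= bound]
        take = min(size - filled, keep.size)
        out[filled:filled + take] = keep[:take]
        filled += take
    return out

def additive_mechanism(true_answer, scale, bound, rng=None):
    """Release a query answer plus one draw of truncated symmetric noise."""
    return true_answer + truncated_laplace_noise(scale, bound, size=1, rng=rng)[0]
```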
arXiv Detail & Related papers (2021-07-27T17:22:57Z) - On a Utilitarian Approach to Privacy Preserving Text Generation [5.123298347655088]
We propose a class of differentially private mechanisms that parameterizes the nearest neighbor selection criterion in traditional mechanisms.
Motivated by the Vickrey auction, in which only the second-highest price is revealed and the highest price is kept private, we balance the choice between the first and second nearest neighbors.
Experiments on real text classification datasets show up to 50% improvement in utility compared to the existing state-of-the-art with the same empirical privacy guarantee.
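The sketch below illustrates that selection idea under stated assumptions: a word vector is perturbed with Gaussian noise and the output token is drawn from either the first or second nearest neighbor of the noisy vector according to a mixing weight `p`. The noise distribution, the parameterization, and the calibration used in the paper are not reproduced; all names here are hypothetical.

```python
import numpy as np

def private_word_replacement(word_vec, vocab_vecs, vocab_words, sigma, p, rng=None):
    """Perturb a word embedding with Gaussian noise, then return either the
    nearest or the second-nearest vocabulary word of the noisy vector,
    choosing the second-nearest with probability p (hypothetical parameters)."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = np.asarray(word_vec, float) + rng.normal(scale=sigma, size=len(word_vec))
    dists = np.linalg.norm(np.asarray(vocab_vecs, float) - noisy, axis=1)
    first, second = np.argsort(dists)[:2]      # requires at least two vocabulary entries
    return vocab_words[second] if rng.random() < p else vocab_words[first]
```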
arXiv Detail & Related papers (2021-04-23T23:13:43Z) - Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
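As a small illustration of user-level (rather than record-level) privacy, the sketch below estimates a mean by first averaging each user's samples, clipping the per-user averages, and then adding Laplace noise scaled to the sensitivity of one entire user's contribution. This is a standard textbook-style construction, not the sequence-of-queries algorithm from the paper.

```python
import numpy as np

def user_level_private_mean(user_samples, clip, epsilon, rng=None):
    """user_samples: list of 1-D arrays, one array per user.
    Each user's data is reduced to a single clipped average, so replacing one
    user's data changes the mean of clipped averages by at most 2*clip/n
    (replacement, i.e. bounded, user-level DP)."""
    rng = np.random.default_rng() if rng is None else rng
    per_user = np.array([np.clip(np.mean(s), -clip, clip) for s in user_samples])
    n = len(per_user)
    sensitivity = 2.0 * clip / n    # user-level sensitivity of the mean of averages
    return per_user.mean() + rng.laplace(scale=sensitivity / epsilon)
```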
arXiv Detail & Related papers (2021-02-23T18:25:13Z) - Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling [49.43288037509783]
We show that random shuffling amplifies differential privacy guarantees of locally randomized data.
Our result is based on a new approach that is simpler than previous work and extends to approximate differential privacy with nearly the same guarantees.
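A minimal shuffle-model sketch: each user locally randomizes one bit with randomized response, and the analyst only sees a uniformly shuffled multiset of the reports. The amplification bounds proved in the paper are not reproduced here; this only shows the pipeline being analyzed.

```python
import numpy as np

def randomized_response(bit, eps_local, rng):
    """Local randomizer: report the true bit with probability e^eps/(e^eps + 1)."""
    p_truth = np.exp(eps_local) / (np.exp(eps_local) + 1.0)
    return bit if rng.random() < p_truth else 1 - bit

def shuffled_reports(bits, eps_local, rng=None):
    """Apply the local randomizer to every user's bit, then shuffle the
    reports so the analyst cannot link any report back to a user."""
    rng = np.random.default_rng() if rng is None else rng
    reports = np.array([randomized_response(b, eps_local, rng) for b in bits])
    return rng.permutation(reports)
```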
arXiv Detail & Related papers (2020-12-23T17:07:26Z) - Differential Privacy of Hierarchical Census Data: An Optimization Approach [53.29035917495491]
Census Bureaus are interested in releasing aggregate socio-economic data about a large population without revealing sensitive information about any individual.
Recent events have identified some of the privacy challenges faced by these organizations.
This paper presents a novel differential-privacy mechanism for releasing hierarchical counts of individuals.
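For context, a naive baseline for hierarchical counts simply adds Laplace noise to the leaf counts and aggregates the noisy leaves upward, so the released hierarchy is internally consistent by construction; the paper's optimization-based mechanism is not reproduced here.

```python
import numpy as np

def release_hierarchical_counts(leaf_counts, groups, epsilon, rng=None):
    """Naive baseline: add Laplace(1/epsilon) noise to each leaf count, then
    build every higher-level count by summing its noisy leaves.

    leaf_counts: dict leaf_id -> count
    groups: dict group_id -> list of leaf_ids belonging to that group
    """
    rng = np.random.default_rng() if rng is None else rng
    noisy_leaves = {leaf: c + rng.laplace(scale=1.0 / epsilon)
                    for leaf, c in leaf_counts.items()}
    noisy_groups = {g: sum(noisy_leaves[leaf] for leaf in leaves)
                    for g, leaves in groups.items()}
    return noisy_leaves, noisy_groups

# Example: counts per tract, aggregated to hypothetical counties.
leaves = {"tract_a": 120, "tract_b": 75, "tract_c": 310}
counties = {"county_1": ["tract_a", "tract_b"], "county_2": ["tract_c"]}
print(release_hierarchical_counts(leaves, counties, epsilon=1.0))
```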
arXiv Detail & Related papers (2020-06-28T18:19:55Z) - Tight Differential Privacy for Discrete-Valued Mechanisms and for the Subsampled Gaussian Mechanism Using FFT [6.929834518749884]
We propose a numerical accountant for evaluating the tight $(\varepsilon,\delta)$-privacy loss for algorithms with discrete one dimensional output.
We show that our approach allows decreasing noise variance up to 75 percent at equal privacy compared to existing bounds in the literature.
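The snippet below sketches the general privacy-loss-distribution idea behind such accountants: discretize the privacy loss of one mechanism on a grid, compose $k$ runs by convolving the distribution with itself, and read off $\delta(\varepsilon)$. The FFT evaluation, discretization error control, and subsampled-Gaussian specifics from the paper are not reproduced; the (non-subsampled) Gaussian mechanism and the grid below are only illustrative.

```python
import numpy as np

def gaussian_pld(sigma, grid):
    """Discretized privacy loss distribution of one Gaussian mechanism with
    sensitivity 1 and noise std sigma: the loss is N(mu, 2*mu), mu = 1/(2*sigma^2)."""
    mu = 1.0 / (2.0 * sigma ** 2)
    step = grid[1] - grid[0]
    density = np.exp(-(grid - mu) ** 2 / (4.0 * mu)) / np.sqrt(4.0 * np.pi * mu)
    pmf = density * step
    return pmf / pmf.sum()                      # renormalize the truncated grid

def compose(pmf, k):
    """k-fold homogeneous composition = (k-1) self-convolutions of the PLD.
    (The paper does this with FFT and careful error bounds; plain convolution
    is used here for clarity.)"""
    out = pmf.copy()
    for _ in range(k - 1):
        out = np.convolve(out, pmf)
    return out

def delta_at(eps, grid_start, step, pmf):
    """Tight conversion delta(eps) = E[(1 - exp(eps - L))_+] over the PLD."""
    losses = grid_start + step * np.arange(len(pmf))
    tail = losses > eps
    return float(np.sum(pmf[tail] * (1.0 - np.exp(eps - losses[tail]))))

# Example: 10 compositions of a Gaussian mechanism with sigma = 5 (illustrative grid).
grid = np.linspace(-1.0, 1.0, 2001)
step = grid[1] - grid[0]
pmf_1 = gaussian_pld(sigma=5.0, grid=grid)
pmf_k = compose(pmf_1, k=10)
print(delta_at(eps=0.5, grid_start=10 * grid[0], step=step, pmf=pmf_k))
```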
arXiv Detail & Related papers (2020-06-12T12:46:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.