Differentially Private Aggregation via Imperfect Shuffling
- URL: http://arxiv.org/abs/2308.14733v1
- Date: Mon, 28 Aug 2023 17:34:52 GMT
- Title: Differentially Private Aggregation via Imperfect Shuffling
- Authors: Badih Ghazi, Ravi Kumar, Pasin Manurangsi, Jelani Nelson, Samson Zhou
- Abstract summary: We introduce the imperfect shuffle differential privacy model, where messages are shuffled in an almost uniform manner before being observed by a curator for private aggregation.
We show that surprisingly, there is no additional error overhead necessary in the imperfect shuffle model.
- Score: 64.19885806149958
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we introduce the imperfect shuffle differential privacy model, where messages sent from users are shuffled in an almost uniform manner before being observed by a curator for private aggregation. We then consider the private summation problem. We show that the standard split-and-mix protocol by Ishai et al. [FOCS 2006] can be adapted to achieve near-optimal utility bounds in the imperfect shuffle model. Specifically, we show that surprisingly, there is no additional error overhead necessary in the imperfect shuffle model.
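The split-and-mix idea can be illustrated with a short sketch: each user splits its value into additive shares modulo a fixed modulus, all shares are shuffled together, and the curator sums the shuffled shares to recover the total. This is a minimal illustration only; the modulus `Q`, the share count `k`, and the function names are assumptions for the example (not from the paper), and `random.shuffle` stands in for the (imperfect) shuffler, with no noise addition shown.

```python
import random
import secrets

Q = 2**32  # assumed modulus; inputs are integers mod Q

def split(x, k, q=Q):
    """Split x into k additive shares mod q: k-1 uniform shares plus a correction."""
    shares = [secrets.randbelow(q) for _ in range(k - 1)]
    shares.append((x - sum(shares)) % q)  # shares sum to x mod q
    return shares

def aggregate(values, k=5, q=Q):
    """Each user sends k shares; the shuffler mixes them; the curator sums."""
    messages = [s for x in values for s in split(x, k, q)]
    random.shuffle(messages)  # stand-in for the (imperfect) shuffler
    return sum(messages) % q  # shuffling does not change the modular sum

# aggregate([3, 5, 7]) recovers 15, yet each individual share is uniform.
```

Because each user's first k-1 shares are uniformly random, any proper subset of a user's shares reveals nothing about its input; only the full modular sum is recoverable.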
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvements in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z) - Efficient Fault-Tolerant Quantum Protocol for Differential Privacy in the Shuffle Model [2.0794380287086214]
We present a quantum protocol which implicitly implements a random shuffle to realize differential privacy in the shuffle model.
The shuffle model amplifies the privacy guarantees of the messages contributed by data holders.
Examples include implementing the shuffle via mix-networks, or shuffling via a trusted third-party.
We propose a quantum version of the protocol using entanglement of quantum states and show that the shuffle can be implemented without these extra requirements.
arXiv Detail & Related papers (2024-09-06T04:53:19Z) - Optimal Private Discrete Distribution Estimation with One-bit Communication [63.413106413939836]
We consider a private discrete distribution estimation problem with one-bit communication constraint.
We characterize the first-order terms of the worst-case privacy-utility trade-off under the one-bit communication constraint.
These results demonstrate the optimal dependence of the privacy-utility trade-off under the one-bit communication constraint.
arXiv Detail & Related papers (2023-10-17T05:21:19Z) - Concurrent Shuffle Differential Privacy Under Continual Observation [60.127179363119936]
We study the private continual summation problem (a.k.a. the counter problem).
We show that the concurrent shuffle model allows for significantly improved error compared to a standard (single) shuffle model.
arXiv Detail & Related papers (2023-01-29T20:42:54Z) - Tight Differential Privacy Guarantees for the Shuffle Model with $k$-Randomized Response [6.260747047974035]
Most differentially private (DP) algorithms assume a third party inserts noise to queries made on datasets, or a local model where the users locally perturb their data.
The recently proposed shuffle model is an intermediate framework between the central and the local paradigms.
We perform experiments on both synthetic and real data to compare the privacy-utility trade-off of the shuffle model with that of the central model.
arXiv Detail & Related papers (2022-05-18T10:44:28Z) - Mixed Differential Privacy in Computer Vision [133.68363478737058]
AdaMix is an adaptive differentially private algorithm for training deep neural network classifiers using both private and public image data.
A few-shot or even zero-shot learning baseline that ignores private data can outperform fine-tuning on a large private dataset.
arXiv Detail & Related papers (2022-03-22T06:15:43Z) - Shuffle Private Linear Contextual Bandits [9.51828574518325]
We propose a general framework for linear contextual bandits under the shuffle algorithmic trust model.
We prove that both instantiations lead to regret guarantees that significantly improve on that of the local model.
We also verify this regret behavior with simulations on synthetic data.
arXiv Detail & Related papers (2022-02-11T11:53:22Z) - Don't Generate Me: Training Differentially Private Generative Models with Sinkhorn Divergence [73.14373832423156]
We propose DP-Sinkhorn, a novel optimal transport-based generative method for learning data distributions from private data with differential privacy.
Unlike existing approaches for training differentially private generative models, we do not rely on adversarial objectives.
arXiv Detail & Related papers (2021-11-01T18:10:21Z) - On the Round Complexity of the Shuffle Model [25.454520433661646]
The shuffle model of differential privacy was proposed as a viable model for performing distributed differentially private computations.
We show how two parties can use one round of the shuffle to send secret messages without having to first establish a secret key.
We examine differentially private computations in the shuffle model that do not require the assumption of an honest majority.
arXiv Detail & Related papers (2020-09-28T17:57:42Z) - Connecting Robust Shuffle Privacy and Pan-Privacy [11.367579037903734]
In the shuffle model of differential privacy, data-holding users send randomized messages to a secure shuffler, and the shuffler permutes the messages.
In the pan-private model, an algorithm processes a stream of data while maintaining an internal state that is differentially private with regard to the stream data.
Our results focus on robustly shuffle private protocols, whose privacy guarantees are not greatly affected by malicious users.
arXiv Detail & Related papers (2020-04-20T17:58:14Z)
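The shuffle-model pipeline described above (local randomization followed by a permutation that hides message origins) can be sketched in a few lines. This is a generic illustration assuming binary inputs and binary randomized response as the local randomizer; the function names and the parameter `eps` are illustrative, not taken from any of the listed papers.

```python
import math
import random

def randomize(bit, eps=1.0):
    """Binary randomized response: report truthfully with prob e^eps / (e^eps + 1)."""
    p = math.exp(eps) / (math.exp(eps) + 1)
    return bit if random.random() < p else 1 - bit

def shuffle_and_collect(bits, eps=1.0):
    """Each user randomizes locally; the shuffler permutes the messages,
    so the analyzer observes only the unordered multiset of reports."""
    msgs = [randomize(b, eps) for b in bits]
    random.shuffle(msgs)
    return msgs
```

Because the analyzer sees only the shuffled multiset, the effective privacy of each user's report is amplified relative to the local model, which is the phenomenon these shuffle-model papers analyze and exploit.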
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality or accuracy of the listed information and is not responsible for any consequences of its use.