Distributed Differential Privacy in Multi-Armed Bandits
- URL: http://arxiv.org/abs/2206.05772v1
- Date: Sun, 12 Jun 2022 15:37:36 GMT
- Title: Distributed Differential Privacy in Multi-Armed Bandits
- Authors: Sayak Ray Chowdhury, Xingyu Zhou
- Abstract summary: We consider the standard $K$-armed bandit problem under a distributed trust model of differential privacy (DP).
We aim to obtain a pure-DP guarantee under the distributed trust model while sacrificing no more regret than under the central trust model.
- Score: 9.51828574518325
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the standard $K$-armed bandit problem under a distributed trust
model of differential privacy (DP), which enables privacy guarantees without
a trustworthy server. Under this trust model, previous work largely focuses on
achieving privacy via a shuffle protocol, in which a batch of users' data is
randomly permuted before being sent to a central server. This protocol achieves
($\epsilon,\delta$) or approximate-DP guarantee by sacrificing an additional
additive $O\!\left(\!\frac{K\log T\sqrt{\log(1/\delta)}}{\epsilon}\!\right)\!$
cost in $T$-step cumulative regret. In contrast, the optimal privacy cost for
achieving a stronger ($\epsilon,0$) or pure-DP guarantee under the widely used
central trust model is only $\Theta\!\left(\!\frac{K\log
T}{\epsilon}\!\right)\!$, where, however, a trusted server is required. In this
work, we aim to obtain a pure-DP guarantee under the distributed trust model
while sacrificing no more regret than under the central trust model. We achieve this
by designing a generic bandit algorithm based on successive arm elimination,
where privacy is guaranteed by corrupting rewards with an equivalent discrete
Laplace noise ensured by a secure computation protocol. We also show that our
algorithm, when instantiated with Skellam noise and the secure protocol,
ensures \emph{R\'{e}nyi differential privacy} -- a stronger notion than
approximate DP -- under the distributed trust model with a privacy cost of
$O\!\left(\!\frac{K\sqrt{\log T}}{\epsilon}\!\right)\!$.
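The abstract's recipe (successive arm elimination, with each phase's reward sums corrupted by discrete Laplace noise) can be sketched as below. This is not the authors' implementation: the phase schedule, confidence width, and per-phase noise scale are illustrative assumptions, and the secure multi-party computation that would produce the noise in a distributed deployment is replaced by a local sampler.

```python
import math
import random


def discrete_laplace(scale, rng):
    """Sample discrete Laplace noise: P(k) proportional to exp(-|k| / scale).

    A discrete Laplace variable equals the difference of two i.i.d.
    geometric variables with success probability p = 1 - exp(-1 / scale).
    """
    p = 1.0 - math.exp(-1.0 / scale)
    # Geometric (failures before first success) via inverse-CDF sampling.
    g = lambda: int(math.log(1.0 - rng.random()) / math.log(1.0 - p))
    return g() - g()


def successive_elimination(pull, n_arms, epsilon, horizon, seed=0):
    """Eliminate arms phase by phase, privatizing each phase's reward sums.

    `pull(arm)` returns a reward in [0, 1], so a phase sum has sensitivity 1
    and discrete Laplace noise of scale 1/epsilon privatizes that phase
    (composition across phases is ignored in this sketch).
    """
    rng = random.Random(seed)
    active = list(range(n_arms))
    pulls_per_phase, total = 16, 0
    while len(active) > 1 and total < horizon:
        sums = {}
        for arm in active:
            s = sum(pull(arm) for _ in range(pulls_per_phase))
            total += pulls_per_phase
            # Corrupt the integer-rounded phase sum with discrete Laplace noise.
            sums[arm] = round(s) + discrete_laplace(1.0 / epsilon, rng)
        best = max(sums.values())
        # Illustrative Hoeffding-style confidence width on the mean scale.
        width = math.sqrt(2.0 * math.log(horizon) / pulls_per_phase)
        active = [a for a in active
                  if sums[a] >= best - 2.0 * width * pulls_per_phase]
        pulls_per_phase *= 2
    return active[0]
```

With a large reward gap and a moderate `epsilon`, the noisy phase sums separate quickly and the suboptimal arms are dropped within a few phases; the paper's Skellam-noise instantiation would replace `discrete_laplace` with a difference of two Poisson samples.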
Related papers
- Differential Privacy on Trust Graphs [54.55190841518906]
We study differential privacy (DP) in a multi-party setting where each party only trusts a (known) subset of the other parties with its data.
We give a DP algorithm for aggregation with a much better privacy-utility trade-off than in the well-studied local model of DP.
arXiv Detail & Related papers (2024-10-15T20:31:04Z) - A Generalized Shuffle Framework for Privacy Amplification: Strengthening Privacy Guarantees and Enhancing Utility [4.7712438974100255]
We show how to shuffle in the $(\epsilon_i,\delta_i)$-PLDP setting with personalized privacy parameters.
We prove that the shuffled $(\epsilon_i,\delta_i)$-PLDP process approximately preserves $\mu$-Gaussian differential privacy with $\mu = \sqrt{\frac{2}{\sum_{i=1}^{n}\frac{1-\delta_i}{1+e^{\epsilon_i}}-\max_i\frac{1-\delta_i}{1+e^{\epsilon_i}}}}$.
arXiv Detail & Related papers (2023-12-22T02:31:46Z) - Analyzing Privacy Leakage in Machine Learning via Multiple Hypothesis Testing: A Lesson From Fano [83.5933307263932]
We study data reconstruction attacks for discrete data and analyze them under the framework of hypothesis testing.
We show that if the underlying private data takes values from a set of size $M$, then the target privacy parameter $\epsilon$ can be $O(\log M)$ before the adversary gains significant inferential power.
arXiv Detail & Related papers (2022-10-24T23:50:12Z) - Smooth Anonymity for Sparse Graphs [69.1048938123063]
Differential privacy has emerged as the gold standard of privacy; however, it can be limiting when it comes to sharing sparse datasets.
In this work, we consider a variation of $k$-anonymity, which we call smooth-$k$-anonymity, and design simple large-scale algorithms that efficiently provide smooth-$k$-anonymity.
arXiv Detail & Related papers (2022-07-13T17:09:25Z) - Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent [69.14164921515949]
We characterize privacy guarantees for individual examples when releasing models trained by DP-SGD.
We find that most examples enjoy stronger privacy guarantees than the worst-case bound.
This implies groups that are underserved in terms of model utility simultaneously experience weaker privacy guarantees.
arXiv Detail & Related papers (2022-06-06T13:49:37Z) - Shuffle Private Linear Contextual Bandits [9.51828574518325]
We propose a general framework for linear contextual bandits under the shuffle algorithmic trust model.
We prove that both instantiations lead to regret guarantees that significantly improve on that of the local model.
We also verify this regret behavior with simulations on synthetic data.
arXiv Detail & Related papers (2022-02-11T11:53:22Z) - Privacy Amplification via Shuffling for Linear Contextual Bandits [51.94904361874446]
We study the contextual linear bandit problem with differential privacy (DP).
We show that it is possible to achieve a privacy/utility trade-off between JDP and LDP by leveraging the shuffle model of privacy while preserving local privacy.
arXiv Detail & Related papers (2021-12-11T15:23:28Z) - Infinitely Divisible Noise in the Low Privacy Regime [9.39772079241093]
Federated learning, in which training data is distributed among users and never shared, has emerged as a popular approach to privacy-preserving machine learning.
We present the first infinitely divisible noise distribution for real-valued data that achieves $\varepsilon$-differential privacy.
arXiv Detail & Related papers (2021-10-13T08:16:43Z) - On the Renyi Differential Privacy of the Shuffle Model [25.052851351062845]
In the shuffle model, each of the $n$ clients randomizes its response using a local differentially private (LDP) mechanism and the untrusted server only receives a random permutation (shuffle) of the client responses without association to each client.
The principal result in this paper is the first non-trivial guarantee for general discrete local randomization mechanisms in the shuffled privacy model.
arXiv Detail & Related papers (2021-05-11T16:34:09Z) - Frequency Estimation Under Multiparty Differential Privacy: One-shot and Streaming [10.952006057356714]
We study the fundamental problem of frequency estimation under both privacy and communication constraints, where the data is distributed among $k$ parties.
We adopt the model of multiparty differential privacy (MDP), which is more general than local differential privacy (LDP) and (centralized) differential privacy.
Our protocols achieve optimality (up to logarithmic factors) permissible by the more stringent of the two constraints.
arXiv Detail & Related papers (2021-04-05T08:15:20Z) - Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling [49.43288037509783]
We show that random shuffling amplifies differential privacy guarantees of locally randomized data.
Our result is based on a new approach that is simpler than previous work and extends to approximate differential privacy with nearly the same guarantees.
arXiv Detail & Related papers (2020-12-23T17:07:26Z)
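Several of the papers above rely on the shuffle model, whose pipeline is simple to sketch: each client applies a local randomizer (here binary randomized response, a standard $\epsilon_0$-LDP mechanism), a shuffler forwards a uniformly random permutation of the reports, and the server aggregates without knowing which report came from which client. This is a minimal illustration, not any of the cited papers' protocols:

```python
import math
import random


def randomized_response(bit, eps0, rng):
    """Binary randomized response: report the true bit with probability
    e^eps0 / (e^eps0 + 1), which satisfies eps0-local DP."""
    keep = math.exp(eps0) / (math.exp(eps0) + 1.0)
    return bit if rng.random() < keep else 1 - bit


def shuffle_reports(bits, eps0, seed=0):
    """Each client randomizes locally; the shuffler applies a uniform random
    permutation, so the server sees reports detached from client identity."""
    rng = random.Random(seed)
    reports = [randomized_response(b, eps0, rng) for b in bits]
    rng.shuffle(reports)
    return reports


def debiased_mean(reports, eps0):
    """Unbiased estimate of the true mean of the clients' bits.

    E[report] = (1 - keep) + bit * (2 * keep - 1), so inverting that
    affine map debiases the empirical mean of the reports.
    """
    keep = math.exp(eps0) / (math.exp(eps0) + 1.0)
    return (sum(reports) / len(reports) - (1.0 - keep)) / (2.0 * keep - 1.0)
```

The amplification results cited above say, informally, that after shuffling the server-side view satisfies a much smaller central $\epsilon$ than the per-client $\epsilon_0$; the sketch only shows the mechanics, not that accounting.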
This list is automatically generated from the titles and abstracts of the papers in this site.