Shuffle Private Linear Contextual Bandits
- URL: http://arxiv.org/abs/2202.05567v1
- Date: Fri, 11 Feb 2022 11:53:22 GMT
- Title: Shuffle Private Linear Contextual Bandits
- Authors: Sayak Ray Chowdhury and Xingyu Zhou
- Abstract summary: We propose a general algorithmic framework for linear contextual bandits under the shuffle trust model.
We prove that both instantiations lead to regret guarantees that significantly improve on those of the local model.
We also verify this regret behavior with simulations on synthetic data.
- Score: 9.51828574518325
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Differential privacy (DP) has recently been introduced to linear contextual bandits to formally address the privacy concerns in the associated personalized services offered to participating users (e.g., recommendations). Prior work largely focuses on two trust models of DP: the central model, where a central server is responsible for protecting users' sensitive data, and the (stronger) local model, where information needs to be protected directly on the user side. However, there remains a fundamental gap in the utility achieved by learning algorithms under these two privacy models, e.g., $\tilde{O}(\sqrt{T})$ regret in the central model as compared to $\tilde{O}(T^{3/4})$ regret in the local model, if all users are unique within a learning horizon $T$. In this work, we aim to achieve a stronger model of trust than the central model, while suffering a smaller regret than the local model, by considering the recently popular shuffle model of privacy. We propose a general algorithmic framework for linear contextual bandits under the shuffle trust model, where a trusted shuffler sits between the users and the central server and randomly permutes a batch of users' data before sending it to the server. We then instantiate this framework with two specific shuffle protocols: one relying on privacy amplification of local mechanisms, and another incorporating a protocol for summing vectors and matrices of bounded norms. We prove that both instantiations lead to regret guarantees that significantly improve on those of the local model, and can potentially be of the order $\tilde{O}(T^{3/5})$ if all users are unique. We also verify this regret behavior with simulations on synthetic data. Finally, under the practical scenario of non-unique users, we show that the regret of our shuffle private algorithm scales as $\tilde{O}(T^{2/3})$, which matches what the central model could achieve in this case.
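For intuition, the batched pipeline the abstract describes can be sketched in a few lines. The following is a minimal, hypothetical NumPy sketch, not the paper's calibrated protocol: all names, the Gaussian local randomizer, the noise scale `sigma`, and the batch size are illustrative assumptions. Each user locally randomizes the sufficient statistics of its round, the trusted shuffler uniformly permutes the batch of messages, and the server aggregates whatever it receives into a ridge-regression estimate of the unknown parameter.

```python
# Hypothetical sketch of a shuffle-model batch: local randomization,
# uniform shuffling, then server-side aggregation. Noise scales and the
# batch schedule are placeholders, not the paper's calibrated parameters.
import numpy as np

rng = np.random.default_rng(0)
d = 5              # context dimension
batch_size = 50    # users per shuffled batch (placeholder)
sigma = 1.0        # local Gaussian noise scale (placeholder, not tied to (eps, delta))

def local_randomizer(x, y):
    """Each user privatizes its per-round statistics (x x^T, x y) with Gaussian noise."""
    noisy_outer = np.outer(x, x) + rng.normal(0.0, sigma, size=(d, d))
    noisy_xy = x * y + rng.normal(0.0, sigma, size=d)
    return noisy_outer, noisy_xy

def shuffler(messages):
    """Trusted shuffler: uniformly permutes the batch, hiding which user sent what."""
    perm = rng.permutation(len(messages))
    return [messages[i] for i in perm]

def server_aggregate(shuffled, V, u):
    """Server only sees the shuffled batch; it folds it into its running statistics."""
    for noisy_outer, noisy_xy in shuffled:
        V += noisy_outer
        u += noisy_xy
    theta_hat = np.linalg.solve(V, u)   # ridge-regression estimate
    return V, u, theta_hat

# One simulated batch with a fixed unknown parameter theta_star.
theta_star = rng.normal(size=d)
V = np.eye(d)                 # ridge regularization
u = np.zeros(d)
messages = []
for _ in range(batch_size):
    x = rng.normal(size=d)
    x /= np.linalg.norm(x)                       # bounded-norm context
    y = x @ theta_star + rng.normal(0.0, 0.1)    # noisy reward
    messages.append(local_randomizer(x, y))

V, u, theta_hat = server_aggregate(shuffler(messages), V, u)
print("estimation error:", np.linalg.norm(theta_hat - theta_star))
```

In the paper's two instantiations the local noise would instead be calibrated either through privacy amplification of a local mechanism or through a bounded-norm vector/matrix summation protocol; the sketch only shows where the shuffler sits in the data flow.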
Related papers
- DMM: Distributed Matrix Mechanism for Differentially-Private Federated Learning using Packed Secret Sharing [51.336015600778396]
Federated Learning (FL) has gained lots of traction recently, both in industry and academia.
In FL, a machine learning model is trained using data from various end-users arranged in committees across several rounds.
Since such data can often be sensitive, a primary challenge in FL is providing privacy while still retaining utility of the model.
arXiv Detail & Related papers (2024-10-21T16:25:14Z) - Enhanced Privacy Bound for Shuffle Model with Personalized Privacy [32.08637708405314]
The shuffle model of Differential Privacy (DP) is an enhanced privacy protocol which introduces an intermediate trusted server between local users and a central data curator.
It significantly amplifies the central DP guarantee by anonymizing and shuffling the locally randomized data.
This work focuses on deriving the central privacy bound for a more practical setting where personalized local privacy is required by each user.
arXiv Detail & Related papers (2024-07-25T16:11:56Z) - A Generalized Shuffle Framework for Privacy Amplification: Strengthening Privacy Guarantees and Enhancing Utility [4.7712438974100255]
We show how to shuffle the $(\epsilon_i,\delta_i)$-PLDP setting with personalized privacy parameters.
We prove that the shuffled $(\epsilon_i,\delta_i)$-PLDP process approximately preserves $\mu$-Gaussian Differential Privacy with $\mu = \sqrt{\frac{2}{\sum_{i=1}^{n} \frac{1-\delta_i}{1+e^{\epsilon_i}} - \max_i \frac{1-\delta_i}{1+e^{\epsilon_i}}}}$.
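For reference, here is a tiny hypothetical helper that evaluates the reconstructed $\mu$ above for given per-user parameters; the example values of $\epsilon_i$ and $\delta_i$ are made up for illustration.

```python
# Hypothetical helper: evaluate the mu-GDP parameter quoted above for
# personalized per-user privacy parameters (eps_i, delta_i).
import math

def gdp_mu(eps, delta):
    # terms_i = (1 - delta_i) / (1 + e^{eps_i})
    terms = [(1.0 - d) / (1.0 + math.exp(e)) for e, d in zip(eps, delta)]
    # mu = sqrt( 2 / (sum_i terms_i - max_i terms_i) )
    return math.sqrt(2.0 / (sum(terms) - max(terms)))

# Illustrative example: 1000 users, half with eps = 0.5, half with eps = 1.0.
eps = [0.5] * 500 + [1.0] * 500
delta = [1e-6] * 1000
print(gdp_mu(eps, delta))   # smaller mu means a stronger central guarantee
```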
arXiv Detail & Related papers (2023-12-22T02:31:46Z) - Share Your Representation Only: Guaranteed Improvement of the Privacy-Utility Tradeoff in Federated Learning [47.042811490685324]
Mitigating the risk of such information leakage using state-of-the-art differentially private algorithms also does not come for free.
In this paper, we consider a representation learning objective that various parties collaboratively refine on a federated model, with differential privacy guarantees.
We observe a significant performance improvement over the prior work under the same small privacy budget.
arXiv Detail & Related papers (2023-09-11T14:46:55Z) - Differentially Private Aggregation via Imperfect Shuffling [64.19885806149958]
We introduce the imperfect shuffle differential privacy model, where messages are shuffled in an almost uniform manner before being observed by a curator for private aggregation.
We show that surprisingly, there is no additional error overhead necessary in the imperfect shuffle model.
arXiv Detail & Related papers (2023-08-28T17:34:52Z) - Smooth Anonymity for Sparse Graphs [69.1048938123063]
Differential privacy has emerged as the gold standard of privacy; however, it poses challenges when it comes to sharing sparse datasets.
In this work, we consider a variation of $k$-anonymity, which we call smooth-$k$-anonymity, and design simple large-scale algorithms that efficiently provide smooth-$k$-anonymity.
arXiv Detail & Related papers (2022-07-13T17:09:25Z) - Distributed Differential Privacy in Multi-Armed Bandits [9.51828574518325]
We consider the standard $K$-armed bandit problem under a distributed trust model of differential privacy (DP).
We aim to obtain a pure-DP guarantee under the distributed trust model while sacrificing no more regret than that under the central trust model.
arXiv Detail & Related papers (2022-06-12T15:37:36Z) - Tight Differential Privacy Guarantees for the Shuffle Model with $k$-Randomized Response [6.260747047974035]
Most differentially private (DP) algorithms assume either a trusted third party that inserts noise into queries made on datasets, or a local model where the users perturb their data locally.
The recently proposed shuffle model is an intermediate framework between the central and the local paradigms.
We perform experiments on both synthetic and real data to compare the privacy-utility trade-off of the shuffle model with that of the central model.
arXiv Detail & Related papers (2022-05-18T10:44:28Z) - Just Fine-tune Twice: Selective Differential Privacy for Large Language Models [69.66654761324702]
We propose a simple yet effective just-fine-tune-twice privacy mechanism to achieve SDP for large Transformer-based language models.
Experiments show that our models achieve strong performance while staying robust to the canary insertion attack.
arXiv Detail & Related papers (2022-04-15T22:36:55Z) - Don't Generate Me: Training Differentially Private Generative Models with Sinkhorn Divergence [73.14373832423156]
We propose DP-Sinkhorn, a novel optimal transport-based generative method for learning data distributions from private data with differential privacy.
Unlike existing approaches for training differentially private generative models, we do not rely on adversarial objectives.
arXiv Detail & Related papers (2021-11-01T18:10:21Z) - The Limits of Pan Privacy and Shuffle Privacy for Learning and Estimation [3.2942333712377083]
We show that for a variety of high-dimensional learning and estimation problems, the shuffle model and the pan-private model incur an exponential price in sample complexity relative to the central model.
Our work gives the first non-trivial lower bounds for these problems for both the pan-private model and the general multi-message shuffle model.
arXiv Detail & Related papers (2020-09-17T01:15:55Z)