On Differentially Private Federated Linear Contextual Bandits
- URL: http://arxiv.org/abs/2302.13945v2
- Date: Wed, 31 May 2023 15:32:28 GMT
- Title: On Differentially Private Federated Linear Contextual Bandits
- Authors: Xingyu Zhou and Sayak Ray Chowdhury
- Abstract summary: We consider the cross-silo federated linear contextual bandit (LCB) problem under differential privacy.
We identify three issues in the state-of-the-art: (i) failure of the claimed privacy protection, (ii) an incorrect regret bound due to noise miscalculation, and (iii) ungrounded communication cost.
We show that our algorithm can achieve nearly "optimal" regret without a trusted server.
- Score: 9.51828574518325
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the cross-silo federated linear contextual bandit (LCB) problem under differential privacy, where multiple silos (agents) interact with their local users and communicate via a central server to realize collaboration without sacrificing each user's privacy. We identify three issues in the state-of-the-art: (i) failure of the claimed privacy protection, (ii) an incorrect regret bound due to noise miscalculation, and (iii) ungrounded communication cost. To resolve these issues, we take a two-step principled approach. First, we design an algorithmic framework consisting of a generic federated LCB algorithm and flexible privacy protocols. Then, leveraging the proposed framework, we study federated LCBs under two different privacy constraints. We first establish privacy and regret guarantees under silo-level local differential privacy, which fix the issues present in the state-of-the-art algorithm. To further improve the regret performance, we next consider the shuffle model of differential privacy, under which we show that our algorithm can achieve nearly "optimal" regret without a trusted server. We accomplish this via two different schemes: one relies on a new result on privacy amplification via shuffling for DP mechanisms, and the other leverages the integration of a shuffle protocol for vector sums into the tree-based mechanism; both might be of independent interest. Finally, we support our theoretical results with numerical evaluations on contextual bandit instances generated from both synthetic and real-life data.
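As a concrete illustration of the tree-based mechanism referenced in the abstract, below is a minimal sketch of the classic binary-tree ("p-sum") aggregator for releasing noisy running sums of a vector-valued stream, the kind of primitive a silo could use to privatize its running statistics (e.g., flattened Gram matrices and reward-weighted feature sums) before communication. This is a generic sketch, not the authors' protocol: the function name, the per-node Gaussian noise scale sigma, and the noise distribution are illustrative assumptions, and the paper's integration of a shuffle protocol for vector sums is not shown.

```python
import numpy as np

def tree_private_prefix_sums(stream, sigma, rng=None):
    """Binary-tree ("p-sum") mechanism: release a noisy running sum of a
    vector stream.  Each element touches at most log2(T)+1 tree nodes and
    each released prefix combines at most log2(T)+1 noisy nodes, so the
    injected error grows only polylogarithmically in the stream length T.
    `sigma` is a per-node Gaussian noise scale; calibrating it to a target
    (epsilon, delta) is outside this sketch."""
    rng = rng or np.random.default_rng(0)
    stream = [np.asarray(x, dtype=float) for x in stream]
    shape = stream[0].shape
    exact = {}   # tree level -> exact partial sum currently stored there
    noisy = {}   # tree level -> noisy copy of that partial sum
    released = []
    for t, x in enumerate(stream, start=1):
        i = (t & -t).bit_length() - 1          # index of the lowest set bit of t
        # merge all lower levels (now closed) into the new node at level i
        exact[i] = x + sum((exact[j] for j in range(i)), np.zeros(shape))
        for j in range(i):
            exact.pop(j, None)
            noisy.pop(j, None)
        noisy[i] = exact[i] + rng.normal(0.0, sigma, size=shape)
        # the prefix sum at time t combines the noisy nodes of t's set bits
        s = np.zeros(shape)
        for j in range(t.bit_length()):
            if (t >> j) & 1:
                s += noisy[j]
        released.append(s)
    return released

# Example: privately track the running sum of eight 3-dimensional vectors.
# sums = tree_private_prefix_sums(np.random.default_rng(1).normal(size=(8, 3)), sigma=0.1)
```

Because each stream element affects only logarithmically many tree nodes, the noise needed per node (and hence the regret overhead of the private statistics) stays polylogarithmic in the horizon, which is what makes tree-based aggregation attractive here.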
Related papers
- Federated Cubic Regularized Newton Learning with Sparsification-amplified Differential Privacy [10.396575601912673]
We introduce a federated learning algorithm called Differentially Private Federated Cubic Regularized Newton (DP-FCRN).
By leveraging second-order techniques, our algorithm achieves lower iteration complexity compared to first-order methods.
We also incorporate noise perturbation during local computations to ensure privacy.
arXiv Detail & Related papers (2024-08-08T08:48:54Z)
- Deciphering the Interplay between Local Differential Privacy, Average Bayesian Privacy, and Maximum Bayesian Privacy [5.622065847054885]
We introduce Bayesian privacy and delve into the relationship between LDP and its Bayesian counterparts, unveiling novel insights into utility-privacy trade-offs.
Our work not only lays the groundwork for future empirical exploration but also promises to facilitate the design of privacy-preserving algorithms.
arXiv Detail & Related papers (2024-03-25T10:06:45Z)
- Provable Privacy with Non-Private Pre-Processing [56.770023668379615]
We propose a general framework to evaluate the additional privacy cost incurred by non-private data-dependent pre-processing algorithms.
Our framework establishes upper bounds on the overall privacy guarantees by utilising two new technical notions.
arXiv Detail & Related papers (2024-03-19T17:54:49Z)
- Concentrated Differential Privacy for Bandits [11.086440815804227]
This paper contributes to the understanding of Differential Privacy (DP) in bandits with a trusted centralised decision-maker.
We propose three private algorithms, namely AdaC-UCB, AdaC-GOPE and AdaC-OFUL, for three bandit settings.
arXiv Detail & Related papers (2023-09-01T16:08:00Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- Privacy Amplification via Shuffling for Linear Contextual Bandits [51.94904361874446]
We study the contextual linear bandit problem with differential privacy (DP).
We show that it is possible to achieve a privacy/utility trade-off between joint DP (JDP) and local DP (LDP) by leveraging the shuffle model of privacy while still preserving local privacy.
arXiv Detail & Related papers (2021-12-11T15:23:28Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
- Topology-aware Differential Privacy for Decentralized Image Classification [81.2202290003513]
Top-DP is a novel solution to optimize the differential privacy protection of decentralized image classification systems.
We leverage the unique features of decentralized communication topologies to reduce the noise scale and improve the model usability.
arXiv Detail & Related papers (2020-06-14T06:42:21Z)
- Differentially Private k-Means Clustering with Guaranteed Convergence [5.335316436366718]
Iterative clustering algorithms help us learn insights from the data.
However, they may allow adversaries with some background knowledge to infer sensitive information about individuals.
To protect individual privacy against such inference attacks, preserving differential privacy (DP) for iterative clustering algorithms has been extensively studied; a generic sketch of one such private iteration is given below.
arXiv Detail & Related papers (2020-02-03T22:53:47Z)
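For the differentially private k-means entry above, the following is a generic, textbook-style sketch of one private Lloyd iteration, not the specific algorithm of the cited paper: cluster counts and coordinate sums are perturbed with Laplace noise before the centroid update. The function name, the assumption that data lie in [0, 1]^d, and the even split of the per-iteration budget eps are all illustrative choices.

```python
import numpy as np

def dp_kmeans_step(X, centers, eps, rng=None):
    """One differentially private Lloyd iteration (generic sketch).
    Assumes each data point lies in [0, 1]^d, so adding/removing a point
    changes one cluster's count by at most 1 and its coordinate-sum vector
    by at most d in L1 norm; eps is split evenly between the two queries."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    k = centers.shape[0]
    # assign each point to its nearest current center
    dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    labels = dists.argmin(axis=1)
    new_centers = centers.copy()
    for j in range(k):
        members = X[labels == j]
        # Laplace noise: scale 2/eps for the count (sensitivity 1),
        # scale 2*d/eps for the coordinate sums (L1 sensitivity d)
        noisy_count = len(members) + rng.laplace(0.0, 2.0 / eps)
        exact_sum = members.sum(axis=0) if len(members) else np.zeros(d)
        noisy_sum = exact_sum + rng.laplace(0.0, 2.0 * d / eps, size=d)
        if noisy_count >= 1.0:   # keep the old center for (nearly) empty clusters
            new_centers[j] = np.clip(noisy_sum / noisy_count, 0.0, 1.0)
    return new_centers

# Example: one private update of 3 centers on toy data in [0, 1]^2.
# rng = np.random.default_rng(1)
# X = rng.uniform(size=(200, 2))
# centers = dp_kmeans_step(X, rng.uniform(size=(3, 2)), eps=1.0)
```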