Federated Recommendation System via Differential Privacy
- URL: http://arxiv.org/abs/2005.06670v2
- Date: Sat, 16 May 2020 04:11:01 GMT
- Title: Federated Recommendation System via Differential Privacy
- Authors: Tan Li, Linqi Song and Christina Fragouli
- Abstract summary: We explore how differential privacy based Upper Confidence Bound (UCB) methods can be applied to multi-agent environments.
We provide a theoretical analysis of the privacy and regret performance of the proposed methods.
- Score: 31.0963615274522
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we are interested in what we term the federated private
bandits framework, that combines differential privacy with multi-agent bandit
learning. We explore how differential privacy based Upper Confidence Bound
(UCB) methods can be applied to multi-agent environments, and in particular to
federated learning environments, both in 'master-worker' and 'fully
decentralized' settings. We provide a theoretical analysis of the privacy and
regret performance of the proposed methods and explore the tradeoffs between
these two.
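As a concrete reference point, here is a minimal single-agent sketch of the kind of Laplace-noise UCB index the paper builds on; the function names and the naive per-sum noising (a real algorithm would use a tree-based mechanism for tighter budget accounting) are our own illustrative assumptions, not the paper's algorithm.

```python
import math
import random

def laplace(scale: float) -> float:
    # Laplace(0, scale) as the difference of two exponential samples
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def privatize_sum(reward_sum: float, epsilon: float) -> float:
    """Release a reward sum under epsilon-DP via the Laplace mechanism;
    rewards are assumed to lie in [0, 1], so the sum has sensitivity 1."""
    return reward_sum + laplace(1.0 / epsilon)

def dp_ucb_select(counts, noisy_sums, t: int, epsilon: float) -> int:
    """Choose an arm from a UCB index built on privatized reward sums.

    counts[i]     -- number of pulls of arm i (treated as public)
    noisy_sums[i] -- privatized running reward sum of arm i
    """
    for i, n in enumerate(counts):
        if n == 0:
            return i                                  # pull every arm once first
    best_idx, best_val = 0, float("-inf")
    for i, n in enumerate(counts):
        mean = noisy_sums[i] / n
        # standard exploration bonus plus a term absorbing the Laplace noise
        bonus = math.sqrt(2.0 * math.log(t) / n) + math.log(t) / (epsilon * n)
        if mean + bonus > best_val:
            best_idx, best_val = i, mean + bonus
    return best_idx
```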
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on the data and allows for defining non-sensitive spatio-temporal regions without DP application, or for combining differential privacy with other privacy techniques within data samples.
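A toy sketch of the masking idea, under assumed names: Laplace noise is applied only to the entries a boolean mask flags as sensitive, leaving the rest of the sample untouched.

```python
import numpy as np

def masked_dp(frame: np.ndarray, mask: np.ndarray, epsilon: float) -> np.ndarray:
    """Apply the Laplace mechanism only inside the sensitive region.

    frame   -- data sample, e.g. an image with values scaled to [0, 1]
    mask    -- boolean array of the same shape; True marks sensitive pixels
    epsilon -- privacy budget spent on the masked region
    """
    out = frame.copy()
    # per-pixel sensitivity is 1 for values in [0, 1] -> Laplace scale 1/epsilon
    noise = np.random.laplace(0.0, 1.0 / epsilon, size=frame.shape)
    out[mask] = np.clip(frame[mask] + noise[mask], 0.0, 1.0)
    return out
```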
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- Convergent Differential Privacy Analysis for General Federated Learning: the $f$-DP Perspective [57.35402286842029]
Federated learning (FL) is an efficient collaborative training paradigm with a focus on local privacy.
Differential privacy (DP) is a classical approach to capture and ensure the reliability of privacy protections.
arXiv Detail & Related papers (2024-08-28T08:22:21Z)
- Centering Policy and Practice: Research Gaps around Usable Differential Privacy [12.340264479496375]
We argue that while differential privacy is a clean formulation in theory, it poses significant challenges in practice.
To bridge the gaps between differential privacy's promises and its real-world usability, researchers and practitioners must work together.
arXiv Detail & Related papers (2024-06-17T21:32:30Z)
- Differentially Private Federated Learning: A Systematic Review [35.13641504685795]
We propose a new taxonomy of differentially private federated learning based on the definitions and guarantees of various differential privacy models and scenarios.
Our work provides valuable insights into privacy-preserving federated learning and suggests practical directions for future research.
arXiv Detail & Related papers (2024-05-14T03:49:14Z)
- Federated Learning on Riemannian Manifolds with Differential Privacy [8.75592575216789]
A malicious adversary can potentially infer sensitive information through various means.
We propose a generic private FL framework based on the differential privacy (DP) technique.
We analyze the privacy guarantee while establishing the convergence properties.
Numerical simulations are performed on synthetic and real-world datasets to showcase the efficacy of the proposed PriRFed approach.
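A minimal sketch of the general recipe, assuming the unit sphere as the manifold: each client's Riemannian gradient is clipped and Gaussian-noised in the tangent space, then the server averages and retracts. The function names and the sphere-specific retraction are illustrative assumptions, not PriRFed's implementation.

```python
import numpy as np

def tangent_project(x: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Project an ambient vector v onto the tangent space of the unit sphere at x."""
    return v - np.dot(x, v) * x

def retract(x: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Retraction on the sphere: step along v, then renormalize onto the manifold."""
    y = x + v
    return y / np.linalg.norm(y)

def private_riemannian_step(x, client_grads, lr=0.1, sigma=0.5, clip=1.0):
    """One round: clip each client's tangent-space gradient, add Gaussian noise
    for DP, average on the server, and retract back onto the sphere."""
    noisy = []
    for g in client_grads:
        rg = tangent_project(x, g)                      # Riemannian gradient
        rg = rg / max(1.0, np.linalg.norm(rg) / clip)   # clip: bounds sensitivity
        rg = rg + np.random.normal(0.0, sigma * clip, size=rg.shape)
        noisy.append(tangent_project(x, rg))            # keep the noise tangent
    avg = sum(noisy) / len(noisy)
    return retract(x, -lr * avg)
```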
arXiv Detail & Related papers (2024-04-15T12:32:20Z)
- On Differentially Private Online Predictions [74.01773626153098]
We introduce an interactive variant of joint differential privacy for handling online processes.
We demonstrate that it satisfies suitable variants of group privacy, composition, and post-processing.
We then study the cost of interactive joint privacy in the basic setting of online classification.
arXiv Detail & Related papers (2023-02-27T19:18:01Z)
- On Differentially Private Federated Linear Contextual Bandits [9.51828574518325]
We consider the cross-silo federated linear contextual bandit (LCB) problem under differential privacy.
We identify two issues in the state-of-the-art: (i) failure of the claimed privacy protection and (ii) an incorrect regret bound due to noise miscalculation.
We show that our algorithm can achieve "nearly optimal" regret without a trusted server.
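For intuition about the mechanism class under analysis, the sketch below has each silo perturb its LinUCB sufficient statistics with Gaussian noise before sharing; the names are ours, and real algorithms use tree-based aggregation with a regularizer sized to keep the noisy Gram matrix positive definite.

```python
import numpy as np

def privatize_stats(V: np.ndarray, u: np.ndarray, sigma: float):
    """Perturb one silo's LinUCB sufficient statistics before sharing.

    V -- d x d Gram matrix  (sum of x x^T over local rounds)
    u -- d-vector           (sum of r * x over local rounds)
    The Gaussian noise on V is symmetrized so the aggregate stays symmetric.
    """
    d = V.shape[0]
    N = np.random.normal(0.0, sigma, size=(d, d))
    return V + (N + N.T) / np.sqrt(2.0), u + np.random.normal(0.0, sigma, size=d)

def aggregate_and_estimate(silo_stats, d: int, lam: float = 1.0):
    """Server side: sum the privatized statistics and solve ridge regression;
    lam must be large enough to keep the noisy Gram matrix well conditioned."""
    V = lam * np.eye(d)
    u = np.zeros(d)
    for V_i, u_i in silo_stats:
        V, u = V + V_i, u + u_i
    return np.linalg.solve(V, u), V   # parameter estimate and design matrix
```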
arXiv Detail & Related papers (2023-02-27T16:47:49Z)
- Debugging Differential Privacy: A Case Study for Privacy Auditing [60.87570714269048]
We show that auditing can also be used to find flaws in (purportedly) differentially private schemes.
In this case study, we audit a recent open source implementation of a differentially private deep learning algorithm and find, with 99.99999999% confidence, that the implementation does not satisfy the claimed differential privacy guarantee.
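A toy version of the auditing recipe, with our own naming: run a distinguishing attack on two neighboring datasets and convert the attack's error rates into an empirical lower bound on epsilon. A rigorous audit like the paper's replaces the plug-in rates below with Clopper-Pearson confidence bounds.

```python
import math

def audit_epsilon(mechanism, data0, data1, attack, trials: int = 100_000) -> float:
    """Empirical lower bound on epsilon via a distinguishing attack.

    mechanism    -- the (claimed) DP algorithm under audit
    data0, data1 -- neighboring datasets differing in one record
    attack       -- maps a mechanism output to a guess (1 = "came from data1")
    """
    hits0 = sum(attack(mechanism(data0)) for _ in range(trials))
    hits1 = sum(attack(mechanism(data1)) for _ in range(trials))
    fpr = max(hits0 / trials, 1.0 / trials)   # attack fires on data0 (false positive)
    tpr = max(hits1 / trials, 1.0 / trials)   # attack fires on data1 (true positive)
    # any epsilon-DP mechanism must satisfy TPR <= exp(epsilon) * FPR
    return math.log(tpr / fpr)
```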
arXiv Detail & Related papers (2022-02-24T17:31:08Z)
- Privacy Amplification via Shuffling for Linear Contextual Bandits [51.94904361874446]
We study the contextual linear bandit problem with differential privacy (DP).
We show that it is possible to achieve a privacy/utility trade-off between joint DP (JDP) and local DP (LDP) by leveraging the shuffle model of privacy while preserving local privacy.
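A bare-bones sketch of the shuffle model itself (names assumed): each user applies a local randomizer, and the shuffler merely permutes the reports before the analyzer sees them, which is the step the amplification argument exploits.

```python
import math
import random

def local_randomizer(bit: int, epsilon_local: float) -> int:
    """epsilon_local-LDP randomized response on a single bit."""
    p_keep = math.exp(epsilon_local) / (math.exp(epsilon_local) + 1.0)
    return bit if random.random() < p_keep else 1 - bit

def shuffle_and_release(bits, epsilon_local):
    """Each user privatizes locally; the shuffler strips identities by
    permuting the reports.  Amplification results show the shuffled output
    satisfies a central epsilon much smaller than epsilon_local."""
    reports = [local_randomizer(b, epsilon_local) for b in bits]
    random.shuffle(reports)   # the trusted shuffler's only job
    return reports
```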
arXiv Detail & Related papers (2021-12-11T15:23:28Z)
- Differentially-Private Federated Linear Bandits [15.609414012418043]
FedUCB is a multi-agent private algorithm for both centralized and decentralized (peer-to-peer) federated learning.
We provide a rigorous technical analysis of its utility in terms of regret, improving several results in cooperative bandit learning, along with rigorous privacy guarantees.
arXiv Detail & Related papers (2020-10-22T03:58:39Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
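Private optimism-based RL algorithms of this kind typically release visit and reward counts through a private counter; below is a sketch of the standard binary-tree counter such constructions rely on, with an assumed class name.

```python
import math
import random

class PrivateCounter:
    """Binary-tree counter (Chan, Shi & Song, 2011): releases the running sum
    of a length-T stream under epsilon-DP with O(log T / epsilon) error."""

    def __init__(self, T: int, epsilon: float):
        self.L = max(1, math.ceil(math.log2(T + 1)))  # tree levels
        self.scale = self.L / epsilon                 # Laplace scale per p-sum
        self.alpha = [0.0] * self.L                   # exact partial sums
        self.noisy = [0.0] * self.L                   # privatized partial sums
        self.t = 0

    def _laplace(self) -> float:
        # Laplace(0, scale) as a difference of two exponential samples
        return random.expovariate(1 / self.scale) - random.expovariate(1 / self.scale)

    def add(self, x: float) -> float:
        """Ingest one increment (e.g. a 0/1 state-visit indicator) and return
        the current privatized running count."""
        self.t += 1
        i = (self.t & -self.t).bit_length() - 1       # lowest set bit of t
        self.alpha[i] = sum(self.alpha[:i]) + x       # fold lower p-sums upward
        for j in range(i):
            self.alpha[j] = self.noisy[j] = 0.0
        self.noisy[i] = self.alpha[i] + self._laplace()
        # the running count is the sum of p-sums on t's binary expansion
        return sum(self.noisy[j] for j in range(self.L) if (self.t >> j) & 1)
```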
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.