Echo of Neighbors: Privacy Amplification for Personalized Private
Federated Learning with Shuffle Model
- URL: http://arxiv.org/abs/2304.05516v2
- Date: Fri, 26 May 2023 16:04:43 GMT
- Title: Echo of Neighbors: Privacy Amplification for Personalized Private
Federated Learning with Shuffle Model
- Authors: Yixuan Liu, Suyun Zhao, Li Xiong, Yuhan Liu, Hong Chen
- Abstract summary: Federated Learning, as a popular paradigm for collaborative training, is vulnerable to privacy attacks.
This work builds a general framework to strengthen model privacy under personalized local privacy by leveraging the privacy amplification effect of the shuffle model.
To the best of our knowledge, the impact of shuffling on personalized local privacy is considered for the first time.
- Score: 21.077469463027306
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning, as a popular paradigm for collaborative training, is
vulnerable to privacy attacks. Different local privacy levels, reflecting users'
varying privacy attitudes, need to be satisfied, while a strict privacy guarantee
for the global model is also required centrally. Personalized Local Differential
Privacy (PLDP) is suitable for preserving users' varying local privacy, yet
provides only a central privacy guarantee equivalent to the worst-case local
privacy level. Achieving strong central privacy as well as personalized local
privacy with a utility-promising model is therefore a challenging problem. In this
work, a general framework (APES) is built to strengthen model privacy under
personalized local privacy by leveraging the privacy amplification effect of
the shuffle model. To tighten the privacy bound, we quantify the heterogeneous
contributions to the central privacy guarantee user by user. The contributions are
characterized by each user's ability to generate "echoes" through its perturbation,
which is measured by the proposed Neighbor Divergence and Clip-Laplace Mechanism.
Furthermore, we propose a refined framework (S-APES) with a post-sparsification
technique to reduce privacy loss in high-dimensional scenarios. To the best of our
knowledge, the impact of shuffling on personalized local privacy is considered for
the first time. We provide a strong privacy amplification effect, with a bound
tighter than the baseline obtained from existing methods for uniform local privacy.
Experiments demonstrate that our frameworks ensure comparable or higher accuracy
for the global model.
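To make the setup concrete, here is a minimal sketch of personalized local privacy in the shuffle model: each user clips and perturbs a scalar update with Laplace noise calibrated to their own epsilon (in the spirit of, but not identical to, the paper's Clip-Laplace Mechanism), and a shuffler then permutes the reports before aggregation. All names and constants below are illustrative assumptions, not the paper's implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    def clip_laplace(update, eps, clip=1.0):
        # Clip to [-clip, clip]; a clipped scalar has sensitivity 2*clip,
        # so Laplace noise with scale 2*clip/eps gives eps-LDP for this user.
        clipped = float(np.clip(update, -clip, clip))
        return clipped + rng.laplace(scale=2.0 * clip / eps)

    # Personalized local privacy: every user picks their own epsilon.
    updates  = np.array([0.3, -1.7, 0.9, 0.2])   # hypothetical local updates
    epsilons = np.array([0.5, 1.0, 2.0, 8.0])    # per-user privacy levels

    reports = np.array([clip_laplace(u, e) for u, e in zip(updates, epsilons)])

    # Shuffle model: a random permutation unlinks reports from users;
    # this anonymity is what amplifies the local guarantees centrally.
    rng.shuffle(reports)

    estimate = reports.mean()                    # server-side aggregation

The paper's contribution is the analysis of how much amplification such shuffling yields when the epsilons differ per user; the code only illustrates the data flow.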
Related papers
- Enhancing Feature-Specific Data Protection via Bayesian Coordinate Differential Privacy [55.357715095623554]
Local Differential Privacy (LDP) offers strong privacy guarantees without requiring users to trust external parties.
We propose a Bayesian framework, Bayesian Coordinate Differential Privacy (BCDP), that enables feature-specific privacy quantification.
arXiv Detail & Related papers (2024-10-24T03:39:55Z)
- Models Matter: Setting Accurate Privacy Expectations for Local and Central Differential Privacy [14.40391109414476]
We design and evaluate new explanations of differential privacy for the local and central models.
We find that consequences-focused explanations in the style of privacy nutrition labels are a promising approach for setting accurate privacy expectations.
arXiv Detail & Related papers (2024-08-16T01:21:57Z)
- Secure Aggregation is Not Private Against Membership Inference Attacks [66.59892736942953]
We investigate the privacy implications of SecAgg in federated learning.
We show that SecAgg offers weak privacy against membership inference attacks even in a single training round.
Our findings underscore the imperative for additional privacy-enhancing mechanisms, such as noise injection; a minimal sketch of that remedy appears below.
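A minimal sketch of the noise-injection remedy (a standard DP technique, not this paper's contribution): each client clips its update and adds Gaussian noise before aggregation, so even the exact sum that secure aggregation reveals carries differential privacy. Constants are illustrative, not tuned.

    import numpy as np

    rng = np.random.default_rng(1)

    def dp_secure_sum(client_updates, clip=1.0, sigma=0.8):
        # Clip each update so no client has unbounded influence, then add
        # Gaussian noise; only the noisy sum is ever released, so the sum
        # revealed by secure aggregation is itself differentially private.
        noisy = []
        for u in client_updates:
            u = u * min(1.0, clip / max(np.linalg.norm(u), 1e-12))
            noisy.append(u + rng.normal(scale=sigma, size=u.shape))
        return np.sum(noisy, axis=0)

    updates = [rng.normal(size=5) for _ in range(10)]  # hypothetical client updates
    released = dp_secure_sum(updates)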
arXiv Detail & Related papers (2024-03-26T15:07:58Z)
- Mean Estimation Under Heterogeneous Privacy: Some Privacy Can Be Free [13.198689566654103]
This work considers the problem of mean estimation under heterogeneous Differential Privacy constraints.
The algorithm we propose is shown to be minimax optimal when there are two groups of users with distinct privacy levels; a toy version of this setting is sketched below.
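A toy version of the heterogeneous setting (a simple inverse-variance estimator, not the paper's minimax-optimal algorithm): each user adds Laplace noise at their own epsilon, and the server down-weights the noisier reports.

    import numpy as np

    rng = np.random.default_rng(2)

    def heterogeneous_private_mean(x, epsilons, bound=1.0):
        # Each user clips to [-bound, bound] and adds Laplace noise with
        # scale 2*bound/eps_i; the server weights reports by the inverse
        # noise variance, so high-epsilon users count more.
        x = np.clip(x, -bound, bound)
        scales = 2.0 * bound / np.asarray(epsilons)     # per-user noise scale
        reports = x + rng.laplace(scale=scales)
        weights = 1.0 / (2.0 * scales**2)               # Var(Laplace(b)) = 2*b^2
        return np.sum(weights * reports) / np.sum(weights)

    xs = np.array([0.2, 0.8, -0.5, 0.1])                # hypothetical user values
    est = heterogeneous_private_mean(xs, [0.5, 1.0, 2.0, 8.0])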
arXiv Detail & Related papers (2023-04-27T05:23:06Z)
- Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
arXiv Detail & Related papers (2022-09-08T22:43:50Z)
- Group privacy for personalized federated learning [4.30484058393522]
Federated learning is a type of collaborative machine learning, where participating clients process their data locally, sharing only updates to the collaborative model.
We propose a method to provide group privacy guarantees exploiting some key properties of $d$-privacy.
arXiv Detail & Related papers (2022-06-07T15:43:45Z)
- Network Shuffling: Privacy Amplification via Random Walks [21.685747588753514]
We introduce network shuffling, a decentralized mechanism where users exchange data in a random-walk fashion on a network/graph.
We show that the privacy amplification rate is similar to that of other privacy amplification techniques, such as uniform shuffling; see the sketch below.
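A toy sketch of the random-walk idea (not the paper's exact protocol or analysis): each report hops to a uniformly random neighbor for a fixed number of steps, and whoever holds it at the end submits it, so a report's origin becomes hard to trace without a central shuffler. The topology and parameters are made up for illustration.

    import random

    def network_shuffle(reports, neighbors, steps=20, seed=0):
        # Walk each report `steps` hops along the graph, then submit it
        # from wherever it landed; finally shuffle the submission order.
        rng = random.Random(seed)
        submitted = []
        for user, report in reports.items():
            holder = user
            for _ in range(steps):
                holder = rng.choice(neighbors[holder])
            submitted.append((holder, report))
        rng.shuffle(submitted)
        return [r for _, r in submitted]

    # Hypothetical 4-user ring topology.
    neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    out = network_shuffle({0: "a", 1: "b", 2: "c", 3: "d"}, neighbors)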
arXiv Detail & Related papers (2022-04-08T08:36:06Z)
- Privacy Amplification via Shuffling for Linear Contextual Bandits [51.94904361874446]
We study the contextual linear bandit problem with differential privacy (DP).
We show that it is possible to achieve a privacy/utility trade-off between JDP and LDP by leveraging the shuffle model of privacy while preserving local privacy.
arXiv Detail & Related papers (2021-12-11T15:23:28Z)
- Privately Publishable Per-instance Privacy [21.775752827149383]
We consider how to privately share the personalized privacy losses incurred by objective perturbation, using per-instance differential privacy (pDP).
We analyze the per-instance privacy loss of releasing a private empirical risk minimizer learned via objective perturbation, and propose a group of methods to privately and accurately publish the pDP losses at little to no additional privacy cost.
arXiv Detail & Related papers (2021-11-03T15:17:29Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
- PGLP: Customizable and Rigorous Location Privacy through Policy Graph [68.3736286350014]
We propose a new location privacy notion called PGLP, which provides a rich interface for releasing private locations with customizable and rigorous privacy guarantees.
Specifically, we formalize a user's location privacy requirements using a location policy graph, which is expressive and customizable.
We also design a private location trace release framework that pipelines the detection of location exposure, policy graph repair, and private trajectory release with customizable and rigorous location privacy; a toy encoding of the policy-graph idea appears after this entry.
arXiv Detail & Related papers (2020-05-04T04:25:59Z)
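A hypothetical encoding of the policy-graph interface (the paper's formalism and release pipeline are far richer than this sketch): nodes are discrete locations, and an edge means the released output must not let an observer distinguish the two endpoints.

    # Hypothetical policy graph: an edge between two locations means the
    # release mechanism must make them indistinguishable to an observer.
    policy_graph = {
        "home":   {"cafe", "gym"},
        "cafe":   {"home", "office"},
        "gym":    {"home"},
        "office": {"cafe"},
    }

    def indistinguishable_from(location):
        # Locations the policy requires to be confusable with `location`.
        return policy_graph.get(location, set()) | {location}

    print(indistinguishable_from("home"))   # {'home', 'cafe', 'gym'}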