Network Shuffling: Privacy Amplification via Random Walks
- URL: http://arxiv.org/abs/2204.03919v1
- Date: Fri, 8 Apr 2022 08:36:06 GMT
- Title: Network Shuffling: Privacy Amplification via Random Walks
- Authors: Seng Pei Liew, Tsubasa Takahashi, Shun Takagi, Fumiyuki Kato, Yang
Cao, Masatoshi Yoshikawa
- Abstract summary: We introduce network shuffling, a decentralized mechanism where users exchange data in a random-walk fashion on a network/graph.
We show that the privacy amplification rate is similar to that of other privacy amplification techniques such as uniform shuffling.
- Score: 21.685747588753514
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, it has been shown that shuffling can amplify the central differential
privacy guarantees of data randomized with local differential privacy. In this
setup, a centralized, trusted shuffler performs the shuffling, keeping the
identities of the data anonymous, which in turn yields stronger privacy
guarantees for the system. However, introducing a centralized entity into the
originally local privacy model forfeits one of the appeals of local
differential privacy: the absence of any centralized entity. Moreover,
implementing a shuffler reliably is non-trivial due to known security issues
and/or the need for advanced hardware or secure-computation technology.
Motivated by these practical considerations, we rethink the shuffle model to
relax the assumption of a centralized, trusted shuffler. We introduce network
shuffling, a decentralized mechanism in which users exchange data in a
random-walk fashion on a network/graph, as an alternative way of achieving
privacy amplification via anonymity. We analyze the threat model under this
setting and propose distributed network-shuffling protocols that are
straightforward to implement in practice. Furthermore, we show that the privacy
amplification rate is similar to that of other privacy amplification
techniques such as uniform shuffling. To the best of our knowledge, among the
recently studied intermediate trust models that leverage privacy amplification
techniques, our work is the first that does not rely on any centralized entity
to achieve privacy amplification.
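The random-walk exchange described in the abstract can be illustrated with a toy simulation. This is a hypothetical sketch, not the authors' protocol: the function name, the ring graph, and the step count are all illustrative assumptions. Each user's locally randomized report hops between graph neighbors for a fixed number of steps, so the server receives reports decoupled from their originators.

```python
import random

def network_shuffle(reports, adjacency, steps, rng):
    """Walk each report `steps` hops over the user graph, then collect them.

    reports:   dict user_id -> locally randomized report
    adjacency: dict user_id -> list of neighbor user_ids
    """
    # Track who currently holds each user's report; initially, the user themself.
    holders = {user: user for user in reports}
    for _ in range(steps):
        for owner in holders:
            # Forward the report to a uniformly random neighbor of its holder.
            holders[owner] = rng.choice(adjacency[holders[owner]])
    # The server receives (final holder, report) pairs with origins hidden.
    received = [(holders[owner], report) for owner, report in reports.items()]
    rng.shuffle(received)
    return received

rng = random.Random(0)
# Toy 4-user ring network; each user holds a locally randomized bit.
adjacency = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
reports = {u: rng.randint(0, 1) for u in adjacency}
print(network_shuffle(reports, adjacency, steps=5, rng=rng))
```

The random walk plays the role of the trusted shuffler: after enough steps, a report's final holder carries little information about its origin.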
Related papers
- Differentially private and decentralized randomized power method [15.955127242261808]
We propose a strategy to reduce the variance of the noise introduced to achieve Differential Privacy (DP).
We adapt the method to a decentralized framework with a low computational and communication overhead, while preserving the accuracy.
We show that it is possible to use a noise scale in the decentralized setting that is similar to the one in the centralized setting.
arXiv Detail & Related papers (2024-11-04T09:53:03Z)
- Collaborative Inference over Wireless Channels with Feature Differential Privacy [57.68286389879283]
Collaborative inference among multiple wireless edge devices has the potential to significantly enhance Artificial Intelligence (AI) applications.
transmitting extracted features poses a significant privacy risk, as sensitive personal data can be exposed during the process.
We propose a novel privacy-preserving collaborative inference mechanism, wherein each edge device in the network secures the privacy of extracted features before transmitting them to a central server for inference.
arXiv Detail & Related papers (2024-10-25T18:11:02Z)
- Echo of Neighbors: Privacy Amplification for Personalized Private Federated Learning with Shuffle Model [21.077469463027306]
Federated Learning, as a popular paradigm for collaborative training, is vulnerable to privacy attacks.
This work strengthens model privacy under personalized local privacy by leveraging the privacy amplification effect of the shuffle model.
To the best of our knowledge, the impact of shuffling on personalized local privacy is considered for the first time.
arXiv Detail & Related papers (2023-04-11T21:48:42Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- Privacy Amplification via Shuffled Check-Ins [2.3333090554192615]
We study a protocol for distributed computation called shuffled check-in.
It achieves strong privacy guarantees without requiring any further trust assumptions beyond a trusted shuffler.
We show that shuffled check-in achieves tight privacy guarantees through privacy amplification.
arXiv Detail & Related papers (2022-06-07T09:55:15Z)
- Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since sensitive data are involved, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z)
- Privacy Amplification via Shuffling for Linear Contextual Bandits [51.94904361874446]
We study the contextual linear bandit problem with differential privacy (DP).
We show that it is possible to obtain a privacy/utility trade-off between joint DP (JDP) and local DP (LDP) by leveraging the shuffle model of privacy while preserving local privacy.
arXiv Detail & Related papers (2021-12-11T15:23:28Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
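The two ingredients named above can be made concrete with a minimal sketch of one differentially private gradient step: clip each per-example gradient to a norm bound, then add Gaussian noise to the sum. The function name and parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """One noisy, clipped gradient average in the style of DP-SGD."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down so its L2 norm is at most clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    summed = np.sum(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_example_grads)

rng = np.random.default_rng(0)
grads = [rng.normal(size=4) for _ in range(8)]
print(dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng))
```

Both operations bias and perturb the update direction, which is one intuition for why private training can interact with model robustness.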
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
The local exchange of estimates allows the inference of private data.
Prior schemes use perturbations chosen independently at every agent, resulting in a significant performance loss.
We propose an alternative scheme, which constructs perturbations according to a particular nullspace condition, allowing them to be invisible.
arXiv Detail & Related papers (2020-10-23T10:35:35Z)
- Privacy Amplification via Random Check-Ins [38.72327434015975]
Differentially Private Gradient Descent (DP-SGD) forms a fundamental building block in many applications for learning over sensitive data.
In this paper, we focus on conducting iterative methods like DP-SGD in the setting of federated learning (FL) wherein the data is distributed among many devices (clients)
Our main contribution is the random check-in distributed protocol, which crucially relies only on randomized participation decisions made locally and independently by each client.
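A participation rule of this flavor can be sketched in a few lines. This is a hypothetical illustration of the idea of independent, local check-in decisions, not the paper's exact protocol; the probability value is an arbitrary assumption.

```python
import random

def check_in(client_ids, p, rng):
    """Each client independently opts into this round with probability p."""
    return [c for c in client_ids if rng.random() < p]

rng = random.Random(0)
# Ten clients, each flipping its own coin; no coordinator picks the cohort.
print(check_in(range(10), p=0.3, rng=rng))
```

Because the server never chooses who participates, the sampling randomness itself contributes to the privacy amplification.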
arXiv Detail & Related papers (2020-07-13T18:14:09Z)
- Differentially private cross-silo federated learning [16.38610531397378]
Strict privacy is of paramount importance in distributed machine learning.
In this paper we combine additively homomorphic secure summation protocols with differential privacy in the so-called cross-silo federated learning setting.
We demonstrate that our proposed solutions give prediction accuracy that is comparable to the non-distributed setting.
arXiv Detail & Related papers (2020-07-10T18:15:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.