Graph-Homomorphic Perturbations for Private Decentralized Learning
- URL: http://arxiv.org/abs/2010.12288v1
- Date: Fri, 23 Oct 2020 10:35:35 GMT
- Title: Graph-Homomorphic Perturbations for Private Decentralized Learning
- Authors: Stefan Vlaski, Ali H. Sayed
- Abstract summary: The local exchange of estimates, which are generated from private data, can allow inference of the data itself.
Privacy-preserving perturbations are generally chosen independently at every agent, resulting in a significant performance loss.
We propose an alternative scheme, which constructs perturbations according to a particular nullspace condition, allowing them to be invisible (to first order in the step-size) to the network centroid.
- Score: 64.26238893241322
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decentralized algorithms for stochastic optimization and learning rely on the
diffusion of information as a result of repeated local exchanges of
intermediate estimates. Such structures are particularly appealing in
situations where agents may be hesitant to share raw data due to privacy
concerns. Nevertheless, in the absence of additional privacy-preserving
mechanisms, the exchange of local estimates, which are generated based on
private data can allow for the inference of the data itself. The most common
mechanism for guaranteeing privacy is the addition of perturbations to local
estimates before broadcasting. These perturbations are generally chosen
independently at every agent, resulting in a significant performance loss. We
propose an alternative scheme, which constructs perturbations according to a
particular nullspace condition, allowing them to be invisible (to first order
in the step-size) to the network centroid, while preserving privacy guarantees.
The analysis allows for general nonconvex loss functions, and is hence
applicable to a large number of machine learning and signal processing
problems, including deep learning.
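As a rough illustration of the nullspace idea in the abstract, the sketch below projects i.i.d. Gaussian perturbations so that their Perron-weighted sum vanishes, making the noise invisible to the network centroid while each agent's individual broadcast remains perturbed. The Perron weights `p`, the dimensions, and the centralized projection are illustrative assumptions only; the paper's scheme constructs such noise through local interactions rather than a global projection.

```python
import numpy as np

rng = np.random.default_rng(0)

K, d = 5, 3  # number of agents, parameter dimension (illustrative)
# Assumed Perron (left-eigen) vector of the combination matrix; entries sum to 1.
p = np.array([0.1, 0.3, 0.2, 0.25, 0.15])

# Standard scheme: i.i.d. perturbations drawn independently at every agent.
n = rng.normal(scale=1.0, size=(K, d))

# Enforce the nullspace condition sum_k p_k g_k = 0 by subtracting the
# Perron-weighted average from every agent's noise vector.
g = n - (p @ n)

# The perturbations now cancel at the Perron-weighted centroid ...
centroid_noise = p @ g  # ~0 up to floating-point error
# ... while each agent still broadcasts a genuinely noisy estimate.
per_agent_norms = np.linalg.norm(g, axis=1)  # all strictly positive
```

The projection leaves each agent's broadcast as noisy as before (preserving the privacy barrier against eavesdroppers), yet the aggregate dynamics that drive the network centroid see no noise to first order.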
Related papers
- Enhanced Privacy Bound for Shuffle Model with Personalized Privacy [32.08637708405314]
The shuffle model of Differential Privacy (DP) is an enhanced privacy protocol which introduces an intermediate trusted server between local users and a central data curator.
It significantly amplifies the central DP guarantee by anonymizing and shuffling the local randomized data.
This work focuses on deriving the central privacy bound for a more practical setting where personalized local privacy is required by each user.
arXiv Detail & Related papers (2024-07-25T16:11:56Z) - Privacy Preserving Semi-Decentralized Mean Estimation over Intermittently-Connected Networks [59.43433767253956]
We consider the problem of privately estimating the mean of vectors distributed across different nodes of an unreliable wireless network.
In a semi-decentralized setup, nodes can collaborate with their neighbors to compute a local consensus, which they relay to a central server.
We study the tradeoff between collaborative relaying and privacy leakage due to the data sharing among nodes.
arXiv Detail & Related papers (2024-06-06T06:12:15Z) - Differentially Private Distributed Estimation and Learning [2.4401219403555814]
We study distributed estimation and learning problems in a networked environment.
Agents exchange information to estimate unknown statistical properties of random variables from privately observed samples.
Agents can estimate the unknown quantities by exchanging information about their private observations, but they also face privacy risks.
arXiv Detail & Related papers (2023-06-28T01:41:30Z) - Differentially Private Distributed Convex Optimization [0.0]
In distributed optimization (DO), multiple agents cooperate to minimize a global objective function, expressed as a sum of local objectives.
Locally stored data are not shared with other agents, which could limit the practical usage of DO in applications with sensitive data.
We propose a privacy-preserving DO algorithm for constrained convex optimization models.
arXiv Detail & Related papers (2023-02-28T12:07:27Z) - Breaking the Communication-Privacy-Accuracy Tradeoff with
$f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP)
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
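Randomized response is perhaps the simplest discrete-valued mechanism of the kind studied in that line of work. The sketch below is a generic illustration of a finite-output local DP mechanism and its debiased estimator; the epsilon value and sample size are arbitrary, and this is not the paper's tight f-DP analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

eps = 1.0  # illustrative privacy budget
# Report the true bit with this probability; flip it otherwise.
# This two-output mechanism satisfies eps-local-DP.
p_truth = np.exp(eps) / (1.0 + np.exp(eps))

def randomized_response(bit: int) -> int:
    """Release a single private bit under eps-local-DP."""
    return bit if rng.random() < p_truth else 1 - bit

# Estimate the population mean of private bits from the noisy reports.
bits = rng.integers(0, 2, size=20_000)
reports = np.array([randomized_response(int(b)) for b in bits])

# Debias: E[report] = mu * (2*p_truth - 1) + (1 - p_truth).
est = (reports.mean() - (1.0 - p_truth)) / (2.0 * p_truth - 1.0)
```

The debiased estimate converges to the true mean as the number of users grows, which is the accuracy side of the communication-privacy-accuracy tradeoff the paper analyzes.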
arXiv Detail & Related papers (2023-02-19T16:58:53Z) - Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since the involved data are often sensitive, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z) - Mixed Differential Privacy in Computer Vision [133.68363478737058]
AdaMix is an adaptive differentially private algorithm for training deep neural network classifiers using both private and public image data.
A few-shot or even zero-shot learning baseline that ignores private data can outperform fine-tuning on a large private dataset.
arXiv Detail & Related papers (2022-03-22T06:15:43Z) - Bounding, Concentrating, and Truncating: Unifying Privacy Loss
Composition for Data Analytics [2.614355818010333]
We provide strong privacy loss bounds when an analyst may select pure DP, bounded range (e.g. exponential mechanisms) or concentrated DP mechanisms in any order.
We also provide optimal privacy loss bounds that apply when an analyst can select pure DP and bounded range mechanisms in a batch.
arXiv Detail & Related papers (2020-04-15T17:33:10Z) - Privacy-preserving Traffic Flow Prediction: A Federated Learning
Approach [61.64006416975458]
We propose a privacy-preserving machine learning technique named Federated Learning-based Gated Recurrent Unit neural network algorithm (FedGRU) for traffic flow prediction.
FedGRU differs from current centralized learning methods and updates universal learning models through a secure parameter aggregation mechanism.
It is shown that FedGRU's prediction accuracy reaches 90.96% of that of the advanced deep learning models.
arXiv Detail & Related papers (2020-03-19T13:07:49Z)
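FedGRU's secure parameter aggregation mechanism is specified in the paper itself; as a generic, hypothetical illustration of the weighted parameter averaging that underlies federated learning methods of this kind, the client count, dataset sizes, and update shapes below are all made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical flattened model updates from 4 clients (e.g., recurrent-unit
# weights); values and shapes are illustrative only.
clients = [rng.normal(size=6) for _ in range(4)]
sizes = np.array([120, 80, 200, 100])  # assumed local dataset sizes

# FedAvg-style aggregation: weight each client's update by its data share.
weights = sizes / sizes.sum()
global_update = sum(w * u for w, u in zip(weights, clients))

# Sanity check: with equal weights this reduces to the plain mean.
equal_weight_update = sum(clients) / len(clients)
```

In a deployed system the server would see only the (securely aggregated) weighted sum, not the individual client updates, which is where the privacy protection enters.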
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.