Generalizing Differentially Private Decentralized Deep Learning with Multi-Agent Consensus
- URL: http://arxiv.org/abs/2306.13892v2
- Date: Thu, 24 Oct 2024 22:49:22 GMT
- Title: Generalizing Differentially Private Decentralized Deep Learning with Multi-Agent Consensus
- Authors: Jasmine Bayrooti, Zhan Gao, Amanda Prorok
- Abstract summary: We propose a framework that embeds differential privacy into decentralized deep learning and secures each agent's local dataset during and after cooperative training.
We prove convergence guarantees for algorithms derived from this framework and demonstrate its practical utility when applied to subgradient and ADMM decentralized approaches.
- Score: 11.414398732656839
- Abstract: Cooperative decentralized learning relies on direct information exchange between communicating agents, each with access to locally available datasets. The goal is to agree on model parameters that are optimal over all data. However, sharing parameters with untrustworthy neighbors can incur privacy risks by leaking exploitable information. To enable trustworthy cooperative learning, we propose a framework that embeds differential privacy into decentralized deep learning and secures each agent's local dataset during and after cooperative training. We prove convergence guarantees for algorithms derived from this framework and demonstrate its practical utility when applied to subgradient and ADMM decentralized approaches, finding accuracies approaching the centralized baseline while ensuring individual data samples are resilient to inference attacks. Furthermore, we study the relationships between accuracy, privacy budget, and networks' graph properties on collaborative classification tasks, discovering a useful invariance to the communication graph structure beyond a threshold.
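To make the framework concrete, here is a minimal sketch of one round of a noisy decentralized subgradient update in the spirit of the abstract: clip the local subgradient, perturb the resulting parameters with Gaussian noise before sharing, then average over neighbors. The function name, the row-stochastic mixing convention, and the fixed noise scale are illustrative assumptions, not the authors' implementation.
```python
import numpy as np

def dp_decentralized_subgradient_step(params, grads, mixing, step_size=0.1,
                                      clip_norm=1.0, noise_std=0.5):
    """One illustrative round of DP decentralized subgradient descent.

    params: (n_agents, dim) per-agent model parameters
    grads:  (n_agents, dim) per-agent local subgradients
    mixing: (n_agents, n_agents) row-stochastic matrix encoding the
            communication graph (mixing[i, j] > 0 iff j is a neighbor of i)
    """
    n_agents, dim = params.shape
    shared = np.empty_like(params)
    for i in range(n_agents):
        # Clip the local subgradient so the update's sensitivity is bounded.
        g = grads[i]
        g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        local = params[i] - step_size * g
        # Gaussian mechanism: add noise *before* sharing, so each agent's
        # dataset stays protected during and after cooperative training.
        shared[i] = local + np.random.normal(0.0, noise_std, size=dim)
    # Consensus step: every agent averages its neighbors' noisy parameters.
    return mixing @ shared
```
In a full implementation, noise_std would be calibrated to clip_norm and the target (epsilon, delta) budget via the Gaussian mechanism, and the mixing matrix would encode the communication graph whose structure the paper's invariance finding concerns.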
Related papers
- Personalized Federated Learning for Cross-view Geo-localization [49.40531019551957]
We propose a methodology combining Federated Learning (FL) with Cross-view Image Geo-localization (CVGL) techniques.
Our method implements a coarse-to-fine approach, where clients share only the coarse feature extractors while keeping fine-grained features specific to local environments.
Results demonstrate that our federated CVGL method achieves performance close to centralized training while maintaining data privacy.
arXiv Detail & Related papers (2024-11-07T13:25:52Z)
- Privacy Preserving Semi-Decentralized Mean Estimation over Intermittently-Connected Networks [59.43433767253956]
We consider the problem of privately estimating the mean of vectors distributed across different nodes of an unreliable wireless network.
In a semi-decentralized setup, nodes can collaborate with their neighbors to compute a local consensus, which they relay to a central server.
We study the tradeoff between collaborative relaying and privacy leakage due to the data sharing among nodes.
arXiv Detail & Related papers (2024-06-06T06:12:15Z)
- Collaborative Mean Estimation over Intermittently Connected Networks with Peer-To-Peer Privacy [86.61829236732744]
This work considers the problem of Distributed Mean Estimation (DME) over networks with intermittent connectivity.
The goal is to learn a global statistic over the data samples localized across distributed nodes with the help of a central server.
We study the tradeoff between collaborative relaying and privacy leakage due to the additional data sharing among nodes.
arXiv Detail & Related papers (2023-02-28T19:17:03Z)
- Efficient Node Selection in Private Personalized Decentralized Learning [3.7784910521656654]
We propose Private Personalized Decentralized Learning (PPDL) to protect node privacy.
PPDL combines secure aggregation and correlated adversarial multi-armed bandit optimization.
We show that PPDL surpasses previous non-private methods in model performance on standard benchmarks.
arXiv Detail & Related papers (2023-01-30T10:01:18Z)
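As a rough illustration of the secure-aggregation half of such a scheme (the bandit-based node selection is omitted), the sketch below uses pairwise additive masks that cancel in the sum. The single shared RNG stands in for the pairwise key agreement a real protocol would use, and all names here are ours.
```python
import numpy as np

def secure_aggregate(updates, seed=0):
    """Sum model updates without revealing any single update in transit.

    Each pair of clients (i, j), i < j, agrees on a shared mask; client i
    adds it and client j subtracts it, so every mask cancels in the sum.
    """
    n, dim = updates.shape
    rng = np.random.default_rng(seed)
    masked = updates.astype(float)
    for i in range(n):
        for j in range(i + 1, n):
            mask = rng.standard_normal(dim)  # shared secret of pair (i, j)
            masked[i] += mask
            masked[j] -= mask
    # Each masked[i] looks random on its own; the sum is exact.
    return masked.sum(axis=0)
```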
- Privacy-preserving Decentralized Federated Learning over Time-varying Communication Graph [5.649296652252663]
We propose the first privacy-preserving consensus-based algorithm for the distributed learners to achieve decentralized global model aggregation.
The paper establishes the correctness and privacy properties of the proposed algorithm.
arXiv Detail & Related papers (2022-10-01T17:17:22Z)
- Personalization Improves Privacy-Accuracy Tradeoffs in Federated Optimization [57.98426940386627]
We show that coordinating local learning with private centralized learning yields a generically useful and improved tradeoff between accuracy and privacy.
We illustrate our theoretical results with experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2022-02-10T20:44:44Z)
- Mitigating Leakage from Data Dependent Communications in Decentralized Computing using Differential Privacy [1.911678487931003]
We propose a general execution model to control the data-dependence of communications in user-side decentralized computations.
Our formal privacy guarantees leverage and extend recent results on privacy amplification by shuffling.
arXiv Detail & Related papers (2021-12-23T08:30:17Z)
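A toy sketch of the shuffle model that such amplification results build on: each user applies a local randomizer (binary randomized response here, purely for illustration) and a trusted shuffler permutes the reports before they reach the analyzer.
```python
import math
import random

def randomized_response(bit, epsilon=1.0):
    """Local randomizer: keep the true bit with probability e^eps / (e^eps + 1)."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_truth else 1 - bit

def shuffled_reports(bits, epsilon=1.0):
    """Shuffle model: permute locally randomized reports before release.

    The analyzer sees only an unordered multiset, so no report can be
    linked to its user; amplification-by-shuffling results turn the
    local epsilon into a much smaller central epsilon as n grows.
    """
    reports = [randomized_response(b, epsilon) for b in bits]
    random.shuffle(reports)  # destroys the user-to-report linkage
    return reports
```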
- Privacy-preserving Decentralized Aggregation for Federated Learning [3.9323226496740733]
Federated learning is a promising framework for learning over decentralized data spanning multiple regions.
We develop a privacy-preserving decentralized aggregation protocol for federated learning.
We evaluate our algorithm on image classification and next-word prediction applications over benchmark datasets with 9 and 15 distributed sites.
arXiv Detail & Related papers (2020-12-13T23:45:42Z)
- Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
Local exchange of estimates allows neighboring agents to infer information about the underlying private data.
Perturbations chosen independently at every agent protect privacy but incur a significant performance loss.
We propose an alternative scheme that constructs perturbations according to a particular nullspace condition, allowing them to remain invisible in the network's aggregate estimate.
arXiv Detail & Related papers (2020-10-23T10:35:35Z)
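As an illustration of noise designed to disappear from the aggregate, the sketch below projects i.i.d. Gaussian noise onto the zero-sum subspace. This is the simplest nullspace construction, not the paper's exact graph-homomorphic scheme, which ties the perturbations to the combination matrix.
```python
import numpy as np

def zero_sum_perturbations(n_agents, dim, scale=1.0, seed=0):
    """Correlated perturbations whose sum over agents is exactly zero.

    Removing the across-agent mean projects i.i.d. noise onto the
    nullspace of the all-ones vector: each agent's shared estimate is
    masked, yet the network average is left untouched.
    """
    rng = np.random.default_rng(seed)
    noise = scale * rng.standard_normal((n_agents, dim))
    return noise - noise.mean(axis=0, keepdims=True)  # columns sum to 0

# Agent i broadcasts estimate[i] + perturbations[i]; the average of the
# broadcast values equals the average of the true estimates exactly.
```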
- SPEED: Secure, PrivatE, and Efficient Deep learning [2.283665431721732]
We introduce a deep learning framework able to deal with strong privacy constraints.
Based on collaborative learning, differential privacy, and homomorphic encryption, the proposed approach advances the state of the art.
arXiv Detail & Related papers (2020-06-16T19:31:52Z)
- Privacy-preserving Traffic Flow Prediction: A Federated Learning Approach [61.64006416975458]
We propose a privacy-preserving machine learning technique named Federated Learning-based Gated Recurrent Unit neural network algorithm (FedGRU) for traffic flow prediction.
FedGRU differs from current centralized learning methods and updates universal learning models through a secure parameter aggregation mechanism.
It is shown that FedGRU's prediction accuracy is 90.96% higher than that of the advanced deep learning models.
arXiv Detail & Related papers (2020-03-19T13:07:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including any of the information) and is not responsible for any consequences arising from its use.