Voting-based Approaches For Differentially Private Federated Learning
- URL: http://arxiv.org/abs/2010.04851v2
- Date: Tue, 16 Feb 2021 00:34:52 GMT
- Title: Voting-based Approaches For Differentially Private Federated Learning
- Authors: Yuqing Zhu, Xiang Yu, Yi-Hsuan Tsai, Francesco Pittaluga, Masoud
Faraki, Manmohan Chandraker and Yu-Xiang Wang
- Abstract summary: This work is inspired by the knowledge-transfer approach to non-federated private learning from Papernot et al.
We design two new DPFL schemes that vote among the data labels returned from each local model, instead of averaging the gradients.
Our approaches significantly improve the privacy-utility trade-off over the state of the art in DPFL.
- Score: 87.2255217230752
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Differentially Private Federated Learning (DPFL) is an emerging field with
many applications. Gradient-averaging-based DPFL methods require costly
communication rounds and hardly work with large-capacity models, due to the
explicit dimension dependence in their added noise. In this work, inspired by
the knowledge-transfer approach to non-federated private learning of Papernot
et al. (2017; 2018), we design two new DPFL schemes that vote among the data
labels returned by each local model instead of averaging gradients, which
avoids the dimension dependence and significantly reduces the communication
cost. Theoretically, by applying secure multi-party computation, we can
exponentially amplify the (data-dependent) privacy guarantees when the margin
of the voting scores is large. Extensive experiments show that our approaches
significantly improve the privacy-utility trade-off over the state of the art
in DPFL.
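To make the voting idea concrete, here is a rough Python sketch of PATE-style noisy label aggregation: each local model votes for a class on an unlabeled query, Gaussian noise is added to the per-class counts, and only the noisy winner is released. This illustrates the general mechanism, not the authors' exact protocol; `local_models`, `sigma`, and the `.predict` interface are illustrative assumptions.

```python
import numpy as np

def noisy_label_vote(local_models, x, num_classes, sigma=1.0, rng=None):
    """Aggregate one unlabeled example's label by noisy voting (PATE-style sketch).

    local_models : objects exposing .predict(x) -> class index (assumed interface)
    x            : a single unlabeled query point
    sigma        : std. dev. of Gaussian noise added to each vote count
    """
    rng = rng or np.random.default_rng()
    votes = np.zeros(num_classes)
    for model in local_models:
        votes[model.predict(x)] += 1           # each client contributes one label vote
    noisy_votes = votes + rng.normal(0.0, sigma, size=num_classes)
    return int(np.argmax(noisy_votes))         # only the winning label is released
```

The released labels can then be used to train a global (student) model. In data-dependent analyses, a large margin between the top two vote counts yields a much tighter privacy bound, which is the margin effect the abstract refers to when amplifying guarantees via secure multi-party computation.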
Related papers
- Private and Communication-Efficient Federated Learning based on Differentially Private Sketches [0.4533408985664949]
Federated learning (FL) faces two primary challenges: the risk of privacy leakage and communication inefficiencies.
We propose DPSFL, a federated learning method that utilizes differentially private sketches.
We provide a theoretical analysis of privacy and convergence for the proposed method.
arXiv Detail & Related papers (2024-10-08T06:50:41Z) - DP$^2$-FedSAM: Enhancing Differentially Private Federated Learning Through Personalized Sharpness-Aware Minimization [8.022417295372492]
Federated learning (FL) is a distributed machine learning approach that allows multiple clients to collaboratively train a model without sharing their raw data.
To prevent sensitive information from being inferred through the model updates shared in FL, differentially private federated learning (DPFL) has been proposed.
DPFL ensures formal and rigorous privacy protection in FL by clipping the shared model updates and adding random noise to them (this standard building block is sketched in the code after this list).
We propose DP$^2$-FedSAM: Differentially Private and Personalized Federated Learning with Sharpness-Aware Minimization.
arXiv Detail & Related papers (2024-09-20T16:49:01Z) - Can Public Large Language Models Help Private Cross-device Federated Learning? [58.05449579773249]
We study (differentially) private federated learning (FL) of language models.
Public data has been used to improve privacy-utility trade-offs for both large and small language models.
We propose a novel distribution matching algorithm with theoretical grounding to sample public data close to private data distribution.
arXiv Detail & Related papers (2023-05-20T07:55:58Z) - FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations [53.268801169075836]
We propose FedLAP-DP, a novel privacy-preserving approach for federated learning.
A formal privacy analysis demonstrates that FedLAP-DP incurs the same privacy costs as typical gradient-sharing schemes.
Our approach presents a faster convergence speed compared to typical gradient-sharing methods.
arXiv Detail & Related papers (2023-02-02T12:56:46Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated
Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z) - Federated Learning with Local Differential Privacy: Trade-offs between
Privacy, Utility, and Communication [22.171647103023773]
Federated learning (FL) allows training on massive amounts of data privately, owing to its decentralized structure.
We consider Gaussian mechanisms to preserve local differential privacy (LDP) of user data in the FL model with SGD.
Our results guarantee a significantly larger utility and a smaller transmission rate as compared to existing privacy accounting methods.
arXiv Detail & Related papers (2021-02-09T10:04:18Z) - WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z) - Federated Learning with Sparsification-Amplified Privacy and Adaptive
Optimization [27.243322019117144]
Federated learning (FL) enables distributed agents to collaboratively learn a centralized model without sharing their raw data with each other.
We propose a new FL framework with sparsification-amplified privacy.
Our approach integrates random sparsification with gradient perturbation on each agent to amplify the privacy guarantee.
arXiv Detail & Related papers (2020-08-01T20:22:57Z) - D2P-Fed: Differentially Private Federated Learning With Efficient
Communication [78.57321932088182]
We propose a unified scheme to achieve both differential privacy (DP) and communication efficiency in federated learning (FL).
In particular, compared with the only prior work taking care of both aspects, D2P-Fed provides stronger privacy guarantee, better composability and smaller communication cost.
The results show that D2P-Fed outperforms the state of the art by 4.7% to 13.0% in model accuracy while saving one third of the communication cost.
arXiv Detail & Related papers (2020-06-22T06:46:11Z)
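Several entries above rely on the standard update-perturbation building block for DPFL: clip each client's model update in L2 norm, average, and add calibrated Gaussian noise (e.g., the DPFL description under DP$^2$-FedSAM, the Gaussian mechanisms for LDP, and gradient perturbation combined with sparsification). The sketch below, assuming flattened NumPy update vectors and illustrative `clip_norm` / `noise_multiplier` values, shows that block; it is not the specific algorithm of any one paper.

```python
import numpy as np

def dp_aggregate_updates(client_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip each client's update in L2, average, and add Gaussian noise.

    client_updates : list of 1-D numpy arrays (flattened model deltas)
    Noise std is noise_multiplier * clip_norm / n, proportional to one
    client's influence on the clipped average.
    """
    rng = rng or np.random.default_rng()
    n = len(client_updates)
    clipped = [u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12))  # L2 clipping
               for u in client_updates]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / n, size=avg.shape)
    return avg + noise

# Because noise is added to every coordinate of the update, the total perturbation
# grows with the model dimension, which is the dependence the voting-based schemes avoid.
```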