Voting-based Approaches For Differentially Private Federated Learning
- URL: http://arxiv.org/abs/2010.04851v2
- Date: Tue, 16 Feb 2021 00:34:52 GMT
- Title: Voting-based Approaches For Differentially Private Federated Learning
- Authors: Yuqing Zhu, Xiang Yu, Yi-Hsuan Tsai, Francesco Pittaluga, Masoud
Faraki, Manmohan Chandraker and Yu-Xiang Wang
- Abstract summary: This work is inspired by knowledge transfer in non-federated private learning from Papernot et al.
We design two new DPFL schemes, by voting among the data labels returned from each local model, instead of averaging the gradients.
Our approaches significantly improve the privacy-utility trade-off over the state of the art in DPFL.
- Score: 87.2255217230752
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Differentially Private Federated Learning (DPFL) is an emerging field with
many applications. Gradient averaging based DPFL methods require costly
communication rounds and hardly work with large-capacity models, due to the
explicit dimension dependence in their added noise. In this work, inspired by
knowledge transfer in non-federated private learning from Papernot et al. (2017;
2018), we design two new DPFL schemes that vote among the data labels returned
by each local model, instead of averaging the gradients, which avoids the
dimension dependence and significantly reduces the communication cost.
Theoretically, by applying secure multi-party computation, we can
exponentially amplify the (data-dependent) privacy guarantees when the margin
of the voting scores is large. Extensive experiments show that our approaches
significantly improve the privacy-utility trade-off over the state of the art
in DPFL.
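The core idea of voting among data labels can be sketched as a noisy plurality vote, in the spirit of PATE-style aggregation. The sketch below is an illustrative simplification under stated assumptions (ten parties, Laplace noise on the vote histogram); the function name, noise scale, and vote counts are hypothetical, not the paper's exact mechanism or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_label_vote(local_predictions, num_classes, noise_scale=1.0):
    """Aggregate label predictions from local models by a noisy plurality vote.

    local_predictions: array of shape (num_parties,), each party's predicted
    class for one unlabeled example. Laplace noise added to the vote histogram
    gives a differential-privacy guarantee with respect to any single party's
    data; only the winning label is released, never gradients or raw data.
    """
    counts = np.bincount(local_predictions, minlength=num_classes).astype(float)
    counts += rng.laplace(scale=noise_scale, size=num_classes)
    return int(np.argmax(counts))

# Ten simulated parties vote on a 3-class example; 8 of 10 agree on class 2,
# so the vote margin is large and the noisy argmax almost surely returns 2.
votes = np.array([2, 2, 2, 2, 2, 2, 2, 2, 0, 1])
label = noisy_label_vote(votes, num_classes=3, noise_scale=0.5)
```

Note how the large vote margin is what makes the noisy answer reliable; this is the same margin that, per the abstract, drives the amplified data-dependent privacy guarantee.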
Related papers
- Privacy-preserving gradient-based fair federated learning [0.0]
Federated learning (FL) schemes allow multiple participants to collaboratively train neural networks without the need to share the underlying data.
In our paper, we build upon seminal works and present a novel, fair and privacy-preserving FL scheme.
arXiv Detail & Related papers (2024-07-18T19:56:39Z) - Can Public Large Language Models Help Private Cross-device Federated Learning? [58.05449579773249]
We study (differentially) private federated learning (FL) of language models.
Public data has been used to improve privacy-utility trade-offs for both large and small language models.
We propose a novel distribution matching algorithm with theoretical grounding to sample public data close to private data distribution.
arXiv Detail & Related papers (2023-05-20T07:55:58Z) - FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations [53.268801169075836]
FedLAP-DP is a novel privacy-preserving approach for federated learning.
A formal privacy analysis demonstrates that FedLAP-DP incurs the same privacy costs as typical gradient-sharing schemes.
Our approach presents a faster convergence speed compared to typical gradient-sharing methods.
arXiv Detail & Related papers (2023-02-02T12:56:46Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated
Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices)
arXiv Detail & Related papers (2021-11-28T19:03:39Z) - Federated Learning with Local Differential Privacy: Trade-offs between
Privacy, Utility, and Communication [22.171647103023773]
Federated learning (FL) allows training on massive amounts of data privately thanks to its decentralized structure.
We consider Gaussian mechanisms to preserve local differential privacy (LDP) of user data in the FL model with SGD.
Our results guarantee a significantly larger utility and a smaller transmission rate as compared to existing privacy accounting methods.
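A Gaussian mechanism for local differential privacy in FL-SGD is typically applied per device: clip the local gradient to a bounded L2 norm, then add Gaussian noise before it leaves the device. The sketch below is a generic illustration under assumed parameters (the function name, clip norm, and noise multiplier are hypothetical), not the exact accounting of this paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def ldp_gaussian_report(gradient, clip_norm=1.0, sigma=0.8):
    """Clip a local gradient to L2 norm <= clip_norm, then add Gaussian noise
    scaled to that sensitivity before it leaves the device (LDP-style report)."""
    norm = np.linalg.norm(gradient)
    clipped = gradient * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(scale=sigma * clip_norm, size=gradient.shape)

# The server only ever sees noisy reports; averaging over many users shrinks
# the effective noise in the aggregated update.
grads = [rng.normal(size=10) for _ in range(100)]
reports = [ldp_gaussian_report(g) for g in grads]
avg = np.mean(reports, axis=0)
```

Clipping bounds each user's sensitivity, which is what lets the Gaussian noise scale be calibrated independently of the raw gradient magnitude.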
arXiv Detail & Related papers (2021-02-09T10:04:18Z) - WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z) - Federated Learning with Sparsification-Amplified Privacy and Adaptive
Optimization [27.243322019117144]
Federated learning (FL) enables distributed agents to collaboratively learn a centralized model without sharing their raw data with each other.
We propose a new FL framework with sparsification-amplified privacy.
Our approach integrates random sparsification with gradient perturbation on each agent to amplify privacy guarantee.
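The combination of random sparsification and gradient perturbation can be sketched as: randomly keep a small fraction of coordinates, clip, and add Gaussian noise only on the kept coordinates. This is an illustrative sketch with hypothetical names and parameters, not the framework's exact algorithm or privacy analysis.

```python
import numpy as np

rng = np.random.default_rng(7)

def sparsified_private_update(gradient, keep_fraction=0.1, clip_norm=1.0, sigma=1.0):
    """Randomly keep a fraction of coordinates, clip the result, and perturb
    the kept entries with Gaussian noise. Dropping coordinates cuts the bytes
    sent per round and is the lever used to amplify the privacy guarantee."""
    d = gradient.size
    k = max(1, int(keep_fraction * d))
    kept = rng.choice(d, size=k, replace=False)
    sparse = np.zeros_like(gradient)
    sparse[kept] = gradient[kept]
    norm = np.linalg.norm(sparse)
    sparse *= min(1.0, clip_norm / max(norm, 1e-12))
    sparse[kept] += rng.normal(scale=sigma * clip_norm, size=k)
    return sparse, kept

# A 1000-dim gradient is reduced to 100 noisy coordinates before upload.
update, kept = sparsified_private_update(rng.normal(size=1000))
```

Only the kept indices and their noisy values need to be transmitted, which is where the communication saving comes from.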
arXiv Detail & Related papers (2020-08-01T20:22:57Z) - LDP-FL: Practical Private Aggregation in Federated Learning with Local
Differential Privacy [20.95527613004989]
Federated learning is a popular approach for privacy protection that collects local gradient information instead of real data.
Previous works do not give a practical solution due to three issues.
In particular, the privacy budget explodes due to the high dimensionality of the weights in deep learning models.
arXiv Detail & Related papers (2020-07-31T01:08:57Z) - D2P-Fed: Differentially Private Federated Learning With Efficient
Communication [78.57321932088182]
We propose a unified scheme to achieve both differential privacy (DP) and communication efficiency in federated learning (FL)
In particular, compared with the only prior work taking care of both aspects, D2P-Fed provides stronger privacy guarantee, better composability and smaller communication cost.
The results show that D2P-Fed outperforms the state of the art by 4.7% to 13.0% in terms of model accuracy while saving one third of the communication cost.
arXiv Detail & Related papers (2020-06-22T06:46:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.