FedSel: Federated SGD under Local Differential Privacy with Top-k
Dimension Selection
- URL: http://arxiv.org/abs/2003.10637v1
- Date: Tue, 24 Mar 2020 03:31:21 GMT
- Title: FedSel: Federated SGD under Local Differential Privacy with Top-k
Dimension Selection
- Authors: Ruixuan Liu, Yang Cao, Masatoshi Yoshikawa, Hong Chen
- Abstract summary: In this work, we propose a two-stage framework FedSel for federated SGD under LDP.
Specifically, we propose three private dimension selection mechanisms and adapt the gradient accumulation technique to stabilize the learning process with noisy updates.
We also theoretically analyze privacy, accuracy and time complexity of FedSel, which outperforms the state-of-the-art solutions.
- Score: 26.54574385850849
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As massive amounts of data are produced by small gadgets, federated learning on mobile
devices has become an emerging trend. In the federated setting, Stochastic
Gradient Descent (SGD) has been widely used in federated learning for various
machine learning models. To prevent privacy leakages from gradients that are
calculated on users' sensitive data, local differential privacy (LDP) has been
considered as a privacy guarantee in federated SGD recently. However, the
existing solutions have a dimension dependency problem: the injected noise is
substantially proportional to the dimension $d$. In this work, we propose a
two-stage framework FedSel for federated SGD under LDP to relieve this problem.
Our key idea is that not all dimensions are equally important so that we
privately select Top-k dimensions according to their contributions in each
iteration of federated SGD. Specifically, we propose three private dimension
selection mechanisms and adapt the gradient accumulation technique to stabilize
the learning process with noisy updates. We also theoretically analyze privacy,
accuracy and time complexity of FedSel, which outperforms the state-of-the-art
solutions. Experiments on real-world and synthetic datasets verify the
effectiveness and efficiency of our framework.
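The two-stage idea above can be sketched in code. This is a minimal, hypothetical illustration only: it uses simple Laplace-noise stand-ins for both stages, not the paper's three actual selection mechanisms or its accumulation technique, and all function and parameter names are assumptions.

```python
import numpy as np

def ldp_topk_sgd_update(grad, k, eps_select, eps_value, clip=1.0, rng=None):
    """Hypothetical two-stage LDP update (sketch, not FedSel's exact method):
    stage 1 privately selects k important dimensions, stage 2 perturbs
    only those k values, so noise does not scale with the full dimension d."""
    rng = rng or np.random.default_rng()
    d = grad.shape[0]
    # Stage 1: noisy selection -- perturb gradient magnitudes with Laplace
    # noise and keep the k largest (a simple stand-in for private Top-k).
    noisy_mag = np.abs(grad) + rng.laplace(0.0, clip / eps_select, size=d)
    selected = np.argsort(noisy_mag)[-k:]
    # Stage 2: clip the selected coordinates and release them with Laplace
    # noise; all unselected coordinates are reported as zero.
    sparse = np.zeros(d)
    clipped = np.clip(grad[selected], -clip, clip)
    sparse[selected] = clipped + rng.laplace(0.0, 2 * clip / eps_value, size=k)
    return sparse
```

The key property of this sketch is that the reported update is k-sparse, so the value-perturbation noise is injected into k coordinates rather than all d, which is the dimension-dependency relief the abstract describes.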
Related papers
- DMM: Distributed Matrix Mechanism for Differentially-Private Federated Learning using Packed Secret Sharing [51.336015600778396]
Federated Learning (FL) has gained lots of traction recently, both in industry and academia.
In FL, a machine learning model is trained using data from various end-users arranged in committees across several rounds.
Since such data can often be sensitive, a primary challenge in FL is providing privacy while still retaining utility of the model.
arXiv Detail & Related papers (2024-10-21T16:25:14Z)
- DP-DyLoRA: Fine-Tuning Transformer-Based Models On-Device under Differentially Private Federated Learning using Dynamic Low-Rank Adaptation [15.023077875990614]
Federated learning (FL) allows clients to collaboratively train a global model without sharing their local data with a server.
Differential privacy (DP) addresses such leakage by providing formal privacy guarantees, with mechanisms that add randomness to the clients' contributions.
We propose an adaptation method that can be combined with differential privacy and call it DP-DyLoRA.
arXiv Detail & Related papers (2024-05-10T10:10:37Z)
- Privacy Amplification for the Gaussian Mechanism via Bounded Support [64.86780616066575]
Data-dependent privacy accounting frameworks such as per-instance differential privacy (pDP) and Fisher information loss (FIL) confer fine-grained privacy guarantees for individuals in a fixed training dataset.
We propose simple modifications of the Gaussian mechanism with bounded support, showing that they amplify privacy guarantees under data-dependent accounting.
arXiv Detail & Related papers (2024-03-07T21:22:07Z)
- Clients Collaborate: Flexible Differentially Private Federated Learning with Guaranteed Improvement of Utility-Privacy Trade-off [34.2117116062642]
We introduce a novel federated learning framework with rigorous privacy guarantees, named FedCEO, to strike a trade-off between model utility and user privacy.
We show that our FedCEO can effectively recover the disrupted semantic information by smoothing the global semantic space.
It achieves significant performance improvements and strict privacy guarantees under different privacy settings.
arXiv Detail & Related papers (2024-02-10T17:39:34Z)
- Privacy-Preserving Federated Learning over Vertically and Horizontally Partitioned Data for Financial Anomaly Detection [11.167661320589488]
In real-world financial anomaly detection scenarios, the data is partitioned both vertically and horizontally.
Our solution combines fully homomorphic encryption (HE), secure multi-party computation (SMPC), and differential privacy (DP).
Our solution won second prize in the first phase of the U.S. Privacy Enhancing Technologies (PETs) Prize Challenge.
arXiv Detail & Related papers (2023-10-30T06:51:33Z)
- FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations [53.268801169075836]
We propose FedLAP-DP, a novel privacy-preserving approach for federated learning.
A formal privacy analysis demonstrates that FedLAP-DP incurs the same privacy costs as typical gradient-sharing schemes.
Our approach presents a faster convergence speed compared to typical gradient-sharing methods.
arXiv Detail & Related papers (2023-02-02T12:56:46Z)
- Privacy-Preserving Federated Learning via System Immersion and Random Matrix Encryption [4.258856853258348]
Federated learning (FL) has emerged as a privacy solution for collaborative distributed learning where clients train AI models directly on their devices instead of sharing their data with a centralized (potentially adversarial) server.
We propose a Privacy-Preserving Federated Learning (PPFL) framework built on the synergy of matrix encryption and system immersion tools from control theory.
We show that our algorithm provides the same level of accuracy and convergence rate as the standard FL with a negligible cost while revealing no information about clients' data.
arXiv Detail & Related papers (2022-04-05T21:28:59Z)
- Do Not Let Privacy Overbill Utility: Gradient Embedding Perturbation for Private Learning [74.73901662374921]
A differentially private model degrades the utility drastically when the model comprises a large number of trainable parameters.
We propose an algorithm, Gradient Embedding Perturbation (GEP), for training differentially private deep models with decent accuracy.
arXiv Detail & Related papers (2021-02-25T04:29:58Z)
- Voting-based Approaches For Differentially Private Federated Learning [87.2255217230752]
This work is inspired by the knowledge-transfer approach to non-federated private learning of Papernot et al.
We design two new DPFL schemes, by voting among the data labels returned from each local model, instead of averaging the gradients.
Our approaches significantly improve the privacy-utility trade-off over the state-of-the-arts in DPFL.
arXiv Detail & Related papers (2020-10-09T23:55:19Z)
- LDP-FL: Practical Private Aggregation in Federated Learning with Local Differential Privacy [20.95527613004989]
Federated learning is a popular approach for privacy protection that collects the local gradient information instead of real data.
Previous works do not give a practical solution due to three issues; the last is that the privacy budget explodes due to the high dimensionality of weights in deep learning models.
arXiv Detail & Related papers (2020-07-31T01:08:57Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.