PRECAD: Privacy-Preserving and Robust Federated Learning via
Crypto-Aided Differential Privacy
- URL: http://arxiv.org/abs/2110.11578v1
- Date: Fri, 22 Oct 2021 04:08:42 GMT
- Title: PRECAD: Privacy-Preserving and Robust Federated Learning via
Crypto-Aided Differential Privacy
- Authors: Xiaolan Gu, Ming Li, Li Xiong
- Abstract summary: Federated Learning (FL) allows multiple participating clients to train machine learning models collaboratively by keeping their datasets local and only exchanging model updates.
Existing FL protocol designs have been shown to be vulnerable to attacks that aim to compromise data privacy and/or model robustness.
We develop a framework called PRECAD, which simultaneously achieves differential privacy (DP) and enhances robustness against model poisoning attacks with the help of cryptography.
- Score: 14.678119872268198
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) allows multiple participating clients to train
machine learning models collaboratively by keeping their datasets local and
only exchanging model updates. Existing FL protocol designs have been shown to
be vulnerable to attacks that aim to compromise data privacy and/or model
robustness. Recently proposed defenses have focused on ensuring either privacy or
robustness, but not both. In this paper, we develop a framework called PRECAD,
which simultaneously achieves differential privacy (DP) and enhances robustness
against model poisoning attacks with the help of cryptography. Using secure
multi-party computation (MPC) techniques (e.g., secret sharing), noise is added
to the model updates by the honest-but-curious server(s) (instead of each
client) without revealing clients' inputs, which achieves the benefit of
centralized DP in terms of providing a better privacy-utility tradeoff than
local-DP-based solutions. Meanwhile, a crypto-aided secure validation protocol
verifies that each client's model-update contribution is bounded, without
leaking the update itself. We show analytically that the noise added to ensure
DP also provides enhanced robustness against malicious model submissions. We
experimentally demonstrate that PRECAD achieves a better privacy-utility
tradeoff and improves the robustness of the trained models.
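To make the mechanism concrete, below is a minimal Python sketch of the crypto-aided idea the abstract describes. It is a simplification under stated assumptions (floating-point additive shares instead of a finite-ring MPC protocol, a plain local norm clip standing in for the crypto-aided validation, and arbitrary parameter values), not the authors' implementation.

```python
# PRECAD-style crypto-aided DP, heavily simplified for illustration.
import numpy as np

rng = np.random.default_rng(0)
d, n_clients = 10, 5         # model dimension, number of clients (arbitrary)
max_norm, sigma = 1.0, 0.8   # clipping bound, DP noise scale (arbitrary)

def clip(update, bound):
    """Bound a client's contribution by scaling it to L2 norm <= bound."""
    return update * min(1.0, bound / max(np.linalg.norm(update), 1e-12))

def secret_share(update):
    """Additive 2-out-of-2 sharing; a real protocol would use uniform
    shares over a finite ring so each share alone reveals nothing."""
    mask = rng.normal(scale=100.0, size=update.shape)
    return update - mask, mask  # share_0 + share_1 reconstructs the update

# Client side: clip each update, then split it between the two servers.
updates = [rng.normal(size=d) for _ in range(n_clients)]
shares_0, shares_1 = zip(*(secret_share(clip(u, max_norm)) for u in updates))

# Server side: each honest-but-curious server sums the shares it holds and
# adds half of the Gaussian noise, so the reconstructed sum carries
# N(0, sigma^2) noise per coordinate, as in centralized DP, even though no
# single server ever saw a raw client update.
agg_0 = sum(shares_0) + rng.normal(scale=sigma / np.sqrt(2), size=d)
agg_1 = sum(shares_1) + rng.normal(scale=sigma / np.sqrt(2), size=d)
noisy_aggregate = agg_0 + agg_1  # the jointly released model update
```

The clipping step is where the robustness claim enters: since every accepted contribution has a bounded norm and the aggregate already carries DP noise, a malicious client's influence on the released model is limited.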
Related papers
- Camel: Communication-Efficient and Maliciously Secure Federated Learning in the Shuffle Model of Differential Privacy [9.100955087185811]
Federated learning (FL) has rapidly become a compelling paradigm that enables multiple clients to jointly train a model by sharing only gradient updates for aggregation.
To protect the gradient updates, which can themselves be privacy-sensitive, a line of work has studied local differential privacy (LDP) mechanisms.
We present Camel, a new communication-efficient and maliciously secure FL framework in the shuffle model of DP.
arXiv Detail & Related papers (2024-10-04T13:13:44Z)
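As a toy illustration of the shuffle model of DP that Camel builds on (not Camel's actual protocol; all parameters below are assumptions), each client randomizes its gradient locally and a shuffler forwards the reports in random order, which breaks the link between clients and reports:

```python
# Shuffle-model DP, heavily simplified for illustration.
import random
import numpy as np

rng = np.random.default_rng(1)

def local_randomize(grad, clip_norm=1.0, sigma=1.0):
    """Client-side randomizer: clip to L2 norm <= clip_norm, add noise."""
    grad = grad * min(1.0, clip_norm / max(np.linalg.norm(grad), 1e-12))
    return grad + rng.normal(scale=sigma, size=grad.shape)

reports = [local_randomize(rng.normal(size=4)) for _ in range(8)]
random.shuffle(reports)                  # the shuffler only permutes reports
aggregate = sum(reports) / len(reports)  # server never sees who sent what
```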
- PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can leak sensitive user information, and the lack of central control over local training leaves the global model susceptible to malicious manipulation of model updates.
We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
arXiv Detail & Related papers (2024-07-12T03:18:08Z)
- Privacy-Preserving, Dropout-Resilient Aggregation in Decentralized Learning [3.9166000694570076]
Decentralized learning (DL) offers a novel paradigm in machine learning by distributing training across clients without central aggregation.
DL's peer-to-peer model raises challenges in protecting against inference attacks and privacy leaks.
This work proposes three secret-sharing-based, dropout-resilient approaches for privacy-preserving DL.
arXiv Detail & Related papers (2024-04-27T19:17:02Z)
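The dropout resilience that secret sharing provides can be seen in a toy Shamir t-of-n example (illustrative parameters only; the paper's three approaches are not reproduced here): any t surviving shares reconstruct the secret, so up to n - t clients may drop out.

```python
# Toy Shamir t-of-n secret sharing over a prime field.
import random

P = 2**31 - 1  # prime modulus (arbitrary choice)

def share(secret, t, n):
    """Split `secret` into n shares with reconstruction threshold t."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t shares."""
    total = 0
    for xj, yj in shares:
        num = den = 1
        for xm, _ in shares:
            if xm != xj:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, P - 2, P)) % P
    return total

shares = share(secret=42, t=3, n=5)
survivors = random.sample(shares, 3)  # two of five clients dropped out
assert reconstruct(survivors) == 42
```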
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- Secure Aggregation is Not Private Against Membership Inference Attacks [66.59892736942953]
We investigate the privacy implications of SecAgg in federated learning.
We show that SecAgg offers weak privacy against membership inference attacks even in a single training round.
Our findings underscore the imperative for additional privacy-enhancing mechanisms, such as noise injection.
arXiv Detail & Related papers (2024-03-26T15:07:58Z)
- Mitigating Cross-client GANs-based Attack in Federated Learning [78.06700142712353]
Multiple distributed multimedia clients can use federated learning (FL) to jointly learn a globally shared model.
However, FL is vulnerable to the cross-client generative adversarial networks (GANs)-based attack (C-GANs).
We propose the Fed-EDKD technique, which improves current popular FL schemes to resist the C-GANs attack.
arXiv Detail & Related papers (2023-07-25T08:15:55Z)
- Blockchain-based Optimized Client Selection and Privacy Preserved Framework for Federated Learning [2.4201849657206496]
Federated learning is a distributed mechanism that trains large-scale neural network models with the participation of multiple clients.
With this feature, federated learning is considered a secure solution for data privacy issues.
We propose a blockchain-based optimized client selection and privacy-preserved framework.
arXiv Detail & Related papers (2023-07-25T01:35:51Z)
- DP-BREM: Differentially-Private and Byzantine-Robust Federated Learning with Client Momentum [11.68347496182345]
Federated Learning (FL) allows multiple participating clients to train machine learning models collaboratively.
Existing FL protocols are vulnerable to attacks that aim to compromise data privacy and/or model robustness.
We focus on simultaneously achieving differential privacy (DP) and Byzantine robustness for cross-silo FL.
arXiv Detail & Related papers (2023-06-22T00:11:53Z)
- Just Fine-tune Twice: Selective Differential Privacy for Large Language Models [69.66654761324702]
We propose a simple yet effective just-fine-tune-twice privacy mechanism to achieve selective differential privacy (SDP) for large Transformer-based language models.
Experiments show that our models achieve strong performance while staying robust to the canary insertion attack.
arXiv Detail & Related papers (2022-04-15T22:36:55Z)
- Collusion Resistant Federated Learning with Oblivious Distributed Differential Privacy [4.951247283741297]
Privacy-preserving federated learning enables a population of distributed clients to jointly learn a shared model.
We present an efficient mechanism based on oblivious distributed differential privacy that is the first to protect against such client collusion.
We conclude with empirical analysis of the protocol's execution speed, learning accuracy, and privacy performance on two data sets.
arXiv Detail & Related papers (2022-02-20T19:52:53Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
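The secure aggregation that RoFL (and SecAgg above) builds on can be illustrated with a toy mask-cancellation sketch. This is an assumption-laden simplification: masks are generated in the clear rather than derived via pairwise key agreement, and RoFL's attestation and norm-bound enforcement are omitted.

```python
# Mask-based secure aggregation, heavily simplified for illustration.
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, d = 4, 6
updates = [rng.normal(size=d) for _ in range(n)]

# Pairwise masks r_ij; in practice each pair derives them from a shared key.
masks = {(i, j): rng.normal(size=d)
         for i, j in itertools.combinations(range(n), 2)}

def masked_update(i):
    """Client i sends its update plus +r_ij for j > i and -r_ji for j < i."""
    m = updates[i].copy()
    for j in range(n):
        if j > i:
            m += masks[(i, j)]
        elif j < i:
            m -= masks[(j, i)]
    return m

server_sum = sum(masked_update(i) for i in range(n))
assert np.allclose(server_sum, sum(updates))  # masks cancel; only the sum leaks
```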
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.