Reconstructing Individual Data Points in Federated Learning Hardened
with Differential Privacy and Secure Aggregation
- URL: http://arxiv.org/abs/2301.04017v2
- Date: Wed, 12 Apr 2023 21:21:03 GMT
- Authors: Franziska Boenisch, Adam Dziedzic, Roei Schuster, Ali Shahin
Shamsabadi, Ilia Shumailov, Nicolas Papernot
- Abstract summary: Federated learning (FL) is a framework for users to jointly train a machine learning model.
We propose an attack against FL protected with distributed differential privacy (DDP) and secure aggregation (SA).
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is a framework for users to jointly train a machine
learning model. FL is promoted as a privacy-enhancing technology (PET) that
provides data minimization: data never "leaves" personal devices and users
share only model updates with a server (e.g., a company) coordinating the
distributed training. While prior work showed that in vanilla FL a malicious
server can extract users' private data from the model updates, in this work we
take it further and demonstrate that a malicious server can reconstruct user
data even in hardened versions of the protocol. More precisely, we propose an
attack against FL protected with distributed differential privacy (DDP) and
secure aggregation (SA). Our attack method is based on the introduction of
sybil devices that deviate from the protocol to expose individual users' data
for reconstruction by the server. The root cause of the vulnerability to our
attack is a power imbalance: the server orchestrates the whole protocol, and
users are given few guarantees about the selection of the other users
participating in it. Moving forward, we discuss
requirements for privacy guarantees in FL. We conclude that users should only
participate in the protocol when they trust the server or when they apply local
primitives such as local DP, shifting power away from the server. Yet, the
latter approaches incur significant overhead in the form of performance
degradation of the trained model, making them less likely to be deployed in
practice.
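The core idea of the attack can be illustrated with a toy simulation. In pairwise-masked secure aggregation, every pair of clients shares a random mask that one adds and the other subtracts, so the masks cancel in the server's sum and only the aggregate is revealed. If, however, the server populates the round with sybil clients whose contributions it controls (here, zeros), the "private" aggregate collapses to the lone honest victim's individual update. This is a minimal sketch under simplifying assumptions, not the paper's exact protocol; names such as `sybil_updates` and `pairwise_masks` are illustrative, and the DDP noise shares (which colluding sybils can likewise withhold) are omitted.

```python
# Toy sketch (illustrative only): additive pairwise-masked secure
# aggregation, with sybil clients controlled by the server.
import random

MOD = 2 ** 32  # arithmetic modulo 2^32, as in additive masking schemes


def pairwise_masks(client_ids, seed=0):
    """For each pair (i, j) with i < j, draw a shared random mask;
    client i adds it and client j subtracts it, so all masks cancel
    when the server sums the submissions."""
    rng = random.Random(seed)
    ids = sorted(client_ids)
    masks = {i: 0 for i in ids}
    for a in range(len(ids)):
        for b in range(a + 1, len(ids)):
            m = rng.randrange(MOD)
            masks[ids[a]] = (masks[ids[a]] + m) % MOD
            masks[ids[b]] = (masks[ids[b]] - m) % MOD
    return masks


# One honest victim plus sybils that deviate from the protocol by
# contributing updates known to the server (zeros, for simplicity).
victim_update = 1234
sybil_updates = {1: 0, 2: 0, 3: 0}
clients = {0: victim_update, **sybil_updates}

masks = pairwise_masks(clients.keys())
submissions = {i: (u + masks[i]) % MOD for i, u in clients.items()}

# The server sees only masked submissions; the masks cancel in the sum.
aggregate = sum(submissions.values()) % MOD

# Because every other contribution is a known sybil value, the
# aggregate reveals the victim's individual update exactly.
recovered = (aggregate - sum(sybil_updates.values())) % MOD
assert recovered == victim_update
```

Secure aggregation guarantees only that the server learns nothing *beyond the sum*; when the server also chooses who is in the sum, that guarantee says nothing about an individual whose co-participants are all colluders.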
Related papers
- Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks [48.70867241987739]
InferGuard is a novel Byzantine-robust aggregation rule aimed at defending against client-side training data distribution inference attacks.
Our experimental results indicate that the defense mechanism is highly effective against such attacks.
arXiv Detail & Related papers (2024-03-05T17:41:35Z)
- Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters of distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z)
- SaFL: Sybil-aware Federated Learning with Application to Face Recognition [13.914187113334222]
Federated Learning (FL) is a machine learning paradigm for conducting collaborative learning among clients on a joint model.
On the downside, FL raises security and privacy concerns that have only recently begun to be studied.
This paper proposes SaFL, a new defense method against poisoning attacks in FL.
arXiv Detail & Related papers (2023-11-07T21:06:06Z)
- Robust and Actively Secure Serverless Collaborative Learning [48.01929996757643]
Collaborative machine learning (ML) is widely used to enable institutions to learn better models from distributed data.
While collaborative approaches to learning intuitively protect user data, they remain vulnerable to the server, the clients, or both.
We propose a peer-to-peer (P2P) learning scheme that is secure against malicious servers and robust to malicious clients.
arXiv Detail & Related papers (2023-10-25T14:43:03Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy, but it is vulnerable to model poisoning attacks, in which malicious clients interfere with the training process.
We propose FedDefender, a new client-side defense mechanism that helps benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- Efficient and Privacy Preserving Group Signature for Federated Learning [2.121963121603413]
Federated Learning (FL) is a Machine Learning (ML) technique that aims to reduce threats to user data privacy.
This paper proposes an efficient, privacy-preserving protocol for FL based on group signatures.
arXiv Detail & Related papers (2022-07-12T04:12:10Z)
- Collusion Resistant Federated Learning with Oblivious Distributed Differential Privacy [4.951247283741297]
Privacy-preserving federated learning enables a population of distributed clients to jointly learn a shared model.
We present an efficient mechanism based on oblivious distributed differential privacy that is the first to protect against such client collusion.
We conclude with an empirical analysis of the protocol's execution speed, learning accuracy, and privacy performance on two datasets.
arXiv Detail & Related papers (2022-02-20T19:52:53Z)
- Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models [58.631918656336005]
We propose a novel attack that reveals private user text by deploying malicious parameter vectors.
Unlike previous attacks on FL, the attack exploits characteristics of both the Transformer architecture and the token embeddings.
arXiv Detail & Related papers (2022-01-29T22:38:21Z)
- Eluding Secure Aggregation in Federated Learning via Model Inconsistency [2.647302105102753]
Federated learning allows a set of users to train a deep neural network over their private training datasets.
We show that a malicious server can easily elude secure aggregation as if it were not in place.
We devise two different attacks capable of inferring information about individual private training datasets.
arXiv Detail & Related papers (2021-11-14T16:09:11Z)
- Achieving Security and Privacy in Federated Learning Systems: Survey, Research Challenges and Future Directions [6.460846767084875]
Federated learning (FL) allows a server to learn a machine learning (ML) model across multiple decentralized clients.
In this paper, we first examine security and privacy attacks on FL and critically survey solutions proposed in the literature to mitigate each attack.
arXiv Detail & Related papers (2020-12-12T13:23:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.