Federated Learning in Adversarial Settings
- URL: http://arxiv.org/abs/2010.07808v1
- Date: Thu, 15 Oct 2020 14:57:02 GMT
- Title: Federated Learning in Adversarial Settings
- Authors: Raouf Kerkouche, Gergely Ács and Claude Castelluccia
- Abstract summary: The proposed federated learning scheme provides different trade-offs between robustness, privacy, bandwidth efficiency, and model accuracy.
We show that its differentially private extension performs as efficiently as the non-private but robust scheme, even under stringent privacy requirements.
This suggests a possible fundamental trade-off between Differential Privacy and robustness.
- Score: 0.8701566919381224
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning enables entities to collaboratively learn a shared
prediction model while keeping their training data locally. It prevents data
collection and aggregation and, therefore, mitigates the associated privacy
risks. However, it still remains vulnerable to various security attacks where
malicious participants aim at degrading the generated model, inserting
backdoors, or inferring other participants' training data. This paper presents
a new federated learning scheme that provides different trade-offs between
robustness, privacy, bandwidth efficiency, and model accuracy. Our scheme uses
biased quantization of model updates and hence is bandwidth efficient. It is
also robust against state-of-the-art backdoor as well as model degradation
attacks even when a large proportion of the participant nodes are malicious. We
propose a practical differentially private extension of this scheme which
protects the whole dataset of participating entities. We show that this
extension performs as efficiently as the non-private but robust scheme even
under stringent privacy requirements, but is less robust against model
degradation and backdoor attacks. This suggests a possible fundamental
trade-off between Differential Privacy and robustness.
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvements in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z)
- No Vandalism: Privacy-Preserving and Byzantine-Robust Federated Learning [18.1129191782913]
Federated learning allows several clients to train one machine learning model jointly without sharing private data, providing privacy protection.
Traditional federated learning is vulnerable to poisoning attacks, which can not only degrade model performance but also implant malicious backdoors.
In this paper, we aim to build a privacy-preserving and Byzantine-robust federated learning scheme to provide an environment with no vandalism (NoV) against attacks from malicious participants.
arXiv Detail & Related papers (2024-06-03T07:59:10Z)
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- PPBFL: A Privacy Protected Blockchain-based Federated Learning Model [6.278098707317501]
We propose a Privacy Protected Blockchain-based Federated Learning Model (PPBFL) to enhance the security of federated learning.
We introduce a Proof of Training Work (PoTW) algorithm tailored for federated learning, aiming to incentivize training nodes.
We also propose a new mix transactions mechanism utilizing ring signature technology to better protect the identity privacy of local training clients.
arXiv Detail & Related papers (2024-01-02T13:13:28Z)
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information such as membership or properties of participant data, or even to reconstruct that data outright.
We show that simple linear models can effectively capture client-specific properties from the aggregated model updates alone.
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
- Certified Robustness in Federated Learning [54.03574895808258]
We study the interplay between federated training, personalization, and certified robustness.
We find that the simple federated averaging technique is effective in building models that are not only more accurate but also more certifiably robust.
arXiv Detail & Related papers (2022-06-06T12:10:53Z)
- Secure and Privacy-Preserving Federated Learning via Co-Utility [7.428782604099875]
We build a federated learning framework that offers privacy to the participating peers and security against Byzantine and poisoning attacks.
Unlike privacy protection via update aggregation, our approach preserves the values of model updates and hence the accuracy of plain federated learning.
arXiv Detail & Related papers (2021-08-04T08:58:24Z)
- Constrained Differentially Private Federated Learning for Low-bandwidth Devices [1.1470070927586016]
This paper presents a novel privacy-preserving federated learning scheme.
It provides theoretical privacy guarantees, as it is based on Differential Privacy.
It reduces the upstream and downstream bandwidth by up to 99.9% compared to standard federated learning.
arXiv Detail & Related papers (2021-02-27T22:25:06Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy might, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- Compression Boosts Differentially Private Federated Learning [0.7742297876120562]
Federated learning allows distributed entities to train a common model collaboratively without sharing their own data.
It remains vulnerable to various inference and reconstruction attacks where a malicious entity can learn private information about the participants' training data from the captured gradients.
We show experimentally, using two datasets, that our privacy-preserving proposal can reduce the communication costs by up to 95% with only a negligible performance penalty compared to traditional non-private federated learning schemes.
arXiv Detail & Related papers (2020-11-10T13:11:03Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.