Securing Federated Learning against Overwhelming Collusive Attackers
- URL: http://arxiv.org/abs/2209.14093v1
- Date: Wed, 28 Sep 2022 13:41:04 GMT
- Title: Securing Federated Learning against Overwhelming Collusive Attackers
- Authors: Priyesh Ranjan, Ashish Gupta, Federico Corò, and Sajal K. Das
- Abstract summary: We propose two graph-theoretic algorithms, based on the Minimum Spanning Tree and the k-Densest graph, that leverage correlations between local models.
Our FL model can nullify the influence of attackers even when they constitute up to 70% of all clients.
We establish the superiority of our algorithms over existing ones in terms of accuracy, attack success rate, and early detection round.
- Score: 7.587927338603662
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In a data-driven society where ubiquitous Internet of Things (IoT)
devices store large amounts of data at scattered locations, distributed
learning has gained considerable traction; it typically assumes, however,
independent and identically distributed (iid) data across the devices, an
assumption that rarely holds in practice given the heterogeneous nature of
devices. Relaxing this assumption, federated learning (FL) has emerged as a
privacy-preserving solution for training a collaborative model over non-iid
data distributed across a massive number of devices. However, because
participation is unrestricted, the appearance of malicious devices
(attackers) intent on corrupting the FL model is inevitable. In this work, we
aim to identify such attackers and mitigate their impact on the model,
specifically under a setting of bidirectional label flipping attacks with
collusion. We propose two graph-theoretic algorithms, based on the Minimum
Spanning Tree and the k-Densest graph, that leverage correlations between
local models. Our FL model can nullify the influence of attackers even when
they constitute up to 70% of all clients, whereas prior works cannot tolerate
more than 50% of clients being attackers. The effectiveness of our algorithms
is demonstrated through experiments on two benchmark datasets, MNIST and
Fashion-MNIST, under overwhelming attackers. We establish the superiority of
our algorithms over existing ones in terms of accuracy, attack success rate,
and early detection round.
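The graph-theoretic idea admits a compact illustration. Below is a minimal, hypothetical sketch (not the authors' implementation): colluding label-flippers submit unusually similar local updates, so one can build a complete graph over pairwise model correlations, compute a Minimum Spanning Tree, and cut it to isolate the colluding group even when it is the majority. The `flip_labels` helper, the single heaviest-edge MST cut, and the "keep the less mutually correlated group" selection rule are all illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of correlation-graph filtering of colluding FL clients.
# Assumptions (not from the paper): flattened update vectors, one MST-edge
# cut to form two groups, and "keep the less mutually correlated group"
# as the benign-selection heuristic.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def flip_labels(y: np.ndarray, a: int, b: int) -> np.ndarray:
    """Bidirectional label flip: swap classes a and b (the simulated attack)."""
    y = y.copy()
    mask_a, mask_b = y == a, y == b
    y[mask_a], y[mask_b] = b, a
    return y

def select_benign(updates: np.ndarray) -> np.ndarray:
    """updates: (n_clients, n_params) flattened local updates.
    Returns indices of the clients kept for aggregation."""
    corr = np.corrcoef(updates)            # pairwise correlation between models
    dist = 1.0 - corr + 1e-9               # colluders -> near-zero distance
    np.fill_diagonal(dist, 0.0)            # no self-loops
    mst = minimum_spanning_tree(dist).toarray()
    i, j = np.unravel_index(np.argmax(mst), mst.shape)
    mst[i, j] = 0.0                        # cut heaviest MST edge -> two groups
    _, labels = connected_components(mst, directed=False)
    groups = [np.flatnonzero(labels == g) for g in range(labels.max() + 1)]

    def mutual_corr(idx: np.ndarray) -> float:
        if len(idx) < 2:
            return 1.0                     # singleton: treat as maximally tight
        sub = corr[np.ix_(idx, idx)]
        return (sub.sum() - len(idx)) / (len(idx) * (len(idx) - 1))

    # Colluding flippers look near-identical to one another, while benign
    # clients on non-iid data vary more: keep the less correlated group.
    return min(groups, key=mutual_corr)
```

Note that a simple majority vote fails once attackers exceed 50% of clients; what the correlation structure exploits is that colluding flipped-label models cluster far more tightly than benign models trained on heterogeneous non-iid data. The paper's k-Densest-graph variant presumably targets that tight cluster directly; the MST cut above is only one simple way to separate the two groups.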
Related papers
- Fed-Credit: Robust Federated Learning with Credibility Management [18.349127735378048]
Federated Learning (FL) is an emerging machine learning approach enabling model training on decentralized devices or data sources.
We propose a robust FL approach based on a credibility management scheme, called Fed-Credit.
The results exhibit superior accuracy and resilience against adversarial attacks, all while maintaining comparatively low computational complexity.
arXiv Detail & Related papers (2024-05-20T03:35:13Z)
- Federated Learning Under Attack: Exposing Vulnerabilities through Data Poisoning Attacks in Computer Networks [17.857547954232754]
Federated Learning (FL) is a machine learning approach that enables multiple decentralized devices or edge servers to collaboratively train a shared model without exchanging raw data.
During the training and sharing of model updates between clients and servers, data and models are susceptible to different data-poisoning attacks.
We considered two types of data-poisoning attacks, label flipping (LF) and feature poisoning (FP), and applied them with a novel approach.
arXiv Detail & Related papers (2024-03-05T14:03:15Z)
- Data-Agnostic Model Poisoning against Federated Learning: A Graph Autoencoder Approach [65.2993866461477]
This paper proposes a data-agnostic model poisoning attack on Federated Learning (FL).
The attack requires no knowledge of FL training data and achieves both effectiveness and undetectability.
Experiments show that the FL accuracy drops gradually under the proposed attack and existing defense mechanisms fail to detect it.
arXiv Detail & Related papers (2023-11-30T12:19:10Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism focused on the client side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- FL-Defender: Combating Targeted Attacks in Federated Learning [7.152674461313707]
Federated learning (FL) enables learning a global machine learning model from local data distributed among a set of participating workers.
FL is vulnerable to targeted poisoning attacks that negatively impact the integrity of the learned model.
We propose FL-Defender as a method to combat targeted attacks on FL.
arXiv Detail & Related papers (2022-07-02T16:04:46Z)
- Robust Federated Learning for execution time-based device model identification under label-flipping attack [0.0]
Device spoofing and impersonation cyberattacks stand out due to their impact and the usually low complexity required to launch them.
Several solutions have emerged to identify device models and types based on the combination of behavioral fingerprinting and Machine/Deep Learning (ML/DL) techniques.
New approaches such as Federated Learning (FL) have not been fully explored yet, especially when malicious clients are present in the scenario setup.
arXiv Detail & Related papers (2021-11-29T10:27:14Z)
- Mitigating the Impact of Adversarial Attacks in Very Deep Networks [10.555822166916705]
Deep Neural Network (DNN) models have security-related vulnerabilities.
Data-poisoning-enabled perturbation attacks are complex adversarial attacks that inject false data into models.
We propose an attack-agnostic defense method to mitigate their influence.
arXiv Detail & Related papers (2020-12-08T21:25:44Z)
- Knowledge-Enriched Distributional Model Inversion Attacks [49.43828150561947]
Model inversion (MI) attacks are aimed at reconstructing training data from model parameters.
We present a novel inversion-specific GAN that can better distill knowledge useful for performing attacks on private models from public data.
Our experiments show that the combination of these techniques can significantly boost the success rate of the state-of-the-art MI attacks by 150%.
arXiv Detail & Related papers (2020-10-08T16:20:48Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
- Byzantine-Robust Learning on Heterogeneous Datasets via Bucketing [55.012801269326594]
In Byzantine robust distributed learning, a central server wants to train a machine learning model over data distributed across multiple workers.
A fraction of these workers may deviate from the prescribed algorithm and send arbitrary messages.
We propose a simple bucketing scheme that adapts existing robust algorithms to heterogeneous datasets at a negligible computational cost, as sketched below.
arXiv Detail & Related papers (2020-06-16T17:58:53Z)
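The bucketing scheme in the preceding entry is simple enough to sketch. The following is an illustrative reading, not the paper's exact procedure; the function name and the choice of coordinate-wise median as the downstream robust aggregator are assumptions. Worker updates are randomly partitioned into buckets of size s and averaged within each bucket, so the robust aggregator sees bucket means whose heterogeneity has been reduced by the averaging.

```python
# Hedged sketch of bucketing for Byzantine-robust aggregation: average random
# buckets of s worker updates, then apply an existing robust rule (here a
# coordinate-wise median) to the bucket means rather than to raw updates.
import numpy as np

def bucketed_aggregate(updates: np.ndarray, s: int = 2, seed: int = 0) -> np.ndarray:
    """updates: (n_workers, n_params). Returns a single aggregated update."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(updates))                 # random bucket assignment
    buckets = [updates[perm[i:i + s]] for i in range(0, len(updates), s)]
    means = np.stack([b.mean(axis=0) for b in buckets])  # within-bucket averaging
    return np.median(means, axis=0)                      # robust rule on bucket means
```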
This list is automatically generated from the titles and abstracts of the papers in this site.