Fake or Compromised? Making Sense of Malicious Clients in Federated Learning
- URL: http://arxiv.org/abs/2403.06319v1
- Date: Sun, 10 Mar 2024 21:37:21 GMT
- Title: Fake or Compromised? Making Sense of Malicious Clients in Federated Learning
- Authors: Hamid Mozaffari, Sunav Choudhary, and Amir Houmansadr
- Abstract summary: We present a comprehensive analysis of the various poisoning attacks and defensive aggregation rules (AGRs) proposed in the literature.
To connect existing adversary models, we present a hybrid adversary model, which lies in the middle of the spectrum of adversaries.
We aim to provide practitioners and researchers with a clear understanding of the different types of threats they need to consider when designing FL systems.
- Score: 15.91062695812289
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is a distributed machine learning paradigm that
enables training models on decentralized data. The field of FL security against
poisoning attacks is plagued with confusion due to the proliferation of
research that makes different assumptions about the capabilities of adversaries
and the adversary models they operate under. Our work aims to clarify this
confusion by presenting a comprehensive analysis of the various poisoning
attacks and defensive aggregation rules (AGRs) proposed in the literature, and
connecting them under a common framework. To connect existing adversary models,
we present a hybrid adversary model, which lies in the middle of the spectrum
of adversaries, where the adversary compromises a few clients, trains a
generative (e.g., DDPM) model with their compromised samples, and generates new
synthetic data to solve an optimization for a stronger (e.g., cheaper, more
practical) attack against different robust aggregation rules. By presenting the
spectrum of FL adversaries, we aim to provide practitioners and researchers
with a clear understanding of the different types of threats they need to
consider when designing FL systems, and identify areas where further research
is needed.
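A minimal, self-contained sketch of the hybrid adversary described above may help fix ideas. It makes heavy simplifying assumptions: flattened update vectors, a per-coordinate Gaussian fitted to the compromised samples standing in for the DDPM, coordinate-wise trimmed mean as the robust AGR, and a crude grid search in place of the paper's optimization; all names (trimmed_mean, aggregate_shift, gamma) are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, N_BENIGN, N_FAKE = 16, 8, 4

def trimmed_mean(updates, trim):
    """Coordinate-wise trimmed mean: a simple robust aggregation rule (AGR)."""
    s = np.sort(updates, axis=0)
    return s[trim:updates.shape[0] - trim].mean(axis=0)

# Step 1: the adversary compromises a few clients and observes their local samples.
compromised_samples = rng.normal(1.0, 1.0, size=(200, DIM))

# Step 2: train a generative model on those samples (a DDPM in the paper; a
# per-coordinate Gaussian here) and draw synthetic data to estimate what a
# benign update looks like without seeing the other clients.
mu, sigma = compromised_samples.mean(0), compromised_samples.std(0) + 1e-8
synthetic = rng.normal(mu, sigma, size=(1000, DIM))
benign_estimate = synthetic.mean(0)   # stand-in for an estimated benign update

# Honest clients' real updates (unknown to the adversary; used only to score the attack).
benign_updates = 1.0 + rng.normal(0.0, 0.1, size=(N_BENIGN, DIM))
clean_agg = benign_updates.mean(axis=0)

# Step 3: a toy AGR-aware optimization -- push the fake updates away from the
# estimated benign direction and grid-search the scale gamma that moves the
# trimmed-mean aggregate the most.
def aggregate_shift(gamma):
    fake = np.tile((1.0 - gamma) * benign_estimate, (N_FAKE, 1))
    agg = trimmed_mean(np.vstack([fake, benign_updates]), trim=2)
    return np.linalg.norm(agg - clean_agg)

best_gamma = max(np.linspace(0.0, 5.0, 51), key=aggregate_shift)
print(f"gamma={best_gamma:.2f}  aggregate shift={aggregate_shift(best_gamma):.3f}")
```

The point is the workflow (compromised data -> generative model -> synthetic data -> AGR-aware fake updates), not the placeholders; a real attack would swap each placeholder for the corresponding component from the paper.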
Related papers
- Formal Logic-guided Robust Federated Learning against Poisoning Attacks [6.997975378492098]
Federated Learning (FL) offers a promising solution to the privacy concerns associated with centralized Machine Learning (ML).
FL is vulnerable to various security threats, including poisoning attacks, where adversarial clients manipulate the training data or model updates to degrade overall model performance.
We present a defense mechanism designed to mitigate poisoning attacks in federated learning for time-series tasks.
arXiv Detail & Related papers (2024-11-05T16:23:19Z)
- EAB-FL: Exacerbating Algorithmic Bias through Model Poisoning Attacks in Federated Learning [3.699715556687871]
Federated Learning (FL) is a technique that allows multiple parties to train a shared model collaboratively without disclosing their private data.
FL models can suffer from biases against certain demographic groups due to the heterogeneity of data and party selection.
We propose a new type of model poisoning attack, EAB-FL, with a focus on exacerbating group unfairness while maintaining a good level of model utility.
arXiv Detail & Related papers (2024-10-02T21:22:48Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model (a toy frequency-domain filtering sketch follows this list).
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Data-Agnostic Model Poisoning against Federated Learning: A Graph Autoencoder Approach [65.2993866461477]
This paper proposes a data-agnostic model poisoning attack on Federated Learning (FL).
The attack requires no knowledge of FL training data and achieves both effectiveness and undetectability.
Experiments show that the FL accuracy drops gradually under the proposed attack and existing defense mechanisms fail to detect it.
arXiv Detail & Related papers (2023-11-30T12:19:10Z)
- A Survey on Vulnerability of Federated Learning: A Learning Algorithm Perspective [8.941193384980147]
We focus on threat models targeting the learning process of FL systems.
Defense strategies have evolved from using a singular metric to excluding malicious clients.
Recent endeavors subtly alter the least significant weights in local models to bypass defense measures.
arXiv Detail & Related papers (2023-11-27T18:32:08Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- Decentralized Adversarial Training over Graphs [55.28669771020857]
The vulnerability of machine learning models to adversarial attacks has been attracting considerable attention in recent years.
This work studies adversarial training over graphs, where individual agents are subjected to perturbations of varying strength.
arXiv Detail & Related papers (2023-03-23T15:05:16Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- Delving into Data: Effectively Substitute Training for Black-box Attack [84.85798059317963]
We propose a novel perspective on substitute training that focuses on designing the distribution of data used in the knowledge-stealing process.
The combination of these two modules can further boost the consistency of the substitute model and target model, which greatly improves the effectiveness of adversarial attack.
arXiv Detail & Related papers (2021-04-26T07:26:29Z)
- Dynamic Defense Against Byzantine Poisoning Attacks in Federated Learning [11.117880929232575]
Federated learning is vulnerable to Byzantine poisoning attacks.
We propose a dynamic aggregation operator that discards adversarial clients during training.
The results show that the dynamic selection of the clients to aggregate enhances the performance of the global learning model.
arXiv Detail & Related papers (2020-07-29T18:02:11Z)
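The FreqFed entry above only states that updates are moved into the frequency domain, so here is a toy illustration of that general idea rather than FreqFed's published algorithm: compute a per-client frequency signature with np.fft.rfft, keep the low-frequency components, drop statistical outliers (a crude stand-in for FreqFed's clustering of signatures), and average the remaining updates. The function and parameter names (frequency_filtered_mean, n_low, z_thresh) are assumptions made for this sketch.

```python
import numpy as np

def frequency_filtered_mean(updates, n_low=8, z_thresh=2.0):
    """updates: (n_clients, dim) array of flattened model updates."""
    # 1) Transform each client's update into the frequency domain.
    spectra = np.fft.rfft(updates, axis=1)
    # 2) Keep only the low-frequency magnitudes, which capture the coarse
    #    structure of the update.
    low = np.abs(spectra[:, :n_low])
    # 3) Flag clients whose low-frequency signature is far from the median
    #    signature (a substitute for FreqFed's clustering step).
    med = np.median(low, axis=0)
    dist = np.linalg.norm(low - med, axis=1)
    keep = dist < dist.mean() + z_thresh * dist.std()
    # 4) Aggregate only the accepted clients' updates in the original domain.
    return updates[keep].mean(axis=0), keep

# Toy usage: 9 benign clients plus one scaled-up poisoned update.
rng = np.random.default_rng(1)
benign = rng.normal(0.0, 0.1, size=(9, 64))
poisoned = rng.normal(0.0, 0.1, size=(1, 64)) * 20.0
aggregate, accepted = frequency_filtered_mean(np.vstack([benign, poisoned]))
print("accepted clients:", accepted.astype(int))
```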
This list is automatically generated from the titles and abstracts of the papers in this site.