ARFED: Attack-Resistant Federated averaging based on outlier elimination
- URL: http://arxiv.org/abs/2111.04550v2
- Date: Fri, 16 Jun 2023 10:49:38 GMT
- Title: ARFED: Attack-Resistant Federated averaging based on outlier elimination
- Authors: Ece Isik-Polat, Gorkem Polat, Altan Kocyigit
- Abstract summary: In federated learning, each participant trains its local model with its own data and a global model is formed at a trusted server.
Since, to ensure privacy, the server has neither control over nor visibility into the participants' training procedures, the global model becomes vulnerable to attacks such as data poisoning and model poisoning.
We propose a defense algorithm called ARFED that does not make any assumptions about data distribution, update similarity among participants, or the ratio of malicious participants.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In federated learning, each participant trains its local model with its own
data and a global model is formed at a trusted server by aggregating model
updates coming from these participants. Since, to ensure privacy, the server
has neither control over nor visibility into the participants' training procedures, the
global model becomes vulnerable to attacks such as data poisoning and model
poisoning. Although many defense algorithms have recently been proposed to
address these attacks, they often make strong assumptions that do not agree
with the nature of federated learning, such as assuming that participant data are IID.
Moreover, they mostly lack comprehensive experimental analyses. In this work,
we propose a defense algorithm called ARFED that does not make any assumptions
about data distribution, update similarity of participants, or the ratio of the
malicious participants. ARFED mainly considers the outlier status of
participant updates for each layer of the model architecture based on the
distance to the global model. Hence, only the participants that do not have
any outlier layer are included in the model aggregation. We have performed extensive
experiments on diverse scenarios and shown that the proposed approach provides
a robust defense against different attacks. To test the defense capability of
ARFED under different conditions, we considered label flipping, Byzantine, and
partial knowledge attacks for both IID and Non-IID settings in our experimental
evaluations. Moreover, we propose a new attack, called the organized partial
knowledge attack, where malicious participants use their training statistics
collaboratively to define a common poisoned model. We have shown that organized
partial knowledge attacks are more effective than independent attacks.
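
The abstract describes ARFED only at this level of detail: per-layer distances of participant updates to the current global model, outlier detection on those distances, and aggregation restricted to participants with no outlier layer. Below is a minimal sketch of that idea, assuming a Euclidean distance, an IQR-based outlier band, and a plain unweighted mean; these choices and all function names are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def arfed_style_aggregate(global_weights, local_models, k=1.5):
    """Hedged sketch of ARFED-style aggregation (assumed details, see above).

    For every layer, compute each participant's distance to the current
    global model, flag participants whose distance falls outside an
    IQR-based band, and average only the participants with no outlier layer.
    """
    num_layers = len(global_weights)
    num_participants = len(local_models)
    has_outlier_layer = np.zeros(num_participants, dtype=bool)

    for layer in range(num_layers):
        # Distance of each participant's layer to the corresponding global layer.
        dists = np.array([
            np.linalg.norm(model[layer] - global_weights[layer])
            for model in local_models
        ])
        q1, q3 = np.percentile(dists, [25, 75])
        iqr = q3 - q1
        lower, upper = q1 - k * iqr, q3 + k * iqr
        has_outlier_layer |= (dists < lower) | (dists > upper)

    trusted = [m for m, bad in zip(local_models, has_outlier_layer) if not bad]
    if not trusted:
        # Fallback choice for the sketch: keep the current global model.
        return global_weights
    # Plain (unweighted) layer-wise averaging of the trusted participants.
    return [np.mean([m[layer] for m in trusted], axis=0)
            for layer in range(num_layers)]
```

In a round-based simulation this function would be called once per communication round with the current global weights and the collected local models; a weighted mean or a different outlier criterion could be swapped in without changing the overall structure.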
Related papers
- EAB-FL: Exacerbating Algorithmic Bias through Model Poisoning Attacks in Federated Learning [3.699715556687871]
Federated Learning (FL) is a technique that allows multiple parties to train a shared model collaboratively without disclosing their private data.
FL models can suffer from biases against certain demographic groups due to the heterogeneity of data and party selection.
We propose a new type of model poisoning attack, EAB-FL, with a focus on exacerbating group unfairness while maintaining a good level of model utility.
arXiv Detail & Related papers (2024-10-02T21:22:48Z)
- Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey [28.88186038735176]
Federated Learning (FL) has been increasingly considered for applications to wireless communication networks (WCNs).
In general, non-independent and identically distributed (non-IID) data of WCNs raises concerns about robustness.
This survey provides a comprehensive review of the latest backdoor attacks and defense mechanisms.
arXiv Detail & Related papers (2023-12-14T05:52:29Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Towards Attack-tolerant Federated Learning via Critical Parameter Analysis [85.41873993551332]
Federated learning systems are susceptible to poisoning attacks when malicious clients send false updates to the central server.
This paper proposes a new defense strategy, FedCPA (Federated learning with Critical Parameter Analysis).
Our attack-tolerant aggregation method is based on the observation that benign local models have similar sets of top-k and bottom-k critical parameters, whereas poisoned local models do not (a toy sketch of this overlap check appears after this list).
arXiv Detail & Related papers (2023-08-18T05:37:55Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
The proposed defense, MESAS, is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- FLCert: Provably Secure Federated Learning against Poisoning Attacks [67.8846134295194]
We propose FLCert, an ensemble federated learning framework that is provably secure against poisoning attacks.
Our experiments show that the label predicted by our FLCert for a test input is provably unaffected by a bounded number of malicious clients.
arXiv Detail & Related papers (2022-10-02T17:50:04Z)
- Membership-Doctor: Comprehensive Assessment of Membership Inference Against Machine Learning Models [11.842337448801066]
We present a large-scale measurement of different membership inference attacks and defenses.
We find that some assumptions of the threat model, such as same-architecture and same-distribution between shadow and target models, are unnecessary.
We are also the first to execute attacks on the real-world data collected from the Internet, instead of laboratory datasets.
arXiv Detail & Related papers (2022-08-22T17:00:53Z)
- FL-Defender: Combating Targeted Attacks in Federated Learning [7.152674461313707]
Federated learning (FL) enables learning a global machine learning model from local data distributed among a set of participating workers.
FL is vulnerable to targeted poisoning attacks that negatively impact the integrity of the learned model.
We propose FL-Defender as a method to combat targeted attacks in FL.
arXiv Detail & Related papers (2022-07-02T16:04:46Z)
- ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on a modular, reusable software tool, ML-Doctor, which enables ML model owners to assess the risks of deploying their models.
arXiv Detail & Related papers (2021-02-04T11:35:13Z)
- Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries [74.59376038272661]
We introduce the sampling attack, a novel membership inference technique that, unlike other standard membership adversaries, works under the severe restriction of having no access to the victim model's scores.
We show that a victim model that only publishes the labels is still susceptible to sampling attacks and the adversary can recover up to 100% of its performance.
For defense, we choose differential privacy in the form of gradient perturbation during the training of the victim model as well as output perturbation at prediction time.
arXiv Detail & Related papers (2020-09-01T12:54:54Z)
- Free-rider Attacks on Model Aggregation in Federated Learning [10.312968200748116]
We introduce here the first theoretical and experimental analysis of free-rider attacks on federated learning schemes based on iterative parameters aggregation.
We provide formal guarantees for these attacks to converge to the aggregated models of the fair participants.
We conclude by providing recommendations to avoid free-rider attacks in real world applications of federated learning.
arXiv Detail & Related papers (2020-06-21T20:20:38Z)
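
The FedCPA entry above states only the underlying observation: benign local models share similar top-k and bottom-k critical parameters, while poisoned ones do not. The toy sketch below scores that overlap with a Jaccard measure over the largest- and smallest-magnitude entries of each flattened update; this notion of "critical parameters", the similarity measure, and all names are assumptions for illustration, not the FedCPA algorithm itself.

```python
import numpy as np

def topk_bottomk_indices(update, k):
    # Indices of the k largest- and k smallest-magnitude entries of a
    # flattened update (a simple stand-in for "critical parameters").
    flat = np.abs(np.concatenate([layer.ravel() for layer in update]))
    order = np.argsort(flat)
    return set(order[-k:]), set(order[:k])

def overlap_scores(updates, k=1000):
    # For each update, the mean Jaccard overlap of its top-k/bottom-k index
    # sets with every other update; poisoned updates should score lower.
    index_sets = [topk_bottomk_indices(u, k) for u in updates]
    scores = np.zeros(len(updates))
    for i, (top_i, bot_i) in enumerate(index_sets):
        overlaps = []
        for j, (top_j, bot_j) in enumerate(index_sets):
            if i == j:
                continue
            jac_top = len(top_i & top_j) / len(top_i | top_j)
            jac_bot = len(bot_i & bot_j) / len(bot_i | bot_j)
            overlaps.append(0.5 * (jac_top + jac_bot))
        scores[i] = np.mean(overlaps)
    return scores  # e.g., aggregate only updates above a chosen threshold
```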
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.