Free-rider Attacks on Model Aggregation in Federated Learning
- URL: http://arxiv.org/abs/2006.11901v5
- Date: Mon, 22 Feb 2021 14:33:08 GMT
- Title: Free-rider Attacks on Model Aggregation in Federated Learning
- Authors: Yann Fraboni, Richard Vidal, Marco Lorenzi
- Abstract summary: We introduce here the first theoretical and experimental analysis of free-rider attacks on federated learning schemes based on iterative parameter aggregation.
We provide formal guarantees for these attacks to converge to the aggregated models of the fair participants.
We conclude by providing recommendations to avoid free-rider attacks in real-world applications of federated learning.
- Score: 10.312968200748116
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Free-rider attacks against federated learning consist in dissimulating
participation in the federated learning process with the goal of obtaining the
final aggregated model without actually contributing any data. This kind of
attack is critical in sensitive applications of federated learning, where data
is scarce and the model has high commercial value. We introduce here the first
theoretical and experimental analysis of free-rider attacks on federated
learning schemes based on iterative parameter aggregation, such as FedAvg or
FedProx, and provide formal guarantees for these attacks to converge to the
aggregated models of the fair participants. We first show that a
straightforward implementation of this attack can be achieved simply by not
updating the local parameters during the iterative federated optimization. As
this attack can be detected by adopting simple countermeasures at the server
level, we subsequently study more complex disguising schemes based on
stochastic updates of the free-rider parameters. We demonstrate the proposed
strategies in a number of experimental scenarios, in both iid and non-iid
settings. We conclude by providing recommendations to avoid free-rider attacks
in real-world applications of federated learning, especially in sensitive
domains where the security of data and models is critical.
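To make the two attack variants concrete, here is a minimal sketch of a few FedAvg-style rounds with one fair client and the two free-rider behaviours described in the abstract. It is illustrative only: the decaying Gaussian noise stands in for the paper's stochastic disguising scheme, and the function names, the placeholder local gradient, and the noise schedule (sigma0, decay) are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_sgd_update(global_params, lr=0.01, steps=5):
    """Fair client: run a few local gradient steps (placeholder gradient)."""
    params = global_params.copy()
    for _ in range(steps):
        grad = rng.normal(size=params.shape) * 0.1  # stand-in for a real data gradient
        params -= lr * grad
    return params

def plain_free_rider(global_params):
    """Plain attack: report the received global parameters unchanged."""
    return global_params.copy()

def disguised_free_rider(global_params, round_t, sigma0=0.1, decay=1.0):
    """Disguised attack: add zero-mean Gaussian noise whose variance decays
    with the round number, mimicking the shrinking updates of converging SGD."""
    sigma = sigma0 * (round_t + 1) ** (-decay)
    return global_params + rng.normal(0.0, sigma, size=global_params.shape)

def fedavg(client_params, client_sizes):
    """Server: aggregate reported parameters, weighted by claimed dataset size."""
    w = np.asarray(client_sizes, dtype=float)
    w /= w.sum()
    return sum(wi * p for wi, p in zip(w, client_params))

# A few federated rounds with a fair client and a disguised free-rider.
global_params = rng.normal(size=20)
for t in range(3):
    reports = [local_sgd_update(global_params),
               disguised_free_rider(global_params, t)]
    global_params = fedavg(reports, client_sizes=[100, 100])
```

The plain attack is detectable because the free-rider's reported update is exactly zero; the disguised variant returns parameters that look like a noisy SGD trajectory while contributing nothing to training, which is why the paper studies it separately.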
Related papers
- Attribute Inference Attacks for Federated Regression Tasks [14.152503562997662]
Federated Learning (FL) enables clients to collaboratively train a global machine learning model while keeping their data localized.
Recent studies have revealed that the training phase of FL is vulnerable to reconstruction attacks.
We propose novel model-based AIAs specifically designed for regression tasks in FL environments.
arXiv Detail & Related papers (2024-11-19T18:06:06Z)
- FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses [50.921333548391345]
Federated Learning is a privacy-preserving decentralized machine learning paradigm.
Recent research has revealed that private ground truth data can be recovered through a gradient technique known as Deep Leakage.
This paper introduces the FEDLAD Framework (Federated Evaluation of Deep Leakage Attacks and Defenses), a comprehensive benchmark for evaluating Deep Leakage attacks and defenses; a minimal gradient-inversion sketch appears after this list.
arXiv Detail & Related papers (2024-11-05T11:42:26Z)
- Model Hijacking Attack in Federated Learning [19.304332176437363]
HijackFL is the first-of-its-kind hijacking attack against the global model in federated learning.
It aims to force the global model to perform a different task from its original task without the server or benign client noticing.
We conduct extensive experiments on four benchmark datasets and three popular models.
arXiv Detail & Related papers (2024-08-04T20:02:07Z)
- FullCert: Deterministic End-to-End Certification for Training and Inference of Neural Networks [62.897993591443594]
FullCert is the first end-to-end certifier with sound, deterministic bounds.
We experimentally demonstrate FullCert's feasibility on two datasets.
arXiv Detail & Related papers (2024-06-17T13:23:52Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Mitigating Data Injection Attacks on Federated Learning [20.24380409762923]
Federated learning is a technique that allows multiple entities to collaboratively train models using their data.
Despite its advantages, federated learning can be susceptible to false data injection attacks.
We propose a novel technique to detect and mitigate data injection attacks on federated learning systems.
arXiv Detail & Related papers (2023-12-04T18:26:31Z)
- Towards Attack-tolerant Federated Learning via Critical Parameter Analysis [85.41873993551332]
Federated learning systems are susceptible to poisoning attacks when malicious clients send false updates to the central server.
This paper proposes a new defense strategy, FedCPA (Federated learning with Critical Parameter Analysis).
Its attack-tolerant aggregation method is based on the observation that benign local models have similar sets of top-k and bottom-k critical parameters, whereas poisoned local models do not; a simplified sketch of this overlap check appears after this list.
arXiv Detail & Related papers (2023-08-18T05:37:55Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- ARFED: Attack-Resistant Federated averaging based on outlier elimination [0.0]
In federated learning, each participant trains its local model on its own data, and a global model is formed at a trusted server.
Since, to preserve privacy, the server has no influence on or visibility into the participants' training procedures, the global model becomes vulnerable to attacks such as data poisoning and model poisoning.
We propose a defense algorithm called ARFED that makes no assumptions about the data distribution, the update similarity between participants, or the ratio of malicious participants.
arXiv Detail & Related papers (2021-11-08T15:00:44Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
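As referenced in the FEDLAD entry above, Deep Leakage recovers private training data by optimizing a dummy input so that the gradient it induces matches the gradient a client reported. Below is a toy sketch of that gradient-matching loop, assuming a small linear classifier in PyTorch; the model size, soft-label parameterization, and iteration count are illustrative assumptions rather than the benchmark's configuration.

```python
import torch

torch.manual_seed(0)
model = torch.nn.Linear(10, 2)
loss_fn = torch.nn.CrossEntropyLoss()

# The "private" sample whose gradient the attacker observed.
x_true = torch.randn(1, 10)
y_true = torch.tensor([1])
true_grads = torch.autograd.grad(loss_fn(model(x_true), y_true),
                                 model.parameters())

# Attacker: optimize a dummy input and a soft label to reproduce those gradients.
x_dummy = torch.randn(1, 10, requires_grad=True)
y_dummy = torch.randn(1, 2, requires_grad=True)  # logits of a soft label
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    loss = loss_fn(model(x_dummy), torch.softmax(y_dummy, dim=-1))
    grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    # Squared distance between the dummy gradients and the observed ones.
    match = sum(((g - tg) ** 2).sum() for g, tg in zip(grads, true_grads))
    match.backward()
    return match

for _ in range(30):
    opt.step(closure)

print("reconstruction error:", (x_dummy - x_true).norm().item())
```

L-BFGS with squared-distance matching follows the original Deep Leakage recipe; for larger models, first-order optimizers with cosine-similarity matching are a common alternative.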
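The FedCPA entry above rests on the observation that benign clients tend to agree on which parameters matter most. The following is a simplified stand-in for that check, not the paper's actual critical-parameter analysis: it scores each client by the mean Jaccard overlap of its top-k and bottom-k update coordinates with its peers, and the threshold, k, and synthetic data are illustrative assumptions.

```python
import numpy as np

def critical_sets(update, k):
    """Indices of the k largest and k smallest coordinates of an update."""
    order = np.argsort(update)
    return set(order[-k:]), set(order[:k])

def overlap_score(u, v, k=10):
    """Mean Jaccard overlap of the top-k and bottom-k critical sets."""
    top_u, bot_u = critical_sets(u, k)
    top_v, bot_v = critical_sets(v, k)
    jaccard = lambda a, b: len(a & b) / len(a | b)
    return 0.5 * (jaccard(top_u, top_v) + jaccard(bot_u, bot_v))

rng = np.random.default_rng(0)
common = rng.normal(size=100)                        # shared benign direction
updates = [common + 0.1 * rng.normal(size=100) for _ in range(3)]
updates.append(rng.normal(size=100))                 # unrelated, "poisoned" update

# Score each client by its mean overlap with all peers; low scores are suspect.
scores = [np.mean([overlap_score(u, v) for j, v in enumerate(updates) if i != j])
          for i, u in enumerate(updates)]
suspects = [i for i, s in enumerate(scores) if s < 0.3]
print("suspected poisoned clients:", suspects)
```

In the paper's setting the critical parameters are derived from the local models themselves; this sketch only illustrates the overlap statistic that separates benign updates from poisoned ones.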