FLEDGE: Ledger-based Federated Learning Resilient to Inference and
Backdoor Attacks
- URL: http://arxiv.org/abs/2310.02113v1
- Date: Tue, 3 Oct 2023 14:55:30 GMT
- Authors: Jorge Castillo, Phillip Rieger, Hossein Fereidooni, Qian Chen,
Ahmad-Reza Sadeghi
- Abstract summary: Federated learning (FL) is a distributed learning process that allows multiple parties (or clients) to collaboratively train a machine learning model without having them share their private data.
Recent research has demonstrated the effectiveness of inference and poisoning attacks on FL.
We present a ledger-based FL framework, FLEDGE, that makes parties accountable for their behavior and achieves reasonable efficiency in mitigating inference and poisoning attacks.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Federated learning (FL) is a distributed learning process that uses a trusted
aggregation server to allow multiple parties (or clients) to collaboratively
train a machine learning model without having them share their private data.
Recent research, however, has demonstrated the effectiveness of inference and
poisoning attacks on FL. Mitigating both attacks simultaneously is very
challenging. State-of-the-art solutions have proposed the use of poisoning
defenses with Secure Multi-Party Computation (SMPC) and/or Differential Privacy
(DP). However, these techniques are not efficient and fail to address the
malicious intent behind the attacks, i.e., adversaries (curious servers and/or
compromised clients) seek to exploit a system for monetization purposes. To
overcome these limitations, we present FLEDGE, a ledger-based FL framework
that makes parties accountable for their behavior and achieves reasonable
efficiency in mitigating inference and poisoning attacks. Our solution
leverages cryptocurrency to increase party accountability by penalizing
malicious behavior and rewarding benign conduct. We conduct an
extensive evaluation on four public datasets: Reddit, MNIST, Fashion-MNIST, and
CIFAR-10. Our experimental results demonstrate that (1) FLEDGE provides strong
privacy guarantees for model updates without sacrificing model utility; (2)
FLEDGE can successfully mitigate different poisoning attacks without degrading
the performance of the global model; and (3) FLEDGE offers unique reward
mechanisms to promote benign behavior during model training and/or model
aggregation.
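The reward-and-penalty mechanism described in the abstract might be sketched roughly as follows. Everything here is an illustrative assumption: the `Ledger` class, the cosine-similarity acceptance test, and the `REWARD`/`PENALTY` amounts are hypothetical stand-ins, not FLEDGE's actual ledger or contract logic.

```python
REWARD = 1.0   # tokens credited for an accepted update (illustrative amount)
PENALTY = 2.0  # tokens debited for a rejected, suspected-poisoned update

def cosine(u, v):
    """Cosine similarity between two flattened update vectors (plain lists)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

class Ledger:
    """Toy ledger: each client has a token balance adjusted every round."""

    def __init__(self, clients):
        self.balance = {c: 0.0 for c in clients}

    def settle_round(self, updates, threshold=0.0):
        """Accept updates that align with the mean aggregate; reward accepted
        clients and penalize rejected ones on the ledger."""
        dims = len(next(iter(updates.values())))
        agg = [sum(u[i] for u in updates.values()) / len(updates)
               for i in range(dims)]
        accepted = {}
        for client, u in updates.items():
            if cosine(u, agg) >= threshold:
                self.balance[client] += REWARD
                accepted[client] = u
            else:
                self.balance[client] -= PENALTY
        return accepted
```

With three aligned clients and one client submitting an inverted update, the inverted update fails the similarity test and its owner's balance is slashed, while the others are rewarded.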
Related papers
- Celtibero: Robust Layered Aggregation for Federated Learning
We introduce Celtibero, a novel defense mechanism that integrates layered aggregation to enhance robustness against adversarial manipulation.
We demonstrate that Celtibero consistently achieves high main task accuracy (MTA) while maintaining minimal attack success rates (ASR) across a range of untargeted and targeted poisoning attacks.
arXiv Detail & Related papers (2024-08-26T12:54:00Z) - FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning
Attacks in Federated Learning
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
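One way to picture the idea of inspecting model updates in the frequency domain, as this entry describes, is the sketch below. The naive DCT-II projection and the crude distance-to-centroid filter are illustrative assumptions, not FreqFed's actual aggregation rule.

```python
import math

def dct2(x):
    """Naive DCT-II: project a flattened model update onto cosine bases."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n))
            for k in range(n)]

def low_freq_signature(update, keep=4):
    """Keep only the lowest-frequency coefficients as the update's fingerprint."""
    return dct2(update)[:keep]

def filter_updates(updates, keep=4):
    """Drop updates whose low-frequency signature is farthest from the rest
    (a crude stand-in for the clustering step the abstract mentions)."""
    sigs = {c: low_freq_signature(u, keep) for c, u in updates.items()}
    centroid = [sum(s[i] for s in sigs.values()) / len(sigs)
                for i in range(keep)]
    dist = {c: sum((a - b) ** 2 for a, b in zip(s, centroid)) ** 0.5
            for c, s in sigs.items()}
    # keep updates no farther from the centroid than the median distance
    cutoff = sorted(dist.values())[len(dist) // 2]
    return {c: updates[c] for c in updates if dist[c] <= cutoff}
```

A smooth (benign-looking) update and a high-frequency, spiky one get very different low-frequency signatures, so the spiky one is filtered out.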
arXiv Detail & Related papers (2023-12-07T16:56:24Z) - Defending Against Poisoning Attacks in Federated Learning with
Blockchain
We propose a secure and reliable federated learning system based on blockchain and distributed ledger technology.
Our system incorporates a peer-to-peer voting mechanism and a reward-and-slash mechanism, which are powered by on-chain smart contracts, to detect and deter malicious behaviors.
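A peer-to-peer vote feeding a reward-and-slash rule, as this entry describes, could look roughly like this. The `STAKE`/`SLASH_FRACTION` constants and the strict-majority tally are hypothetical choices for illustration, not the paper's smart-contract design.

```python
STAKE = 10.0          # tokens each client deposits before training (illustrative)
SLASH_FRACTION = 0.5  # share of stake burned when peers flag an update

def tally_votes(stakes, votes):
    """votes maps submitter -> {voter: True if the voter flags the update
    as malicious}. Slash submitters flagged by a strict majority of voters;
    reward the rest for honest work."""
    for submitter, ballot in votes.items():
        flagged = sum(1 for v in ballot.values() if v)
        if flagged * 2 > len(ballot):             # strict majority: malicious
            stakes[submitter] *= (1 - SLASH_FRACTION)
        else:
            stakes[submitter] += 1.0              # small reward for honest work
    return stakes
```

A submitter flagged by two of three voters loses half their stake, while one flagged by only a minority earns the reward.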
arXiv Detail & Related papers (2023-07-02T11:23:33Z) - G$^2$uardFL: Safeguarding Federated Learning Against Backdoor Attacks
through Attributed Client Graph Clustering
Federated Learning (FL) offers collaborative model training without data sharing.
FL is vulnerable to backdoor attacks, where poisoned model weights lead to compromised system integrity.
We present G$^2$uardFL, a protective framework that reinterprets the identification of malicious clients as an attributed graph clustering problem.
arXiv Detail & Related papers (2023-06-08T07:15:04Z) - Avoid Adversarial Adaption in Federated Learning by Multi-Metric
Investigations
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z) - FLock: Defending Malicious Behaviors in Federated Learning with
Blockchain
Federated learning (FL) is a promising way to allow multiple data owners (clients) to collaboratively train machine learning models.
We propose to use distributed ledger technology (DLT) to achieve FLock, a secure and reliable decentralized FL system built on blockchain.
arXiv Detail & Related papers (2022-11-05T06:14:44Z) - FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated
Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse-engineering based defense and show that our method achieves improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z) - RelaxLoss: Defending Membership Inference Attacks without Losing Utility
We propose a novel training framework based on a relaxed loss with a more achievable learning target.
RelaxLoss is applicable to any classification model with added benefits of easy implementation and negligible overhead.
Our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs.
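One reading of "a relaxed loss with a more achievable learning target" is to stop rewarding loss reduction below a target level alpha, discouraging the over-confident fits that membership inference attacks exploit. The sketch below simply flattens cross-entropy around alpha; it is an illustrative simplification, not RelaxLoss's exact formulation.

```python
import math

def cross_entropy(probs, label):
    """Standard cross-entropy for one example given predicted probabilities."""
    return -math.log(probs[label])

def relaxed_loss(probs, label, alpha=0.5):
    """Flatten the objective around a target loss level alpha: below alpha the
    gradient reverses, so training no longer pushes toward near-zero loss on
    members. The target alpha is a tunable assumption."""
    return abs(cross_entropy(probs, label) - alpha)
```

Once an example's cross-entropy reaches alpha, the relaxed loss is zero; driving the prediction even more confident would increase the objective again.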
arXiv Detail & Related papers (2022-07-12T19:34:47Z) - Untargeted Poisoning Attack Detection in Federated Learning via Behavior
Attestation
Federated Learning (FL) is a Machine Learning (ML) paradigm that addresses issues of data privacy, security, access rights, and access to heterogeneous information.
Despite its advantages, there is an increased potential for cyberattacks on FL-based ML techniques that can undermine the benefits.
We propose attestedFL, a defense mechanism that monitors the training of individual nodes through state persistence in order to detect a malicious worker.
arXiv Detail & Related papers (2021-01-24T20:52:55Z) - Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with
Lazy Clients
We propose a novel framework that integrates blockchain into Federated Learning (FL).
BLADE-FL performs well in terms of privacy preservation, tamper resistance, and effective cooperation of learning.
However, it gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noise to conceal their cheating.
arXiv Detail & Related papers (2020-12-02T12:18:27Z) - BlockFLow: An Accountable and Privacy-Preserving Solution for Federated
Learning
BlockFLow is an accountable federated learning system that is fully decentralized and privacy-preserving.
Its primary goal is to reward agents proportional to the quality of their contribution while protecting the privacy of the underlying datasets and being resilient to malicious adversaries.
arXiv Detail & Related papers (2020-07-08T02:24:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.