BlockFLA: Accountable Federated Learning via Hybrid Blockchain Architecture
- URL: http://arxiv.org/abs/2010.07427v1
- Date: Wed, 14 Oct 2020 22:43:39 GMT
- Title: BlockFLA: Accountable Federated Learning via Hybrid Blockchain Architecture
- Authors: Harsh Bimal Desai, Mustafa Safa Ozdayi, Murat Kantarcioglu
- Abstract summary: Federated Learning (FL) is a distributed, decentralized machine learning protocol.
It has been shown that an attacker can inject backdoors into the trained model during FL.
We develop a hybrid blockchain-based FL framework that uses smart contracts to automatically detect and punish the attackers.
- Score: 11.908715869667445
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is a distributed, decentralized machine learning protocol. By executing FL, a set of agents can jointly train a model without sharing their datasets with each other or with a third party. This makes FL particularly suitable for settings where data privacy is desired.
At the same time, concealing the training data gives attackers an opportunity to inject backdoors into the trained model. It has been shown that an attacker can inject backdoors during FL and later leverage them to make the model misclassify. Several works have tried to alleviate this threat by designing robust aggregation functions. However, since more sophisticated attacks that bypass the existing defenses are developed over time, we approach this problem from a complementary angle in this work. In particular, we aim to discourage backdoor attacks by detecting and punishing the attackers, possibly after the end of the training phase.
To this end, we develop a hybrid blockchain-based FL framework that uses smart contracts to automatically detect and punish the attackers via monetary penalties. Our framework is general in the sense that any aggregation function and any attacker-detection algorithm can be plugged into it. We conduct experiments to demonstrate that our framework preserves the communication-efficient nature of FL, and we provide empirical results illustrating that it can successfully penalize attackers by leveraging our novel attacker-detection algorithm.
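To make the protocol described in the abstract concrete, below is a minimal, self-contained sketch of FedAvg-style training, the canonical FL scheme: each agent trains locally on private data and only model updates are shared with the server. The toy linear-regression task and all names are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """One agent's local training: plain gradient descent on a linear model."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg(updates, sizes):
    """Server-side aggregation: average weighted by local dataset size."""
    total = sum(sizes)
    return sum(n / total * w for w, n in zip(updates, sizes))

# Toy setup: 5 agents, each holding a private dataset for the same task.
d, true_w = 3, np.array([1.0, -2.0, 0.5])
datasets = []
for _ in range(5):
    X = rng.normal(size=(40, d))
    datasets.append((X, X @ true_w + 0.01 * rng.normal(size=40)))

w_global = np.zeros(d)
for _ in range(20):  # FL rounds; only weights leave the agents, never data
    local_models = [local_sgd(w_global.copy(), X, y) for X, y in datasets]
    w_global = fedavg(local_models, [len(y) for _, y in datasets])

print(np.round(w_global, 2))  # recovers roughly [ 1. -2.  0.5]
```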
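The "robust aggregation functions" the abstract refers to (and defenses such as Norm Clipping, named in the related papers below) typically bound how much any single update can move the global model. Here is a hedged sketch of one such baseline, norm clipping with a simple L2 bound; the threshold and function name are our own choices for illustration.

```python
import numpy as np

def clipped_mean(updates, clip_norm=1.0):
    """Rescale any update whose L2 norm exceeds clip_norm before averaging,
    bounding the influence a single (possibly malicious) agent can exert."""
    clipped = []
    for u in updates:
        n = np.linalg.norm(u)
        clipped.append(u * (clip_norm / n) if n > clip_norm else u)
    return np.mean(clipped, axis=0)
```

Such defenses are exactly what the abstract argues is insufficient on its own: an attacker who keeps each poisoned update within the clip bound passes through unnoticed, which motivates after-the-fact accountability.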
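The accountability layer can be pictured as follows: agents escrow a deposit before training, commitments to their updates are logged on-chain, and a pluggable detector decides, possibly after training ends, whose stake is slashed. The dict-based "ledger" and all names below are illustrative stand-ins for the paper's smart contracts, not its actual interface.

```python
class PenaltyContract:
    """Toy stand-in for an on-chain contract holding agents' deposits."""

    def __init__(self, agents, deposit=100):
        self.stake = {a: deposit for a in agents}   # escrowed funds
        self.update_log = {a: [] for a in agents}   # commitments to updates

    def record_update(self, agent, update_hash):
        # On-chain step: store a hash of each round's update for later audit.
        self.update_log[agent].append(update_hash)

    def settle(self, detector):
        # The detector is pluggable, mirroring the paper's generality claim:
        # any function mapping the audit log to a set of flagged agents works.
        for a in detector(self.update_log):
            self.stake[a] = 0                       # slash the attacker's deposit
        return dict(self.stake)                     # remaining refunds

# Usage with a hypothetical detector that flags one agent by name.
contract = PenaltyContract(["alice", "bob", "mallory"])
contract.record_update("mallory", "0xdeadbeef")
print(contract.settle(lambda log: {"mallory"}))  # {'alice': 100, 'bob': 100, 'mallory': 0}
```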
Related papers
- Securing Federated Learning Against Novel and Classic Backdoor Threats During Foundation Model Integration [8.191214701984162]
Federated learning (FL) enables decentralized model training while preserving privacy.
Recently, integrating Foundation Models (FMs) into FL has boosted performance but also introduced a novel backdoor attack mechanism.
We propose a novel data-free defense strategy by constraining abnormal activations in the hidden feature space during model aggregation on the server.
arXiv Detail & Related papers (2024-10-23T05:54:41Z) - Non-Cooperative Backdoor Attacks in Federated Learning: A New Threat Landscape [7.00762739959285]
Federated Learning (FL) for privacy-preserving model training remains susceptible to backdoor attacks.
This research emphasizes the critical need for robust defenses against diverse backdoor attacks in the evolving FL landscape.
arXiv Detail & Related papers (2024-07-05T22:03:13Z) - Venomancer: Towards Imperceptible and Target-on-Demand Backdoor Attacks in Federated Learning [16.04315589280155]
We propose Venomancer, an effective backdoor attack that is imperceptible and allows the attacker to choose target classes on demand.
The method is robust against state-of-the-art defenses such as Norm Clipping, Weak DP, Krum, Multi-Krum, RLR, FedRAD, Deepsight, and RFLBAT.
arXiv Detail & Related papers (2024-07-03T14:22:51Z) - EmInspector: Combating Backdoor Attacks in Federated Self-Supervised Learning Through Embedding Inspection [53.25863925815954]
Federated self-supervised learning (FSSL) has emerged as a promising paradigm that enables the exploitation of clients' vast amounts of unlabeled data.
While FSSL offers advantages, its susceptibility to backdoor attacks has not been investigated.
We propose the Embedding Inspector (EmInspector) that detects malicious clients by inspecting the embedding space of local models.
arXiv Detail & Related papers (2024-05-21T06:14:49Z) - FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model. (A sketch of this frequency-domain idea appears after this list.)
arXiv Detail & Related papers (2023-12-07T16:56:24Z) - Data-Agnostic Model Poisoning against Federated Learning: A Graph
Autoencoder Approach [65.2993866461477]
This paper proposes a data-agnostic model poisoning attack on Federated Learning (FL).
The attack requires no knowledge of FL training data and achieves both effectiveness and undetectability.
Experiments show that the FL accuracy drops gradually under the proposed attack and existing defense mechanisms fail to detect it.
arXiv Detail & Related papers (2023-11-30T12:19:10Z) - Backdoor Attacks in Peer-to-Peer Federated Learning [11.235386862864397]
Peer-to-Peer Federated Learning (P2PFL) offers advantages in terms of both privacy and reliability.
We propose new backdoor attacks for P2PFL that leverage structural graph properties to select the malicious nodes and achieve high attack success.
arXiv Detail & Related papers (2023-01-23T21:49:28Z) - FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse-engineering based defense and show that our method achieves robustness improvements with provable guarantees.
Our results against eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z) - Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis [49.38856542573576]
Edge devices in federated learning usually have much more limited computation and communication resources compared to servers in a data center.
In this work, we empirically demonstrate that Lottery Ticket models are just as vulnerable to backdoor attacks as the original dense models.
arXiv Detail & Related papers (2021-09-22T04:19:59Z) - Meta Federated Learning [57.52103907134841]
Federated Learning (FL) is vulnerable to training-time adversarial attacks.
We propose Meta Federated Learning (Meta-FL), which is not only compatible with secure aggregation protocols but also facilitates defense against backdoor attacks.
arXiv Detail & Related papers (2021-02-10T16:48:32Z) - Defending against Backdoors in Federated Learning with Robust Learning Rate [25.74681620689152]
Federated learning (FL) allows a set of agents to collaboratively train a model without sharing their potentially sensitive data.
In a backdoor attack, an adversary tries to embed a backdoor functionality into the model during training that can later be activated to cause a desired misclassification.
We propose a lightweight defense that requires minimal change to the FL protocol. (A sketch of one such robust learning-rate rule appears after this list.)
arXiv Detail & Related papers (2020-07-07T23:38:35Z)
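As referenced in the FreqFed entry above, a frequency-domain aggregation can be sketched as follows: flatten each model update, project it onto low-frequency DCT components, and keep only the updates nearest the componentwise median. The keep count, the median filter, and the majority rule below are our assumptions for illustration; the paper's actual pipeline may differ.

```python
import numpy as np
from scipy.fft import dct

def freq_filtered_mean(updates, keep=64):
    """Sketch in the spirit of FreqFed: compare updates by their
    low-frequency DCT spectra and average only the central majority."""
    spectra = np.stack([dct(u, norm="ortho")[:keep] for u in updates])
    center = np.median(spectra, axis=0)
    dists = np.linalg.norm(spectra - center, axis=1)
    kept = np.argsort(dists)[: len(updates) // 2 + 1]  # closest majority
    return np.mean([updates[i] for i in kept], axis=0)
```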
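The last entry, Robust Learning Rate, adjusts the server's learning rate per model dimension based on how many agents agree on the direction of the update; where agreement falls below a threshold, the sign of the learning rate is flipped. A minimal sketch of that rule as we read the abstract (the threshold name and array shapes are ours):

```python
import numpy as np

def rlr_aggregate(updates, theta=4, server_lr=1.0):
    """Per-dimension robust learning rate: flip the learning rate wherever
    fewer than theta agents agree on the sign of the update."""
    U = np.stack(updates)                          # shape: (n_agents, dim)
    agreement = np.abs(np.sign(U).sum(axis=0))     # strength of sign consensus
    lr = np.where(agreement >= theta, server_lr, -server_lr)
    return lr * U.mean(axis=0)                     # signed, averaged update
```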
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.