FLoW3 -- Web3 Empowered Federated Learning
- URL: http://arxiv.org/abs/2312.05459v1
- Date: Sat, 9 Dec 2023 04:05:07 GMT
- Title: FLoW3 -- Web3 Empowered Federated Learning
- Authors: Venkata Raghava Kurada, Pallava Kumar Baruah
- Abstract summary: Federated Learning is susceptible to various kinds of attacks like Data Poisoning, Model Poisoning and Man in the Middle attack.
validation is done through consensus by employing Novelty Detection and Snowball protocol.
System is realized by implementing in python and Foundry for smart contract development.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning is susceptible to various attacks such as data poisoning, model poisoning, and man-in-the-middle attacks. We perceive Federated Learning as a hierarchical structure: a federation of nodes with validators at the head. Validation is performed through consensus, employing Novelty Detection and the Snowball protocol to identify valuable and relevant updates while filtering out potentially malicious or irrelevant ones, thus preventing model poisoning attacks. The opinion of the validators is recorded on a blockchain and a trust score is calculated. In the absence of consensus, the trust score determines the impact of each validator on the global model. A hyperparameter is introduced to guide the model generation process, either to rely on consensus or on trust score. This approach ensures transparency and reliability in the aggregation process and allows the global model to benefit from the insights of the most trusted nodes. In the training phase, the combination of IPFS and PGP encryption provides: a) secure and decentralized storage, b) mitigation of single points of failure, making the system reliable, and c) resilience against man-in-the-middle attacks. The system is implemented in Python, with Foundry used for smart contract development. The global model is tested against data poisoning by flipping labels and by introducing malicious nodes. Results are found to be similar to those of Flower.
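A minimal sketch of the consensus-or-trust-score aggregation described in the abstract, assuming flattened numpy updates; the function name, the `alpha` hyperparameter, and the quorum threshold are hypothetical stand-ins, not the paper's published code.

```python
import numpy as np

def aggregate(updates, approvals, trust_scores, alpha=0.5, quorum=2 / 3):
    """Combine client updates via validator consensus or trust scores.

    updates      -- list of flattened model updates (1-D numpy arrays)
    approvals    -- approvals[i]: fraction of validators accepting update i
    trust_scores -- per-node trust recorded on chain (higher = more trusted)
    alpha        -- hypothetical hyperparameter: >= 0.5 prefers consensus,
                    otherwise fall back to trust-weighted averaging
    """
    approvals = np.asarray(approvals, dtype=float)
    trust = np.asarray(trust_scores, dtype=float)

    accepted = approvals >= quorum
    if alpha >= 0.5 and accepted.any():
        # Consensus path: plain average of the updates validators accepted.
        return np.mean([u for u, ok in zip(updates, accepted) if ok], axis=0)

    # No usable consensus: weight every update by its node's trust score.
    weights = trust / trust.sum()
    return np.sum([w * u for w, u in zip(weights, updates)], axis=0)

# Toy usage: three clients, the third poisoned and voted down.
ups = [np.ones(4), np.ones(4), 10 * np.ones(4)]
print(aggregate(ups, approvals=[1.0, 1.0, 0.1], trust_scores=[0.9, 0.8, 0.1]))
```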
Related papers
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
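The frequency-domain idea lends itself to a short sketch. The following is a loose illustration, not FreqFed's published algorithm: it fingerprints each flattened update by its low-frequency DCT coefficients and drops updates whose fingerprint strays from the median; the `keep` size and the outlier rule are assumptions.

```python
import numpy as np
from scipy.fft import dct

def frequency_filter(updates, keep=64):
    """Compare updates by low-frequency DCT components; drop outliers."""
    # Low-frequency DCT fingerprint of each flattened update.
    fps = np.stack([dct(u, norm="ortho")[:keep] for u in updates])
    centre = np.median(fps, axis=0)
    dists = np.linalg.norm(fps - centre, axis=1)
    # Keep updates whose fingerprint stays close to the median fingerprint.
    ok = dists <= 2 * np.median(dists)
    return np.mean([u for u, k in zip(updates, ok) if k], axis=0)

rng = np.random.default_rng(0)
benign = [rng.normal(0, 0.1, 256) for _ in range(9)]
poisoned = [rng.normal(5, 0.1, 256)]       # far off in frequency space
agg = frequency_filter(benign + poisoned)  # poisoned update is excluded
```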
- Sentinel: An Aggregation Function to Secure Decentralized Federated Learning [9.046402244232343]
Decentralized Federated Learning (DFL) emerges as an innovative paradigm for training collaborative models, addressing the single-point-of-failure limitation.
Existing defense mechanisms have been designed for centralized FL and they do not adequately exploit the particularities of DFL.
This work introduces Sentinel, a defense strategy to counteract poisoning attacks in DFL.
arXiv Detail & Related papers (2023-10-12T07:45:18Z)
- Defending Against Poisoning Attacks in Federated Learning with Blockchain [12.840821573271999]
We propose a secure and reliable federated learning system based on blockchain and distributed ledger technology.
Our system incorporates a peer-to-peer voting mechanism and a reward-and-slash mechanism, which are powered by on-chain smart contracts, to detect and deter malicious behaviors.
arXiv Detail & Related papers (2023-07-02T11:23:33Z)
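The voting and reward-and-slash logic can be modelled off-chain in a few lines. This is a toy Python model of the general idea, not the paper's actual smart contract; the reward and slash constants and the simple majority rule are assumptions.

```python
class VotingLedger:
    """Toy off-chain model of on-chain voting with reward-and-slash."""

    def __init__(self, reward=1.0, slash=0.5):
        self.stakes = {}                 # node id -> staked balance
        self.reward, self.slash = reward, slash

    def register(self, node, stake):
        self.stakes[node] = stake

    def settle(self, votes):
        """votes: {node: bool} on one proposed update. The majority opinion
        wins; majority voters are rewarded, dissenters are slashed."""
        majority = sum(votes.values()) * 2 > len(votes)
        for node, vote in votes.items():
            if vote == majority:
                self.stakes[node] += self.reward
            else:
                self.stakes[node] = max(0.0, self.stakes[node] - self.slash)
        return majority

ledger = VotingLedger()
for n in "abc":
    ledger.register(n, 10.0)
ledger.settle({"a": True, "b": True, "c": False})  # c dissents and is slashed
```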
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger-reverse-engineering-based defense and show that our method achieves improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
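Trigger reverse engineering is the generic building block here; the PyTorch sketch below shows the Neural-Cleanse-style optimization such defenses build on, not FLIP's exact procedure. The loss weight, step count, and sigmoid parameterization are illustrative assumptions, and `model` is any classifier returning logits.

```python
import torch
import torch.nn.functional as F

def reverse_trigger(model, images, target, steps=200, lam=1e-3, lr=0.1):
    """Synthesize a small mask and pattern that push clean images
    (a float tensor of shape (N, C, H, W)) toward the target label."""
    mask = torch.zeros(images.shape[-2:], requires_grad=True)    # (H, W)
    pattern = torch.zeros(images.shape[1:], requires_grad=True)  # (C, H, W)
    labels = torch.full((len(images),), target, dtype=torch.long)
    opt = torch.optim.Adam([mask, pattern], lr=lr)
    for _ in range(steps):
        m = torch.sigmoid(mask)               # keep mask values in [0, 1]
        stamped = (1 - m) * images + m * torch.sigmoid(pattern)
        # L1 penalty on the mask prefers small, sparse triggers.
        loss = F.cross_entropy(model(stamped), labels) + lam * m.sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(mask).detach(), torch.sigmoid(pattern).detach()
```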
- FLCert: Provably Secure Federated Learning against Poisoning Attacks [67.8846134295194]
We propose FLCert, an ensemble federated learning framework that is provably secure against poisoning attacks.
Our experiments show that the label predicted by our FLCert for a test input is provably unaffected by a bounded number of malicious clients.
arXiv Detail & Related papers (2022-10-02T17:50:04Z)
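The ensemble-voting idea behind such certificates can be sketched briefly. Below is a rough illustration, not FLCert's exact certified bound: clients are split into groups, each group trains its own global model, the groups vote on the label, and the margin between the top two vote counts indicates how many corrupted groups the prediction can tolerate.

```python
import numpy as np

def flcert_predict(group_models, x):
    """Majority vote over per-group models plus a simple margin-based
    tolerance (a simplification of the paper's certificate)."""
    counts = np.bincount([m(x) for m in group_models], minlength=2)
    order = np.sort(counts)[::-1]
    label = int(np.argmax(counts))
    # Bounded malicious clients corrupt only the groups they fall into,
    # so a large vote margin protects the predicted label.
    tolerance = int(order[0] - order[1]) // 2
    return label, tolerance

models = [lambda x: 0, lambda x: 0, lambda x: 1]  # toy per-group models
print(flcert_predict(models, x=None))             # (0, 0): one-vote margin
```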
- BEAS: Blockchain Enabled Asynchronous & Secure Federated Machine Learning [0.0]
We present BEAS, the first blockchain-based framework for N-party Federated Learning.
It provides strict privacy guarantees for training data using gradient pruning.
Anomaly detection protocols are used to minimize the risk of data-poisoning attacks.
We also define a novel protocol to prevent premature convergence in heterogeneous learning environments.
arXiv Detail & Related papers (2022-02-06T17:11:14Z)
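Gradient pruning as a privacy knob is easy to sketch. The rule below (share only the largest-magnitude fraction of the gradient and zero out the rest) is a common formulation and an assumption here, not necessarily BEAS's exact pruning criterion.

```python
import numpy as np

def prune_gradient(grad, keep_ratio=0.1):
    """Zero all but the largest-magnitude fraction of the gradient,
    limiting what a shared update can leak about local data."""
    flat = grad.ravel().copy()
    k = max(1, int(keep_ratio * flat.size))
    cutoff = np.partition(np.abs(flat), -k)[-k]   # k-th largest magnitude
    flat[np.abs(flat) < cutoff] = 0.0
    return flat.reshape(grad.shape)

g = np.array([0.05, -0.9, 0.02, 0.4])
print(prune_gradient(g, keep_ratio=0.5))  # -> [ 0.  -0.9  0.   0.4]
```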
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
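Secure aggregation is commonly built from pairwise additive masks; the sketch below shows that textbook construction, not RoFL's actual protocol. Each pair of clients shares a random mask that one adds and the other subtracts, so individual updates look random while the masks cancel in the sum.

```python
import numpy as np

def masked_updates(updates, seed=0):
    """Return per-client updates hidden behind pairwise additive masks."""
    rng = np.random.default_rng(seed)
    n = len(updates)
    masked = [u.astype(float).copy() for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.normal(size=updates[0].shape)  # shared pairwise mask
            masked[i] += m
            masked[j] -= m
    return masked

ups = [np.ones(3), 2 * np.ones(3), 3 * np.ones(3)]
# Each masked update looks random, but the masks cancel in the sum.
assert np.allclose(sum(masked_updates(ups)), sum(ups))
```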
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks [59.61565692464579]
This paper provides the first general framework, Certifiably Robust Federated Learning (CRFL), to train certifiably robust FL models against backdoors.
Our method exploits clipping and smoothing on model parameters to control the global model smoothness, which yields a sample-wise robustness certification on backdoors with limited magnitude.
arXiv Detail & Related papers (2021-06-15T16:50:54Z)
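The two operations CRFL names, clipping and smoothing, are simple to illustrate. The constants below are placeholders, not the paper's certified settings: bounding the parameter norm caps any single backdoor's influence, and the Gaussian noise is what enables a smoothing-based certificate.

```python
import numpy as np

def clip_and_smooth(params, clip_norm=1.0, sigma=0.01, rng=None):
    """Clip the global parameter vector to a norm bound, then add
    Gaussian noise for parameter smoothing."""
    if rng is None:
        rng = np.random.default_rng(0)
    norm = np.linalg.norm(params)
    if norm > clip_norm:
        params = params * (clip_norm / norm)   # project onto the norm ball
    return params + rng.normal(0.0, sigma, size=params.shape)

print(clip_and_smooth(np.array([3.0, 4.0])))   # clipped to unit norm + noise
```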
- Untargeted Poisoning Attack Detection in Federated Learning via Behavior Attestation [7.979659145328856]
Federated Learning (FL) is a paradigm in Machine Learning (ML) that addresses issues of data privacy, security, access rights, and access to heterogeneous information.
Despite its advantages, there is an increased potential for cyberattacks on FL-based ML techniques that can undermine the benefits.
We propose attestedFL, a defense mechanism that monitors the training of individual nodes through state persistence in order to detect a malicious worker.
arXiv Detail & Related papers (2021-01-24T20:52:55Z)
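Monitoring a worker's training trajectory can be sketched with a small stateful check. The cosine-similarity rule and threshold below are illustrative assumptions, not attestedFL's exact attestation logic: a worker whose successive updates stop pointing in a consistent direction is flagged.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

class WorkerMonitor:
    """Persist each worker's last update and attest that its training
    trajectory stays directionally consistent across rounds."""

    def __init__(self, threshold=0.0):
        self.history = {}
        self.threshold = threshold

    def attest(self, worker, update):
        prev = self.history.get(worker)
        self.history[worker] = update
        if prev is None:
            return True                       # nothing to compare yet
        return cosine(prev, update) >= self.threshold

mon = WorkerMonitor()
mon.attest("w1", np.array([1.0, 0.0]))
print(mon.attest("w1", np.array([-1.0, 0.0])))  # reversed direction: False
```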
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
- BlockFLow: An Accountable and Privacy-Preserving Solution for Federated Learning [2.0625936401496237]
BlockFLow is an accountable federated learning system that is fully decentralized and privacy-preserving.
Its primary goal is to reward agents proportional to the quality of their contribution while protecting the privacy of the underlying datasets and being resilient to malicious adversaries.
arXiv Detail & Related papers (2020-07-08T02:24:26Z)
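The incentive rule, rewarding agents in proportion to contribution quality, admits a one-function sketch. The proportional split below is a guess at the general idea, not BlockFLow's on-chain mechanism; `scores` stands in for whatever peer evaluation of model quality the system produces.

```python
import numpy as np

def quality_rewards(scores, pool=100.0):
    """Split a reward pool in proportion to non-negative quality scores."""
    s = np.clip(np.asarray(scores, dtype=float), 0.0, None)
    return pool * s / s.sum() if s.sum() > 0 else np.zeros_like(s)

print(quality_rewards([0.9, 0.7, 0.1]))  # higher-quality agents earn more
```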