Blockchain-Empowered Cyber-Secure Federated Learning for Trustworthy Edge Computing
- URL: http://arxiv.org/abs/2412.20674v1
- Date: Mon, 30 Dec 2024 02:58:18 GMT
- Title: Blockchain-Empowered Cyber-Secure Federated Learning for Trustworthy Edge Computing
- Authors: Ervin Moore, Ahmed Imteaj, Md Zarif Hossain, Shabnam Rezapour, M. Hadi Amini
- Abstract summary: Federated Learning (FL) is a privacy-preserving distributed machine learning scheme.
The FL paradigm may become vulnerable to an active attack by network participants, known as a poisoning attack.
This paper presents a cross-device FL model that ensures trustworthiness, fairness, and authenticity in the underlying FL training process.
- Score: 0.36700088931938835
- License:
- Abstract: Federated Learning (FL) is a privacy-preserving distributed machine learning scheme in which each participant's data remains on the participating device and only the local model, generated using local computational power, is transmitted across the network. However, the distributed computational nature of FL creates the need for a mechanism that can remotely trigger any network agent, track its activities, and prevent threats to the overall process posed by malicious participants. In particular, the FL paradigm may become vulnerable to an active attack by network participants known as a poisoning attack. In such an attack, a malicious participant poses as a benign agent and degrades the global model quality by uploading an obfuscated, poisoned local model update to the server. This paper presents a cross-device FL model that ensures trustworthiness, fairness, and authenticity in the underlying FL training process. We establish trustworthiness by constructing a reputation-based trust model built on each agent's contribution toward model convergence. We ensure fairness by identifying and removing malicious agents from the training process through an outlier detection technique. We establish authenticity by generating a token for each participating device through a distributed sensing mechanism and storing that unique token in a blockchain smart contract. Finally, we record the trust scores of all agents on the blockchain and validate their reputations using consensus mechanisms that take the computational task into account.
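As a rough illustration of how the reputation and outlier-filtering steps described in the abstract could fit together, the sketch below (plain NumPy; the function name, thresholds, and reputation-update rule are assumptions, not taken from the paper) drops suspicious local updates and weights the rest by trust score:

```python
import numpy as np

def filter_and_aggregate(updates, reputations, z_thresh=2.5):
    """Drop outlier updates and weight the rest by agent reputation.

    updates:      dict agent_id -> flattened local model update (np.ndarray)
    reputations:  dict agent_id -> current trust score in [0, 1]
    z_thresh:     modified z-score cutoff for flagging outliers (assumed value)
    """
    ids = list(updates)
    vecs = np.stack([updates[i] for i in ids])

    # Outlier detection: distance of each update from the coordinate-wise median.
    dists = np.linalg.norm(vecs - np.median(vecs, axis=0), axis=1)
    mad = np.median(np.abs(dists - np.median(dists))) + 1e-12
    z = 0.6745 * (dists - np.median(dists)) / mad
    benign = [i for i, zi in zip(ids, z) if zi < z_thresh]

    # Reputation update: reward retained agents, penalize flagged ones.
    for i in ids:
        delta = 0.05 if i in benign else -0.20
        reputations[i] = float(np.clip(reputations[i] + delta, 0.0, 1.0))

    # Reputation-weighted aggregation over the surviving agents.
    w = np.array([reputations[i] for i in benign])
    w = w / w.sum()
    global_update = np.average(np.stack([updates[i] for i in benign]),
                               axis=0, weights=w)
    return global_update, benign
```

In the paper's design, the resulting trust scores would additionally be written to a blockchain smart contract keyed by each device's unique token; that on-chain step is omitted from this sketch.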
Related papers
- Enhancing Trust and Privacy in Distributed Networks: A Comprehensive Survey on Blockchain-based Federated Learning [51.13534069758711]
Decentralized approaches like blockchain offer a compelling solution by implementing a consensus mechanism among multiple entities.
Federated Learning (FL) enables participants to collaboratively train models while safeguarding data privacy.
This paper investigates the synergy between blockchain's security features and FL's privacy-preserving model training capabilities.
arXiv Detail & Related papers (2024-03-28T07:08:26Z) - Securing Federated Learning with Control-Flow Attestation: A Novel Framework for Enhanced Integrity and Resilience against Adversarial Attacks [2.28438857884398]
Federated Learning (FL) as a distributed machine learning paradigm has introduced new cybersecurity challenges.
This study proposes an innovative security framework inspired by Control-Flow Attestation (CFA) mechanisms, traditionally used in cybersecurity.
We authenticate and verify the integrity of model updates across the network, effectively mitigating risks associated with model poisoning and adversarial interference.
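The summary above gives no implementation details; as a hedged sketch of the update-integrity check such a framework relies on (only the signature-verification part, not the CFA mechanism itself), the snippet below verifies an Ed25519 signature over a digest of each serialized update using the `cryptography` library before the server accepts it:

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)

def sign_update(private_key: Ed25519PrivateKey, update_bytes: bytes) -> bytes:
    # Client signs a digest of its serialized model update.
    return private_key.sign(hashlib.sha256(update_bytes).digest())

def verify_update(public_key: Ed25519PublicKey, update_bytes: bytes,
                  signature: bytes) -> bool:
    # Server accepts the update only if the signature over its digest checks out.
    try:
        public_key.verify(signature, hashlib.sha256(update_bytes).digest())
        return True
    except InvalidSignature:
        return False
```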
arXiv Detail & Related papers (2024-03-15T04:03:34Z) - FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
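A minimal sketch of the frequency-domain idea, assuming NumPy/SciPy; the distance-to-median filter is a simplified stand-in for FreqFed's actual selection rule, and `n_low` is an assumed parameter:

```python
import numpy as np
from scipy.fft import dct

def frequency_filter_aggregate(updates, n_low=64):
    """Illustrative frequency-domain filtering in the spirit of FreqFed.

    updates: list of flattened model updates (np.ndarray, equal length >= n_low)
    n_low:   number of low-frequency DCT coefficients to compare (assumed)
    """
    # Transform each update into the frequency domain and keep its
    # low-frequency components, which carry most of the benign training signal.
    low = np.stack([dct(u, norm='ortho')[:n_low] for u in updates])

    # Simplified stand-in for the paper's selection step: keep updates whose
    # low-frequency fingerprint stays close to the coordinate-wise median.
    center = np.median(low, axis=0)
    d = np.linalg.norm(low - center, axis=1)
    keep = d <= np.median(d) + 2.0 * d.std()

    # Average the accepted updates back in the parameter domain.
    return np.mean(np.stack([u for u, k in zip(updates, keep) if k]), axis=0)
```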
arXiv Detail & Related papers (2023-12-07T16:56:24Z) - Secure Decentralized Learning with Blockchain [13.795131629462798]
Federated Learning (FL) is a well-known paradigm of distributed machine learning on mobile and IoT devices.
To avoid the single-point-of-failure problem in FL, decentralized federated learning (DFL) has been proposed, which uses peer-to-peer communication for model aggregation.
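The blurb does not specify the aggregation protocol; the generic peer-to-peer (gossip) averaging round below, shown purely for illustration and not taken from the cited paper, conveys how aggregation can proceed without a central server:

```python
import numpy as np

def gossip_round(models, adjacency):
    """One synchronous gossip-averaging round for decentralized FL.

    models:    dict node_id -> flattened model parameters (np.ndarray)
    adjacency: dict node_id -> iterable of neighbor node_ids
    """
    new_models = {}
    for node, params in models.items():
        neighbors = list(adjacency[node])
        # Each node averages its own model with those of its peers,
        # removing the need for a central aggregation server.
        stacked = np.stack([params] + [models[n] for n in neighbors])
        new_models[node] = stacked.mean(axis=0)
    return new_models
```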
arXiv Detail & Related papers (2023-10-10T23:45:17Z) - FLEDGE: Ledger-based Federated Learning Resilient to Inference and Backdoor Attacks [8.866045560761528]
Federated learning (FL) is a distributed learning process that allows multiple parties (or clients) to collaboratively train a machine learning model without having them share their private data.
Recent research has demonstrated the effectiveness of inference and poisoning attacks on FL.
We present a ledger-based FL framework, FLEDGE, that holds parties accountable for their behavior while achieving reasonable efficiency in mitigating inference and poisoning attacks.
arXiv Detail & Related papers (2023-10-03T14:55:30Z) - Defending Against Poisoning Attacks in Federated Learning with Blockchain [12.840821573271999]
We propose a secure and reliable federated learning system based on blockchain and distributed ledger technology.
Our system incorporates a peer-to-peer voting mechanism and a reward-and-slash mechanism, which are powered by on-chain smart contracts, to detect and deter malicious behaviors.
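As a toy, off-chain illustration of a reward-and-slash settlement for one voting round (simple majority rule and placeholder reward/slash amounts; on-chain, this logic would live in a smart contract as the paper describes):

```python
def tally_votes_and_settle(votes, stakes, reward=1.0, slash_fraction=0.5):
    """Toy reward-and-slash settlement for one peer-voting round.

    votes:  dict voter_id -> bool (True = the inspected update looks benign)
    stakes: dict participant_id -> staked balance
    """
    approvals = sum(votes.values())
    accepted = approvals * 2 > len(votes)  # simple majority

    for voter, vote in votes.items():
        if vote == accepted:
            # Voters on the majority side earn a reward.
            stakes[voter] += reward
        else:
            # Minority voters lose part of their stake (the "slash").
            stakes[voter] -= slash_fraction * stakes[voter]
    return accepted, stakes
```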
arXiv Detail & Related papers (2023-07-02T11:23:33Z) - RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z) - CRFL: Certifiably Robust Federated Learning against Backdoor Attacks [59.61565692464579]
This paper provides the first general framework, Certifiably Robust Federated Learning (CRFL), to train certifiably robust FL models against backdoors.
Our method exploits clipping and smoothing on model parameters to control the global model smoothness, which yields a sample-wise robustness certification on backdoors with limited magnitude.
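A minimal sketch of the clipping-and-smoothing step on the global parameters, with placeholder constants rather than values from the CRFL paper:

```python
import numpy as np

def clip_and_smooth(global_params, clip_norm=1.0, sigma=0.01, rng=None):
    """Clip the global parameters to a norm ball and add Gaussian noise.

    Mirrors the clipping-and-smoothing idea described above; clip_norm and
    sigma are placeholders, not values from the CRFL paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(global_params)
    clipped = global_params * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, sigma, size=clipped.shape)
```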
arXiv Detail & Related papers (2021-06-15T16:50:54Z) - Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low-quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z) - Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely blockchain-assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
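A schematic, non-authoritative sketch of one such round, with a random draw standing in for the block-generation competition and a plain mean standing in for the aggregation rule:

```python
import random

import numpy as np

def blade_fl_round(local_models, train_fn):
    """Schematic round in the style of the BLADE-FL description above.

    local_models: dict client_id -> trained model parameters (np.ndarray)
    train_fn:     callable(client_id, params) -> updated params
    """
    # 1. Broadcast: every client is assumed to have received all local models.
    received = dict(local_models)

    # 2. Block generation: one client "wins" the competition; here the winner
    #    is drawn at random as a stand-in for mining/consensus.
    winner = random.choice(list(received))
    block = {"producer": winner, "models": received}

    # 3. Aggregation from the block, then local training for the next round.
    aggregated = np.mean(np.stack(list(block["models"].values())), axis=0)
    return {cid: train_fn(cid, aggregated.copy()) for cid in received}
```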
arXiv Detail & Related papers (2021-01-18T07:19:08Z) - A Secure Federated Learning Framework for 5G Networks [44.40119258491145]
Federated Learning (FL) has been proposed as an emerging paradigm to build machine learning models using distributed training datasets.
There are two critical security threats: poisoning and membership inference attacks.
We propose a blockchain-based secure FL framework that creates smart contracts and prevents malicious or unreliable participants from taking part in FL.
arXiv Detail & Related papers (2020-05-12T13:27:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.