Decentralized and Robust Privacy-Preserving Model Using Blockchain-Enabled Federated Deep Learning in Intelligent Enterprises
- URL: http://arxiv.org/abs/2502.17485v1
- Date: Tue, 18 Feb 2025 15:17:25 GMT
- Title: Decentralized and Robust Privacy-Preserving Model Using Blockchain-Enabled Federated Deep Learning in Intelligent Enterprises
- Authors: Reza Fotohi, Fereidoon Shams Aliee, Bahar Farahani
- Abstract summary: We propose FedAnil, a secure, blockchain-enabled Federated Deep Learning model. It improves the decentralization, performance, and tamper-proofness of enterprise models. Extensive experiments were conducted on four real-world datasets: Sent140, FashionMNIST, FEMNIST, and CIFAR10.
- Score: 0.5461938536945723
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In Federated Deep Learning (FDL), multiple local enterprises jointly train a model: each submits its local updates to a central server, which aggregates them into a global model. However, the trained models usually perform worse than centrally trained ones, especially when the training data distribution is non-independent and identically distributed (non-IID). Non-IID data harms the accuracy and performance of the model. Additionally, because of the centralized architecture of federated learning (FL) and the untrustworthiness of participating enterprises, traditional FL solutions are vulnerable to security and privacy attacks. To tackle these issues, we propose FedAnil, a secure, blockchain-enabled Federated Deep Learning model that improves the decentralization, performance, and tamper-proofness of enterprise models, incorporating two main phases. The first phase addresses the non-IID challenge (label and feature distribution skew). The second phase addresses security and privacy concerns against poisoning and inference attacks through three steps. Extensive experiments were conducted on four real-world datasets (Sent140, FashionMNIST, FEMNIST, and CIFAR10) to evaluate FedAnil's robustness and performance. The simulation results demonstrate that FedAnil satisfies FDL privacy-preserving requirements. In terms of convergence analysis, the model parameters obtained with FedAnil converge to the optimum. In addition, it achieves higher accuracy (by more than 11%, 15%, and 24%) and lower computation overhead (by 8%, 10%, and 15%) compared with the baseline approaches ShieldFL, RVPFL, and RFA.
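The paper itself provides no code. The following minimal sketch only illustrates the general pattern of tamper-evident logging of client updates before server-side aggregation, not FedAnil's actual protocol: plain FedAvg weighting stands in for its aggregation, and a SHA-256 hash chain stands in for the blockchain; all names are hypothetical.

```python
# Minimal sketch (not FedAnil itself): clients submit updates that are
# hash-chained for tamper evidence before FedAvg-style aggregation.
import hashlib
import numpy as np

def chain_update(prev_hash: str, update: np.ndarray) -> str:
    # Record each client update on an append-only hash chain.
    h = hashlib.sha256()
    h.update(prev_hash.encode())
    h.update(update.tobytes())
    return h.hexdigest()

def aggregate(updates, sizes):
    # FedAvg: weight each client's update by its local dataset size.
    weights = np.asarray(sizes, dtype=float) / sum(sizes)
    return sum(w * u for w, u in zip(weights, updates))

rng = np.random.default_rng(0)
updates = [rng.normal(size=4) for _ in range(3)]   # stand-in local updates
sizes = [100, 50, 150]                             # local dataset sizes

prev = "genesis"
for u in updates:
    prev = chain_update(prev, u)                   # tamper-evident log

global_update = aggregate(updates, sizes)
print("chain head:", prev[:16], "| global update:", global_update)
```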
Related papers
- Scaling Decentralized Learning with FLock [14.614054170542966]
This paper introduces FLock, a decentralized framework for fine-tuning large language models (LLMs). Integrating a blockchain-based trust layer with economic incentives, FLock replaces the central aggregator with a secure, auditable protocol for cooperation among untrusted parties. Experiments show that FLock defends against backdoor poisoning attacks that compromise standard FL.
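As a loose illustration of replacing a central aggregator with a vote among untrusted, economically staked parties, here is a toy stake-weighted acceptance rule; the function and mechanics are hypothetical, not FLock's actual protocol.

```python
# Hypothetical stake-weighted validation of a peer's model update:
# acceptance is a stake-weighted vote, so no single party decides alone.
import numpy as np

def stake_weighted_accept(scores, stakes, threshold=0.5):
    # Each validator scores the update in [0, 1]; votes count by stake.
    scores, stakes = np.asarray(scores, float), np.asarray(stakes, float)
    return float(scores @ stakes / stakes.sum()) >= threshold

# A large dissenting stake outweighs two approving small stakes.
print(stake_weighted_accept([0.9, 0.8, 0.1], [10, 10, 50]))  # False
```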
arXiv Detail & Related papers (2025-07-21T08:01:43Z)
- A Lightweight and Secure Deep Learning Model for Privacy-Preserving Federated Learning in Intelligent Enterprises [0.5461938536945723]
In intelligent enterprises, machine learning-based models are adopted to extract insights from data.
FedAnil+ is a novel, lightweight, and secure Federated Deep Learning model that includes three main phases.
Experimental results validate that FedAnil+ is secure against inference and poisoning attacks while achieving better accuracy.
arXiv Detail & Related papers (2025-03-03T19:51:13Z)
- Maximizing Uncertainty for Federated learning via Bayesian Optimisation-based Model Poisoning [19.416193229599852]
Malicious users can craft poisoned model parameters to compromise a model's predictive and generative capabilities. We propose a novel model poisoning attack method named Delphi, which aims to maximise the uncertainty of the global model's output. Numerical results demonstrate that Delphi-BO induces more uncertainty than Delphi-LSTR, highlighting the vulnerability of FL systems to model poisoning attacks.
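A rough sketch of uncertainty-maximising poisoning: random search stands in for the paper's Bayesian optimisation, and a toy softmax classifier stands in for the global model (all names are illustrative).

```python
# Illustrative attacker loop: perturb local parameters to maximise the
# predictive entropy of a toy softmax model on a held-out probe set.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 5))                 # probe inputs

def entropy(W):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())

W = rng.normal(size=(5, 3))                  # honest local parameters
best, best_h = W, entropy(W)
for _ in range(200):                         # random search (BO stand-in)
    cand = best + 0.1 * rng.normal(size=W.shape)
    h = entropy(cand)
    if h > best_h:                           # keep whatever raises entropy
        best, best_h = cand, h
print(f"entropy before {entropy(W):.3f}, after {best_h:.3f}")
```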
arXiv Detail & Related papers (2025-01-14T10:46:41Z)
- Fed-Credit: Robust Federated Learning with Credibility Management [18.349127735378048]
Federated Learning (FL) is an emerging machine learning approach enabling model training on decentralized devices or data sources.
We propose Fed-Credit, a robust FL approach based on a credibility management scheme.
The results exhibit superior accuracy and resilience against adversarial attacks, all while maintaining comparatively low computational complexity.
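A minimal sketch of credibility-weighted aggregation, assuming a simple distance-to-median credibility update rather than Fed-Credit's actual scheme:

```python
# Illustrative credibility-managed aggregation: clients whose updates sit
# far from the coordinate-wise median lose credit, and weights follow it.
import numpy as np

def credit_aggregate(updates, credits, decay=0.5):
    U = np.stack(updates)
    med = np.median(U, axis=0)
    dist = np.linalg.norm(U - med, axis=1)
    # Shrink each client's credit in proportion to distance from consensus.
    credits = credits * np.exp(-decay * dist / (dist.mean() + 1e-12))
    w = credits / credits.sum()
    return (w[:, None] * U).sum(axis=0), credits

rng = np.random.default_rng(2)
honest = [rng.normal(0.0, 0.1, size=4) for _ in range(4)]
attacker = [np.full(4, 5.0)]                 # obviously poisoned update
agg, credits = credit_aggregate(honest + attacker, np.ones(5))
print("credits:", np.round(credits, 3))      # attacker's credit drops sharply
```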
arXiv Detail & Related papers (2024-05-20T03:35:13Z)
- Towards Robust Federated Learning via Logits Calibration on Non-IID Data [49.286558007937856]
Federated learning (FL) is a privacy-preserving distributed management framework based on collaborative model training of distributed devices in edge networks.
Recent studies have shown that FL is vulnerable to adversarial examples, leading to a significant drop in its performance.
In this work, we adopt the adversarial training (AT) framework to improve the robustness of FL models against adversarial example (AE) attacks.
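As a generic illustration of local adversarial training, the sketch below applies FGSM-style perturbations to a toy logistic regression during one client's training; the paper's logits-calibration step is not reproduced here.

```python
# Local adversarial training sketch: generate FGSM perturbations of the
# inputs, then take a gradient step on the adversarial batch.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(128, 10))
y = (X[:, 0] > 0).astype(float)              # toy binary labels
w, eps, lr = np.zeros(10), 0.1, 0.5

def grads(w, X, y):
    p = 1 / (1 + np.exp(-X @ w))
    gw = X.T @ (p - y) / len(y)              # gradient w.r.t. weights
    gx = np.outer(p - y, w) / len(y)         # gradient w.r.t. inputs
    return gw, gx

for _ in range(100):
    _, gx = grads(w, X, y)
    X_adv = X + eps * np.sign(gx)            # FGSM perturbation
    gw, _ = grads(w, X_adv, y)               # train on adversarial examples
    w -= lr * gw
print("robust train acc:", ((X @ w > 0) == y).mean())
```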
arXiv Detail & Related papers (2024-03-05T09:18:29Z)
- The Implications of Decentralization in Blockchained Federated Learning: Evaluating the Impact of Model Staleness and Inconsistencies [2.6391879803618115]
We study the practical implications of outsourcing the orchestration of federated learning to a democratic setting such as a blockchain.
Using simulation, we evaluate the blockchained FL operation by applying two different ML models on the well-known MNIST and CIFAR-10 datasets.
Our results show the high impact of model inconsistencies on the accuracy of the models (up to a 35% decrease in prediction accuracy).
arXiv Detail & Related papers (2023-10-11T13:18:23Z)
- Secure Decentralized Learning with Blockchain [13.795131629462798]
Federated Learning (FL) is a well-known paradigm of distributed machine learning on mobile and IoT devices.
To avoid the single point of failure problem in FL, decentralized learning (DFL) has been proposed to use peer-to-peer communication for model aggregation.
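A minimal sketch of the generic mechanism behind DFL: serverless, peer-to-peer (gossip) model averaging on a ring topology. The topology and names are illustrative, not this paper's specific design.

```python
# Gossip averaging: each node averages its model with its ring neighbours
# every round, so no central server is needed.
import numpy as np

def gossip_round(models, neighbours):
    # neighbours[i] lists the peers node i averages with (including itself).
    return [np.mean([models[j] for j in neighbours[i]], axis=0)
            for i in range(len(models))]

rng = np.random.default_rng(4)
models = [rng.normal(size=3) for _ in range(4)]
ring = {i: [(i - 1) % 4, i, (i + 1) % 4] for i in range(4)}
for _ in range(10):
    models = gossip_round(models, ring)
print(np.round(np.stack(models), 3))         # nodes converge to consensus
```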
arXiv Detail & Related papers (2023-10-10T23:45:17Z)
- Towards More Suitable Personalization in Federated Learning via Decentralized Partial Model Training [67.67045085186797]
Almost all existing systems have to face large communication burdens if the central FL server fails.
It personalizes the "right" components in deep models by alternately updating the shared and personal parameters.
To further promote the shared parameters aggregation process, we propose DFed, integrating local Sharpness Minimization.
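An illustrative sketch of alternating updates of shared and personal parameters within one client's local training; this is a generic version of the pattern, not the paper's exact procedure.

```python
# Alternating partial-model training on a toy linear regression:
# step 1 updates only the personal half of the weights, step 2 the shared half.
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(64, 6))
y = X @ rng.normal(size=6) + rng.normal(scale=0.1, size=64)
shared, personal = np.zeros(3), np.zeros(3)  # split of the weight vector

def grad(w_full):
    return X.T @ (X @ w_full - y) / len(y)

lr = 0.05
for _ in range(200):
    g = grad(np.concatenate([shared, personal]))
    personal -= lr * g[3:]                   # personal parameters only
    g = grad(np.concatenate([shared, personal]))
    shared -= lr * g[:3]                     # shared parameters only
w = np.concatenate([shared, personal])
print("relative fit error:", np.linalg.norm(X @ w - y) / np.linalg.norm(y))
```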
arXiv Detail & Related papers (2023-05-24T13:52:18Z)
- FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning [87.08902493524556]
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape from original data.
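A toy version of distribution matching: fit a small synthetic set whose feature mean matches the client's real data, so the synthetic set can be communicated in place of raw data. FedDM's actual loss-landscape matching is richer than this sketch.

```python
# Fit 10 synthetic samples so their feature mean matches the real data's,
# by gradient descent on || mean(syn) - mean(real) ||^2.
import numpy as np

rng = np.random.default_rng(6)
real = rng.normal(loc=2.0, scale=1.5, size=(500, 4))
syn = rng.normal(size=(10, 4))               # small synthetic set

for _ in range(500):
    # Gradient w.r.t. each synthetic row is identical for this objective.
    g = (syn.mean(axis=0) - real.mean(axis=0)) * (2 / len(syn))
    syn -= 0.5 * g                           # broadcast step to all rows
print("real mean:", real.mean(axis=0).round(2))
print("syn mean :", syn.mean(axis=0).round(2))
```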
arXiv Detail & Related papers (2022-07-20T04:55:18Z)
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraint.
We propose a data-free knowledge distillation method to fine-tune the global model on the server (FedFTG).
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
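A minimal sketch of server-side data-free distillation: random probe inputs stand in for the paper's trained generator, and linear client models serve as teachers; every name here is hypothetical.

```python
# Distill the averaged client (teacher) logits into the global (student)
# model using synthetic probe inputs instead of real data.
import numpy as np

rng = np.random.default_rng(7)
clients = [rng.normal(size=(5, 3)) for _ in range(4)]  # client "models"
student = np.zeros((5, 3))

for _ in range(300):
    Z = rng.normal(size=(32, 5))             # synthetic probe batch
    teacher = np.mean([Z @ W for W in clients], axis=0)
    err = Z @ student - teacher
    student -= 0.05 * Z.T @ err / len(Z)     # MSE distillation step
print("distillation gap:", np.linalg.norm(student - np.mean(clients, axis=0)))
```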
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
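As a generic illustration of forgetting-alleviated local training, the sketch below uses a proximal penalty toward the received global model; FedReg's actual pseudo-data mechanism is not reproduced here.

```python
# Local training with a proximal term that discourages drifting away from
# the global weights, a simple guard against forgetting global knowledge.
import numpy as np

rng = np.random.default_rng(8)
X = rng.normal(size=(64, 4))
y = X @ np.ones(4)                           # toy local task
w_global = rng.normal(size=4)                # model received from the server
w, lam, lr = w_global.copy(), 1.0, 0.05

for _ in range(100):
    g = X.T @ (X @ w - y) / len(y) + lam * (w - w_global)
    w -= lr * g                              # local step with proximal term
print("drift from global:", np.linalg.norm(w - w_global).round(3))
```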
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain-assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
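A toy sketch of one such round, with a trivial hash puzzle standing in for real block generation (illustrative only, not the paper's protocol):

```python
# One BLADE-FL-style round: clients broadcast models, race to seal them
# into a block via a toy hash puzzle, then aggregate from the block.
import hashlib
import numpy as np

rng = np.random.default_rng(9)
models = [rng.normal(size=3) for _ in range(4)]  # broadcast local models

def mine(models, difficulty=2):
    payload = b"".join(m.tobytes() for m in models)
    nonce = 0
    while True:                              # find a hash with leading zeros
        h = hashlib.sha256(payload + str(nonce).encode()).hexdigest()
        if h.startswith("0" * difficulty):
            return nonce, h
        nonce += 1

nonce, block_hash = mine(models)
global_model = np.mean(models, axis=0)       # aggregate from the block
print("block:", block_hash[:12], "| global:", global_model.round(3))
```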
arXiv Detail & Related papers (2021-01-18T07:19:08Z)