DeFTA: A Plug-and-Play Decentralized Replacement for FedAvg
- URL: http://arxiv.org/abs/2204.02632v1
- Date: Wed, 6 Apr 2022 07:20:31 GMT
- Title: DeFTA: A Plug-and-Play Decentralized Replacement for FedAvg
- Authors: Yuhao Zhou, Minjia Shi, Yuxin Tian, Qing Ye, Jiancheng Lv
- Abstract summary: We propose Decentralized Federated Trusted Averaging (DeFTA) as a plug-and-play replacement for FedAvg.
DeFTA brings better security, scalability, and fault-tolerance to the federated learning process after installation.
- Score: 28.255536979484518
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is identified as a crucial enabler for large-scale
distributed machine learning (ML) without the need for local raw dataset
sharing, substantially reducing privacy concerns and alleviating the isolated
data problem. In reality, the prosperity of FL is largely due to a centralized
framework called FedAvg, in which workers are in charge of model training and
servers are in control of model aggregation. However, FedAvg's centralized
worker-server architecture has raised new concerns, be it the low scalability
of the cluster, the risk of data leakage, or the failure or even defection of
the central server. To overcome these problems, we propose Decentralized
Federated Trusted Averaging (DeFTA), a decentralized FL framework that serves
as a plug-and-play replacement for FedAvg, instantly bringing better security,
scalability, and fault-tolerance to the federated learning process after
installation. In principle, it fundamentally resolves the above-mentioned
issues from an architectural perspective without compromises or tradeoffs:
it primarily consists of a new model-aggregating formula with theoretical
performance analysis and a decentralized trust system (DTS) that greatly
improves system robustness. Note that since DeFTA is an alternative to FedAvg
at the framework level, prevalent algorithms published for FedAvg can also be
utilized in DeFTA with ease. Extensive experiments on six datasets and six
basic models suggest that DeFTA not only has comparable performance with FedAvg
in a more realistic setting, but also achieves great resilience even when 66%
of workers are malicious. Furthermore, we also present an asynchronous variant
of DeFTA to endow it with more powerful usability.
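The abstract describes DeFTA's two ingredients, a reworked aggregation formula and a decentralized trust system (DTS), only at a high level. The sketch below is a minimal illustration of that idea under stated assumptions, not DeFTA's published formula: every worker aggregates the models it receives from peers locally, scaling each peer by its data weight times a trust score, so distrusted peers are effectively excluded. All identifiers (aggregate_local, peer_weights, trust) are hypothetical.

```python
# Minimal sketch of decentralized, trust-weighted model averaging (hypothetical
# names; not DeFTA's actual formula or API).
from typing import Dict, List

Model = List[float]  # a flattened parameter vector, for illustration only

def aggregate_local(own_model: Model,
                    own_weight: float,
                    peer_models: Dict[str, Model],
                    peer_weights: Dict[str, float],
                    trust: Dict[str, float]) -> Model:
    """Each worker averages its own model with the models received from its
    peers, scaling every peer by (data weight x trust score); peers with a
    trust score near zero are effectively excluded from the average."""
    total = own_weight
    agg = [own_weight * p for p in own_model]
    for peer_id, model in peer_models.items():
        w = peer_weights[peer_id] * trust.get(peer_id, 0.0)
        total += w
        agg = [a + w * p for a, p in zip(agg, model)]
    return [a / total for a in agg]

# Example: a worker with two peers, one of which has been flagged as untrusted.
own = [0.2, -1.0]
peers = {"p1": [0.4, -0.8], "p2": [9.9, 9.9]}  # p2 looks malicious
print(aggregate_local(own, 100.0, peers,
                      peer_weights={"p1": 80.0, "p2": 120.0},
                      trust={"p1": 1.0, "p2": 0.0}))
```

Because aggregation happens at every worker rather than at a central server, there is no single point of failure, which is the architectural property the abstract emphasizes.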
Related papers
- Scaling Decentralized Learning with FLock [14.614054170542966]
This paper introduces FLock, a decentralized framework for fine-tuning large language models (LLMs).
Integrating a blockchain-based trust layer with economic incentives, FLock replaces the central aggregator with a secure, auditable protocol for cooperation among untrusted parties.
Our experiments show the FLock framework defends against backdoor poisoning attacks that compromise standard FL.
arXiv Detail & Related papers (2025-07-21T08:01:43Z)
- Robust Federated Learning Against Poisoning Attacks: A GAN-Based Defense Framework [0.6554326244334868]
Federated Learning (FL) enables collaborative model training across decentralized devices without sharing raw data.
We propose a privacy-preserving defense framework that leverages a Conditional Generative Adversarial Network (cGAN) to generate synthetic data at the server for authenticating client updates.
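The summary states only that server-side synthetic data from a cGAN is used to vet client updates. As a hedged sketch of that authentication step (not the paper's implementation), the snippet below evaluates each client's updated model on a batch of generator-produced labeled samples and keeps only the updates that still score above a threshold; the generator, the predict-function interface, and the 0.5 threshold are all stand-ins.

```python
# Hedged sketch: authenticate client updates against cGAN-generated data
# (stand-in interfaces; not the paper's code).
from typing import Callable, Dict, List, Tuple

Predictor = Callable[[List[float]], int]  # a client's updated model as a predict function

def authenticate_updates(client_updates: Dict[str, Predictor],
                         synth_batch: List[Tuple[List[float], int]],
                         min_accuracy: float = 0.5) -> Dict[str, Predictor]:
    """Evaluate every client's updated model on labeled synthetic samples drawn
    from the server's conditional generator and drop updates scoring below the
    threshold (treated as likely poisoned)."""
    accepted = {}
    for cid, predict in client_updates.items():
        correct = sum(1 for x, y in synth_batch if predict(x) == y)
        if correct / len(synth_batch) >= min_accuracy:
            accepted[cid] = predict
    return accepted

# Toy usage with trivial "models": classify by the sign of the first feature.
honest = lambda x: int(x[0] > 0)
poisoned = lambda x: 1 - int(x[0] > 0)
batch = [([1.0], 1), ([-1.0], 0), ([2.0], 1), ([-2.0], 0)]
print(list(authenticate_updates({"a": honest, "b": poisoned}, batch)))  # ['a']
```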
arXiv Detail & Related papers (2025-03-26T18:00:56Z)
- Decentralized and Robust Privacy-Preserving Model Using Blockchain-Enabled Federated Deep Learning in Intelligent Enterprises [0.5461938536945723]
We propose FedAnil, a secure blockchain-enabled federated deep learning model.
It improves the decentralization, performance, and tamper-proof properties of enterprise models.
Extensive experiments were conducted using the Sent140, FashionMNIST, FEMNIST, and CIFAR10 real-world datasets.
arXiv Detail & Related papers (2025-02-18T15:17:25Z)
- Efficient and Robust Regularized Federated Recommendation [52.24782464815489]
Federated recommender systems address both user preference and privacy concerns.
We propose a novel method that incorporates non-uniform gradient descent to improve communication efficiency.
Experiments demonstrate RFRecF's superior robustness compared to diverse baselines.
arXiv Detail & Related papers (2024-11-03T12:10:20Z)
- Byzantine-Robust Decentralized Federated Learning [30.33876141358171]
Federated learning (FL) enables multiple clients to collaboratively train machine learning models without revealing their private data.
Decentralized federated learning (DFL) architectures have been proposed to allow clients to train models collaboratively in a serverless and peer-to-peer manner.
DFL is highly vulnerable to poisoning attacks, where malicious clients could manipulate the system by sending carefully-crafted local models to their neighboring clients.
We propose a new algorithm called BALANCE (Byzantine-robust averaging through local similarity in decentralization) to defend against poisoning attacks in DFL.
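The summary gives the expansion of BALANCE but not its acceptance rule, so the following is only a hedged approximation of the local-similarity idea: a client discards any neighbor model that is too far (in L2 distance) from its own model before averaging. The gamma-scaled radius used here is an assumption, not the paper's exact criterion.

```python
# Hedged sketch of similarity-filtered averaging (simplified rule; not the
# paper's exact acceptance criterion).
import math
from typing import Dict, List

def l2(a: List[float], b: List[float]) -> float:
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def balance_style_aggregate(own: List[float],
                            neighbors: Dict[str, List[float]],
                            gamma: float = 0.5) -> List[float]:
    """Keep only neighbor models within gamma * ||own|| of the local model,
    then return the plain average of the local model and the accepted ones."""
    radius = gamma * math.sqrt(sum(x * x for x in own))
    accepted = [m for m in neighbors.values() if l2(m, own) <= radius]
    pool = [own] + accepted
    return [sum(col) / len(pool) for col in zip(*pool)]

# A neighbor pushing a wildly different (poisoned) model is filtered out.
print(balance_style_aggregate([1.0, 1.0], {"good": [0.9, 1.1], "bad": [50.0, -50.0]}))
```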
arXiv Detail & Related papers (2024-06-14T21:28:37Z)
- FedCore: Straggler-Free Federated Learning with Distributed Coresets [12.508327794236209]
FedCore is an algorithm that tackles the straggler problem via the decentralized selection of coresets.
It translates the coreset optimization problem into a more tractable k-medoids clustering problem and operates distributedly on each client.
Theoretical analysis confirms FedCore's convergence, and practical evaluations demonstrate an 8x reduction in FL training time.
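To make the per-client step above concrete, here is a hedged sketch of selecting a small coreset with a greedy k-medoids-style procedure; FedCore's actual coreset objective and clustering solver are not reproduced, and the greedy heuristic below is only a stand-in.

```python
# Hedged sketch of per-client coreset selection via a greedy k-medoids-style
# heuristic (not FedCore's actual solver).
from typing import List, Sequence

def sqdist(a: Sequence[float], b: Sequence[float]) -> float:
    """Squared Euclidean distance."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def greedy_coreset(points: List[Sequence[float]], k: int) -> List[int]:
    """Repeatedly add the index that most reduces the total squared distance
    from every point to its nearest chosen medoid."""
    chosen: List[int] = []
    nearest = [float("inf")] * len(points)
    for _ in range(min(k, len(points))):
        best_i, best_cost = -1, float("inf")
        for i in range(len(points)):
            if i in chosen:
                continue
            cost = sum(min(nearest[j], sqdist(points[j], points[i]))
                       for j in range(len(points)))
            if cost < best_cost:
                best_i, best_cost = i, cost
        chosen.append(best_i)
        nearest = [min(nearest[j], sqdist(points[j], points[best_i]))
                   for j in range(len(points))]
    return chosen

# Two obvious clusters -> one medoid is picked from each.
data = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 4.9)]
print(greedy_coreset(data, k=2))
```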
arXiv Detail & Related papers (2024-01-31T22:40:49Z)
- Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z)
- FedRFQ: Prototype-Based Federated Learning with Reduced Redundancy, Minimal Failure, and Enhanced Quality [41.88338945821504]
FedRFQ is a prototype-based federated learning approach that aims to reduce redundancy, minimize failures, and improve quality.
We introduce BFT-detect, a BFT (Byzantine Fault Tolerance) detectable aggregation algorithm, to ensure the security of FedRFQ against poisoning attacks and server malfunctions.
arXiv Detail & Related papers (2024-01-15T09:50:27Z)
- AEDFL: Efficient Asynchronous Decentralized Federated Learning with Heterogeneous Devices [61.66943750584406]
We propose an Asynchronous Efficient Decentralized FL framework, i.e., AEDFL, in heterogeneous environments.
First, we propose an asynchronous FL system model with an efficient model aggregation method for improving the FL convergence.
Second, we propose a dynamic staleness-aware model update approach to achieve superior accuracy.
Third, we propose an adaptive sparse training method to reduce communication and computation costs without significant accuracy degradation.
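As an illustration of what a dynamic staleness-aware model update could look like in an asynchronous, decentralized setting, the snippet below attenuates an arriving local model by how stale it is before mixing it in. The decay schedule and mixing rule are assumptions, not AEDFL's actual method.

```python
# Hedged sketch of a staleness-aware asynchronous update (illustrative decay
# rule; not AEDFL's actual method).
from typing import List

def staleness_weight(staleness: int, decay: float = 0.5) -> float:
    """A weight in (0, 1] that shrinks as the received update gets staler."""
    return 1.0 / (1.0 + decay * staleness)

def apply_async_update(current: List[float],
                       received: List[float],
                       staleness: int) -> List[float]:
    """Mix an asynchronously received model into the current one, attenuated
    by how many rounds old the sender's starting point was."""
    alpha = staleness_weight(staleness)
    return [(1 - alpha) * c + alpha * r for c, r in zip(current, received)]

model = [0.0, 0.0]
model = apply_async_update(model, [1.0, 1.0], staleness=0)  # fresh: strong pull
model = apply_async_update(model, [4.0, 4.0], staleness=5)  # stale: damped pull
print(model)
```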
arXiv Detail & Related papers (2023-12-18T05:18:17Z)
- Advancing Federated Learning in 6G: A Trusted Architecture with Graph-based Analysis [6.192092124154705]
Federated Learning (FL) is a promising paradigm, facilitating decentralized AI model training across a diverse range of devices under the coordination of a central server.
This work proposes a trusted architecture for supporting FL, which utilizes Distributed Ledger Technology (DLT) and Graph Neural Networks (GNNs).
The feasibility of the novel architecture is validated through simulations, demonstrating improved performance in anomalous model detection and global model accuracy compared to relevant baselines.
arXiv Detail & Related papers (2023-09-11T15:10:41Z)
- FedWon: Triumphing Multi-domain Federated Learning Without Normalization [50.49210227068574]
Federated learning (FL) enhances data privacy with collaborative in-situ training on decentralized clients.
However, FL encounters challenges due to non-independent and identically distributed (non-i.i.d.) data.
We propose a novel method called Federated learning Without normalizations (FedWon) to address the multi-domain problem in FL.
arXiv Detail & Related papers (2023-06-09T13:18:50Z)
- FedSkip: Combatting Statistical Heterogeneity with Federated Skip Aggregation [95.85026305874824]
We introduce a data-driven approach called FedSkip to improve the client optima by periodically skipping federated averaging and scattering local models across devices.
We conduct extensive experiments on a range of datasets to demonstrate that FedSkip achieves much higher accuracy, better aggregation efficiency and competing communication efficiency.
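A hedged sketch of the round schedule described above: most rounds "scatter" the local models across devices (read here as a random permutation, which is one simple interpretation), and only every skip_period-th round performs federated averaging. Names and the period value are illustrative, not FedSkip's actual configuration.

```python
# Hedged sketch of the skip/average round schedule (the "scatter" step is read
# here as a random permutation; names and period are illustrative).
import random
from typing import List

Model = List[float]

def fedskip_round(models: List[Model], round_idx: int, skip_period: int = 4) -> List[Model]:
    """Return the model each client starts the next round from."""
    if round_idx % skip_period == 0:
        # Federated-averaging round: everyone continues from the mean model.
        avg = [sum(col) / len(models) for col in zip(*models)]
        return [list(avg) for _ in models]
    # Skip round: scatter local models across devices so each client keeps
    # training a peer's model on its own (differently distributed) data.
    shuffled = models[:]
    random.shuffle(shuffled)
    return shuffled

clients = [[0.0], [1.0], [2.0], [3.0]]
for r in range(1, 5):
    clients = fedskip_round(clients, r)
print(clients)  # round 4 averaged, so all clients now hold the same model
```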
arXiv Detail & Related papers (2022-12-14T13:57:01Z)
- Separation of Powers in Federated Learning [5.966064140042439]
Federated Learning (FL) enables collaborative training among mutually distrusting parties.
Recent attacks have reconstructed large fractions of training data from ostensibly "sanitized" model updates.
We introduce TRUDA, a new cross-silo FL system, employing a trustworthy and decentralized aggregation architecture.
arXiv Detail & Related papers (2021-05-19T21:00:44Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain-assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
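The round structure quoted above can be sketched as follows, with block generation reduced to a random choice of producer (a stand-in for the mining competition) and aggregation done over the models recorded in the block; this is an illustration of the described protocol, not the BLADE-FL implementation.

```python
# Hedged sketch of one round of the described protocol (block generation is
# reduced to a random producer pick; not the BLADE-FL implementation).
import random
from typing import Dict, List

Model = List[float]

def blade_fl_round(broadcast_models: Dict[str, Model]) -> Dict[str, object]:
    """Pick a block producer (stand-in for the mining competition), record the
    broadcast models in the block, and compute the aggregate every client uses
    as the starting point of its next local training round."""
    producer = random.choice(list(broadcast_models))
    block_models = list(broadcast_models.values())
    aggregate = [sum(col) / len(block_models) for col in zip(*block_models)]
    return {"producer": producer, "models": block_models, "aggregate": aggregate}

block = blade_fl_round({"c1": [0.0, 2.0], "c2": [2.0, 0.0], "c3": [1.0, 1.0]})
print(block["producer"], block["aggregate"])  # e.g. "c2" [1.0, 1.0]
```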
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with Lazy Clients [124.48732110742623]
We propose a novel framework by integrating blockchain into Federated Learning (FL).
BLADE-FL achieves good performance in terms of privacy preservation, tamper resistance, and effective cooperation of learning.
However, it gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noise to conceal their cheating behavior.
arXiv Detail & Related papers (2020-12-02T12:18:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.