Scaling Decentralized Learning with FLock
- URL: http://arxiv.org/abs/2507.15349v1
- Date: Mon, 21 Jul 2025 08:01:43 GMT
- Title: Scaling Decentralized Learning with FLock
- Authors: Zehua Cheng, Rui Sun, Jiahao Sun, Yike Guo
- Abstract summary: This paper introduces FLock, a decentralized framework for fine-tuning large language models (LLMs). Integrating a blockchain-based trust layer with economic incentives, FLock replaces the central aggregator with a secure, auditable protocol for cooperation among untrusted parties. Our experiments show the FLock framework defends against backdoor poisoning attacks that compromise standard FL optimizers.
- Score: 14.614054170542966
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fine-tuning large language models (LLMs) is hindered by the lack of centralized control and by the massive computing and communication overhead of decentralized schemes. While standard federated learning (FL) supports data privacy, its central-server requirement creates a single point of attack and a vulnerability to poisoning attacks. Generalizing results in this direction to 70B-parameter models in heterogeneous, trustless environments has remained a huge, unbroken bottleneck. This paper introduces FLock, a decentralized framework for secure and efficient collaborative LLM fine-tuning. Integrating a blockchain-based trust layer with economic incentives, FLock replaces the central aggregator with a secure, auditable protocol for cooperation among untrusted parties. We present the first empirical validation of fine-tuning a 70B LLM in a secure, multi-domain, decentralized setting. Our experiments show that the FLock framework defends against backdoor poisoning attacks that compromise standard FL optimizers and that it fosters synergistic knowledge transfer. The resulting models show a >68% reduction in adversarial attack success rates. The global model also demonstrates superior cross-domain generalization, outperforming models trained in isolation on their own specialized data.
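The mechanism described in the abstract, in which validators vote on client updates and economic stakes reward or penalize behavior instead of trusting a central aggregator, can be illustrated with a toy example. The Python below is a minimal off-chain sketch under assumed simplifications; the names (aggregate_round, validator_checks), the numeric values, and the norm-based validity check are all hypothetical stand-ins, not FLock's actual protocol.

```python
import numpy as np

def aggregate_round(updates, validator_checks, stakes, slash_frac=0.5):
    """Toy decentralized aggregation round (illustrative only).

    updates          -- dict: client id -> model delta (np.ndarray)
    validator_checks -- dict: validator id -> callable(delta) -> bool
    stakes           -- dict: participant id -> staked tokens (mutated)

    An update rejected by a stake-weighted majority of validators is
    dropped and its sender is slashed; surviving updates are averaged.
    """
    validator_stake = sum(stakes[v] for v in validator_checks)
    accepted = []
    for client, delta in updates.items():
        approving = sum(stakes[v] for v, check in validator_checks.items()
                        if check(delta))
        if approving > validator_stake / 2:
            accepted.append(delta)
        else:
            stakes[client] *= 1.0 - slash_frac  # economic penalty for a rejected update
    return np.mean(accepted, axis=0) if accepted else None

# Hypothetical usage: validators flag updates with an abnormal norm,
# a crude stand-in for a real poisoning check.
checks = {"v1": lambda d: float(np.linalg.norm(d)) < 10.0,
          "v2": lambda d: float(np.linalg.norm(d)) < 10.0}
stakes = {"v1": 100.0, "v2": 80.0, "honest": 50.0, "attacker": 50.0}
updates = {"honest": 0.1 * np.ones(4), "attacker": 99.0 * np.ones(4)}
print(aggregate_round(updates, checks, stakes))  # attacker's stake is slashed
```

In the actual system the voting and slashing logic would live in on-chain smart contracts and the validity check would be far more sophisticated; this sketch only conveys the control flow.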
Related papers
- Decentralized and Robust Privacy-Preserving Model Using Blockchain-Enabled Federated Deep Learning in Intelligent Enterprises [0.5461938536945723]
We propose FedAnil, a secure blockchain-enabled federated deep learning model. It improves the decentralization, performance, and tamper-proof properties of enterprise models. Extensive experiments were conducted using the real-world datasets Sent140, FashionMNIST, FEMNIST, and CIFAR10.
arXiv Detail & Related papers (2025-02-18T15:17:25Z) - Enhancing Trust and Privacy in Distributed Networks: A Comprehensive Survey on Blockchain-based Federated Learning [51.13534069758711]
Decentralized approaches like blockchain offer a compelling solution by implementing a consensus mechanism among multiple entities.
Federated Learning (FL) enables participants to collaboratively train models while safeguarding data privacy.
This paper investigates the synergy between blockchain's security features and FL's privacy-preserving model training capabilities.
arXiv Detail & Related papers (2024-03-28T07:08:26Z) - Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z) - Secure Decentralized Learning with Blockchain [13.795131629462798]
Federated Learning (FL) is a well-known paradigm of distributed machine learning on mobile and IoT devices.
To avoid the single point of failure problem in FL, decentralized learning (DFL) has been proposed to use peer-to-peer communication for model aggregation.
arXiv Detail & Related papers (2023-10-10T23:45:17Z) - Defending Against Poisoning Attacks in Federated Learning with Blockchain [12.840821573271999]
We propose a secure and reliable federated learning system based on blockchain and distributed ledger technology.
Our system incorporates a peer-to-peer voting mechanism and a reward-and-slash mechanism, which are powered by on-chain smart contracts, to detect and deter malicious behaviors.
arXiv Detail & Related papers (2023-07-02T11:23:33Z) - FLock: Defending Malicious Behaviors in Federated Learning with Blockchain [3.0111384920731545]
Federated learning (FL) is a promising way to allow multiple data owners (clients) to collaboratively train machine learning models.
We propose to use distributed ledger technology (DLT) to achieve FLock, a secure and reliable decentralized FL system built on blockchain.
arXiv Detail & Related papers (2022-11-05T06:14:44Z) - FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse-engineering based defense and show that our method achieves improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z) - DeFTA: A Plug-and-Play Decentralized Replacement for FedAvg [28.255536979484518]
We propose Decentralized Federated Trusted Averaging (DeFTA) as a plug-and-play replacement for FedAvg.
DeFTA brings better security, scalability, and fault-tolerance to the federated learning process after installation.
arXiv Detail & Related papers (2022-04-06T07:20:31Z) - CRFL: Certifiably Robust Federated Learning against Backdoor Attacks [59.61565692464579]
This paper provides the first general framework, Certifiably Robust Federated Learning (CRFL), to train certifiably robust FL models against backdoors.
Our method exploits clipping and smoothing on model parameters to control the global model smoothness, which yields a sample-wise robustness certification on backdoors with limited magnitude (a toy sketch of this clip-and-smooth step appears after this list).
arXiv Detail & Related papers (2021-06-15T16:50:54Z) - Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
arXiv Detail & Related papers (2021-01-18T07:19:08Z) - Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with Lazy Clients [124.48732110742623]
We propose a novel framework by integrating blockchain into Federated Learning (FL).
BLADE-FL performs well in terms of privacy preservation, tamper resistance, and effective cooperation of learning.
However, it gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noise to conceal their cheating.
arXiv Detail & Related papers (2020-12-02T12:18:27Z)
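As referenced in the CRFL entry above, controlling the global model's smoothness by clipping and perturbing its parameters can be sketched in a few lines. The Python below is a minimal illustration under assumed simplifications; the function name, parameter values, and norm-clipping details are hypothetical and do not reproduce CRFL's exact procedure or its certification step.

```python
import numpy as np

def clip_and_smooth(params, clip_norm=1.0, sigma=0.01, rng=None):
    """Toy clip-then-perturb step in the spirit of CRFL's mechanism:
    bound the parameter vector's norm, then add Gaussian noise.
    Illustrative only; the certification analysis is in the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    norm = float(np.linalg.norm(params))
    clipped = params * min(1.0, clip_norm / max(norm, 1e-12))   # norm clipping
    return clipped + rng.normal(0.0, sigma, size=params.shape)  # Gaussian smoothing

smoothed = clip_and_smooth(np.ones(8), clip_norm=1.0, sigma=0.01)
```

Clipping bounds how far any single poisoned update can move the global model, and the added noise enables randomized-smoothing-style robustness arguments against small-magnitude backdoors.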
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.