A Fast Blockchain-based Federated Learning Framework with Compressed
Communications
- URL: http://arxiv.org/abs/2208.06095v1
- Date: Fri, 12 Aug 2022 03:04:55 GMT
- Title: A Fast Blockchain-based Federated Learning Framework with Compressed
Communications
- Authors: Laizhong Cui, Xiaoxin Su, Yipeng Zhou
- Abstract summary: Recently, blockchain-based federated learning (BFL) has attracted intensive research attention.
In this paper, we propose a fast BFL framework with compressed communications, called BCFL, to improve the training efficiency of BFL in practice.
- Score: 14.344080339573278
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, blockchain-based federated learning (BFL) has attracted intensive
research attention because the training process is auditable and the
architecture is serverless, avoiding the single point of failure of the
parameter server in vanilla federated learning (VFL). Nevertheless, BFL tremendously
escalates the communication traffic volume because all local model updates
(i.e., changes of model parameters) obtained by BFL clients will be transmitted
to all miners for verification and to all clients for aggregation. In contrast,
the parameter server and clients in VFL only retain aggregated model updates.
Consequently, the huge communication traffic in BFL will inevitably impair the
training efficiency and hinder the deployment of BFL in reality. To improve the
practicality of BFL, we are among the first to propose a fast blockchain-based
communication-efficient federated learning framework by compressing
communications in BFL, called BCFL. Meanwhile, we derive the convergence rate
of BCFL with non-convex loss. To maximize the final model accuracy, we further
formulate the problem of minimizing the training loss given by the convergence
rate, subject to a limited training time, with respect to the compression rate
and the block generation rate; this is a bi-convex optimization problem that
can be solved efficiently. Finally, to demonstrate the efficiency of BCFL, we carry
out extensive experiments with standard CIFAR-10 and FEMNIST datasets. Our
experimental results not only verify the correctness of our analysis, but also
show that BCFL can remarkably reduce the communication traffic by 95-98% or
shorten the training time by 90-95% compared with BFL.
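The traffic blow-up comes from every client broadcasting its full local update to all miners and all clients in every round, so BCFL compresses the payload itself. Below is a minimal sketch of update compression via top-k sparsification; the abstract does not name the exact compressor BCFL uses, so the scheme and function names here are illustrative.

```python
import numpy as np

def topk_compress(update: np.ndarray, ratio: float = 0.02):
    """Keep only the largest-magnitude entries of a local model update.

    Top-k sparsification is one standard update compressor; `ratio` plays
    the role of the compression rate that BCFL tunes jointly with the
    block generation rate.
    """
    flat = update.ravel()
    k = max(1, int(ratio * flat.size))
    # Indices of the k largest-magnitude coordinates.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]  # sparse payload broadcast to miners and clients

def topk_decompress(idx, values, shape):
    """Rebuild a dense update from the sparse payload."""
    flat = np.zeros(int(np.prod(shape)), dtype=values.dtype)
    flat[idx] = values
    return flat.reshape(shape)

# Example: a client compresses its update before broadcasting it.
update = np.random.randn(1000, 10).astype(np.float32)
idx, vals = topk_compress(update, ratio=0.02)
restored = topk_decompress(idx, vals, update.shape)
```

With ratio = 0.02, each broadcast carries roughly 2% of the original coordinates (plus their indices), the regime in which traffic reductions on the order of the 95-98% reported above become plausible.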
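Read plainly, the tuning problem the abstract describes has the following shape; the notation is ours, not the paper's: $\gamma$ is the compression rate, $\lambda$ the block generation rate, $F(\gamma,\lambda)$ the training-loss bound from the convergence analysis, and $T_{\max}$ the training-time budget.

```latex
\min_{\gamma,\,\lambda} \; F(\gamma, \lambda)
\quad \text{s.t.} \quad T(\gamma, \lambda) \le T_{\max}
```

Bi-convexity means $F$ is convex in $\gamma$ for fixed $\lambda$ and convex in $\lambda$ for fixed $\gamma$, which is why alternating minimization over the two variables solves the problem efficiently.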
Related papers
- SpaFL: Communication-Efficient Federated Learning with Sparse Models and Low Computational Overhead [75.87007729801304]
SpaFL, a communication-efficient FL framework, is proposed to optimize sparse model structures with low computational overhead.
Experiments show that SpaFL improves accuracy while requiring much less communication and computing resources compared to sparse baselines.
arXiv Detail & Related papers (2024-06-01T13:10:35Z)
- Communication-Efficient Vertical Federated Learning with Limited Overlapping Samples [34.576230628844506]
We propose a vertical federated learning (VFL) framework called one-shot VFL.
In our proposed framework, the clients only need to communicate with the server once or only a few times.
Our methods can improve the accuracy by more than 46.5% and reduce the communication cost by more than 330× compared with state-of-the-art VFL methods.
arXiv Detail & Related papers (2023-03-28T19:30:23Z)
- Hierarchical Personalized Federated Learning Over Massive Mobile Edge Computing Networks [95.39148209543175]
We propose hierarchical PFL (HPFL), an algorithm for deploying PFL over massive MEC networks.
HPFL combines the objectives of training loss minimization and round latency minimization while jointly determining the optimal bandwidth allocation.
arXiv Detail & Related papers (2023-03-19T06:00:05Z)
- How Much Does It Cost to Train a Machine Learning Model over Distributed Data Sources? [4.222078489059043]
Federated learning allows devices to train a machine learning model without sharing their raw data.
Server-less FL approaches like gossip federated learning (GFL) and blockchain-enabled federated learning (BFL) have been proposed to mitigate the costs of centralized FL (CFL).
GFL saves 18% of the training time, 68% of the energy, and 51% of the data to be shared compared with the CFL solution, but it cannot reach the accuracy level of CFL.
BFL represents a viable solution for implementing decentralized learning with a higher level of security, at the cost of extra energy usage and data sharing.
arXiv Detail & Related papers (2022-09-15T08:13:40Z)
- DeFL: Decentralized Weight Aggregation for Cross-silo Federated Learning [2.43923223501858]
Federated learning (FL) is an emerging and promising paradigm of privacy-preserving machine learning (ML).
We propose DeFL, a novel decentralized weight aggregation framework for cross-silo FL.
DeFL eliminates the central server by aggregating weights on each participating node; only the weights of the current training round are maintained and synchronized among all nodes.
arXiv Detail & Related papers (2022-08-01T13:36:49Z)
- Towards Communication-efficient Vertical Federated Learning Training via Cache-enabled Local Updates [25.85564668511386]
We introduce CELU-VFL, a novel and efficient Vertical Federated Learning framework.
CELU-VFL exploits the local update technique to reduce the cross-party communication rounds.
We show that CELU-VFL can be up to six times faster than the existing works.
arXiv Detail & Related papers (2022-07-29T12:10:36Z)
- FAIR-BFL: Flexible and Incentive Redesign for Blockchain-based Federated Learning [19.463891024499773]
Vanilla Federated learning (FL) relies on the centralized global aggregation mechanism and assumes that all clients are honest.
This makes it a challenge for FL to alleviate the single point of failure and dishonest clients.
We design and evaluate FAIR-BFL, a novel BFL framework that resolves the identified challenges in vanilla BFL with greater flexibility and an incentive mechanism.
arXiv Detail & Related papers (2022-06-26T15:20:45Z)
- Achieving Personalized Federated Learning with Sparse Local Models [75.76854544460981]
Federated learning (FL) is vulnerable to heterogeneously distributed data.
To counter this issue, personalized FL (PFL) was proposed to produce dedicated local models for each individual user.
Existing PFL solutions either demonstrate unsatisfactory generalization towards different model architectures or cost enormous extra computation and memory.
We propose FedSpa, a novel PFL scheme that employs personalized sparse masks to customize sparse local models on the edge.
arXiv Detail & Related papers (2022-01-27T08:43:11Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round (a toy sketch of this round structure follows the related-papers list).
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with Lazy Clients [124.48732110742623]
We propose a novel framework by integrating blockchain into Federated Learning (FL).
BLADE-FL has a good performance in terms of privacy preservation, tamper resistance, and effective cooperation of learning.
It also gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noise to conceal their cheating behavior.
arXiv Detail & Related papers (2020-12-02T12:18:27Z)
- Resource Management for Blockchain-enabled Federated Learning: A Deep Reinforcement Learning Approach [54.29213445674221]
Blockchain-enabled Federated Learning (BFL) enables mobile devices to collaboratively train neural network models required by a Machine Learning Model Owner (MLMO).
The issue of BFL is that the mobile devices have energy and CPU constraints that may reduce the system lifetime and training efficiency.
We propose to use Deep Reinforcement Learning (DRL) to derive the optimal decisions for the MLMO.
arXiv Detail & Related papers (2020-04-08T16:29:19Z)
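As promised above, here is a toy sketch of the BLADE-FL round structure described in its entry; the block-generation competition is reduced to picking a random winner and aggregation to plain averaging, since the summaries above specify neither. All names are illustrative.

```python
import random
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Block:
    """A block recording the model updates collected in one round."""
    round_id: int
    models: Dict[str, List[float]]  # client id -> flattened model weights

def aggregate(models: Dict[str, List[float]]) -> List[float]:
    """FedAvg-style plain averaging over the models recorded in a block."""
    n = len(models)
    return [sum(ws) / n for ws in zip(*models.values())]

def blade_fl_round(round_id: int,
                   local_models: Dict[str, List[float]]) -> List[float]:
    """One round in the style the BLADE-FL entry describes: every client
    broadcasts its trained model, one client wins the block-generation
    competition (stubbed here as a random choice instead of real mining),
    and all clients aggregate the models recorded in the winning block
    before their next round of local training."""
    winner = random.choice(list(local_models))    # placeholder for mining
    block = Block(round_id, dict(local_models))   # winner packs the broadcasts
    print(f"round {round_id}: block generated by client {winner}")
    return aggregate(block.models)                # shared model for next round

# Example: three clients with toy two-parameter "models".
new_global = blade_fl_round(1, {"a": [1.0, 2.0],
                                "b": [3.0, 4.0],
                                "c": [5.0, 6.0]})
```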