Blockchain Assisted Decentralized Federated Learning (BLADE-FL):
Performance Analysis and Resource Allocation
- URL: http://arxiv.org/abs/2101.06905v1
- Date: Mon, 18 Jan 2021 07:19:08 GMT
- Title: Blockchain Assisted Decentralized Federated Learning (BLADE-FL):
Performance Analysis and Resource Allocation
- Authors: Jun Li, Yumeng Shao, Kang Wei, Ming Ding, Chuan Ma, Long Shi, Zhu Han,
and H. Vincent Poor
- Abstract summary: We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
- Score: 119.19061102064497
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Federated learning (FL), as a distributed machine learning paradigm, promotes
personal privacy by having clients process their raw data locally. However, because it
relies on a centralized server for model aggregation, standard FL is vulnerable to server
malfunctions, an untrustworthy server, and external attacks. To address this
issue, we propose a decentralized FL framework by integrating blockchain into
FL, namely, blockchain assisted decentralized federated learning (BLADE-FL). In
a round of the proposed BLADE-FL, each client broadcasts its trained model to
other clients, competes to generate a block based on the received models, and
then aggregates the models from the generated block before its local training
of the next round. We evaluate the learning performance of BLADE-FL, and
develop an upper bound on the global loss function. Then we verify that this
bound is convex with respect to the number of overall rounds K, and optimize
the computing resource allocation for minimizing the upper bound. We also note
that there is a critical problem of training deficiency, caused by lazy clients
who plagiarize others' trained models and add artificial noise to disguise
their cheating behaviors. Focusing on this problem, we explore the impact of
lazy clients on the learning performance of BLADE-FL, and characterize the
relationship among the optimal K, the learning parameters, and the proportion
of lazy clients. Experiments on the MNIST and Fashion-MNIST datasets show that the
experimental results are consistent with the analytical ones. Specifically, the gap
between the developed upper bound and the experimental results is less than 5%, and
the K optimized from the upper bound effectively minimizes the loss function.
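For concreteness, the round structure described in the abstract can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the paper's implementation: models are plain numpy vectors, local training is a placeholder gradient step, block generation is stylized as a small proof-of-work race, and all function names are hypothetical.

```python
# Minimal sketch of one BLADE-FL round, assuming models are 1-D numpy
# vectors. Local training and block generation are stylized stand-ins.
import hashlib
import numpy as np

def local_train(model, lr=0.1):
    # Placeholder for local SGD; a real client would train on its own data.
    return model - lr * np.random.randn(*model.shape)

def solve_puzzle(client_id, payload, difficulty=2):
    # Stylized proof-of-work: find a nonce whose SHA-256 digest starts with
    # `difficulty` zero hex digits; return the number of attempts needed.
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{client_id}|{payload}|{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

def blade_fl_round(models):
    # 1) Each client trains locally and broadcasts its model to all peers.
    broadcast = [local_train(m) for m in models]
    # 2) Clients compete to generate a block over the received models; the
    #    client needing the fewest attempts wins this stylized competition.
    payload = hashlib.sha256(np.concatenate(broadcast).tobytes()).hexdigest()
    winner = min(range(len(models)), key=lambda i: solve_puzzle(i, payload))
    # 3) Every client aggregates the models recorded in the winner's block
    #    before starting the next round of local training.
    aggregated = np.mean(broadcast, axis=0)
    return [aggregated.copy() for _ in models], winner
```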
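The abstract states that the derived upper bound on the global loss is convex in the total number of rounds K. The specific bound is given in the paper; the sketch below only illustrates the optimization step, using a generic convex surrogate B(K) = a/K + b*K (an assumption for illustration, not the paper's bound) and an exhaustive search over integer K.

```python
# Illustration of choosing K by minimizing a convex upper bound. The paper
# derives a specific bound; B(K) = a/K + b*K here is a generic stand-in.
def optimal_rounds(a, b, k_max):
    bound = lambda k: a / k + b * k  # convex for k > 0
    # Exhaustive search is cheap because k_max is small in practice.
    return min(range(1, k_max + 1), key=bound)

print(optimal_rounds(a=50.0, b=0.5, k_max=100))  # prints 10 for this surrogate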
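The lazy-client behavior analyzed in the paper (plagiarizing a received model and masking it with artificial noise) is easy to state in code. A minimal sketch follows, with an arbitrary noise level chosen for illustration:

```python
import numpy as np

def lazy_update(received_models, sigma=0.01, rng=None):
    # A lazy client skips training: it copies a randomly chosen received
    # model and adds Gaussian noise to disguise the plagiarism. The noise
    # scale sigma is an arbitrary choice, not a value from the paper.
    rng = rng or np.random.default_rng()
    victim = received_models[rng.integers(len(received_models))]
    return victim + sigma * rng.normal(size=victim.shape)
```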
Related papers
- BRFL: A Blockchain-based Byzantine-Robust Federated Learning Model [8.19957400564017]
Federated learning, which stores data in distributed nodes and shares only model parameters, has gained significant attention as a way to address privacy concerns.
A challenge arises in federated learning due to the Byzantine Attack Problem, where malicious local models can compromise the global model's performance during aggregation.
This article proposes a Byzantine-Robust Federated Learning (BRFL) model that combines federated learning with blockchain technology.
arXiv Detail & Related papers (2023-10-20T10:21:50Z)
- Secure Decentralized Learning with Blockchain [13.795131629462798]
Federated Learning (FL) is a well-known paradigm of distributed machine learning on mobile and IoT devices.
To avoid the single point of failure problem in FL, decentralized learning (DFL) has been proposed to use peer-to-peer communication for model aggregation.
arXiv Detail & Related papers (2023-10-10T23:45:17Z)
- zkFL: Zero-Knowledge Proof-based Gradient Aggregation for Federated Learning [13.086807746204597]
Federated learning (FL) is a machine learning paradigm that enables multiple decentralized clients to collaboratively train a model under the orchestration of a central aggregator.
Traditional FL relies on the assumption that the central aggregator can be trusted to form cohorts of clients honestly.
We introduce zkFL, which leverages zero-knowledge proofs to tackle the issue of a malicious aggregator during the training model aggregation process.
arXiv Detail & Related papers (2023-10-04T03:24:33Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Incentive Mechanism Design for Joint Resource Allocation in Blockchain-based Federated Learning [23.64441447666488]
We propose an incentive mechanism to assign each client appropriate rewards for training and mining.
We transform the Stackelberg game model into two optimization problems, which are sequentially solved to derive the optimal strategies for both the model owner and clients.
arXiv Detail & Related papers (2022-02-18T02:19:26Z)
- A Bayesian Federated Learning Framework with Online Laplace Approximation [144.7345013348257]
Federated learning allows multiple clients to collaboratively learn a globally shared model.
We propose a novel FL framework that uses online Laplace approximation to approximate posteriors on both the client and server side.
We achieve state-of-the-art results on several benchmarks, clearly demonstrating the advantages of the proposed method.
arXiv Detail & Related papers (2021-02-03T08:36:58Z)
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over model parameters, and propose an effective and efficient model to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with Lazy Clients [124.48732110742623]
We propose a novel framework by integrating blockchain into Federated Learning (FL).
BLADE-FL has good performance in terms of privacy preservation, tamper resistance, and effective cooperation of learning.
It gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noise to conceal their cheating behaviors.
arXiv Detail & Related papers (2020-12-02T12:18:27Z)