Blockchain Assisted Decentralized Federated Learning (BLADE-FL):
Performance Analysis and Resource Allocation
- URL: http://arxiv.org/abs/2101.06905v1
- Date: Mon, 18 Jan 2021 07:19:08 GMT
- Title: Blockchain Assisted Decentralized Federated Learning (BLADE-FL):
Performance Analysis and Resource Allocation
- Authors: Jun Li, Yumeng Shao, Kang Wei, Ming Ding, Chuan Ma, Long Shi, Zhu Han,
and H. Vincent Poor
- Abstract summary: We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
- Score: 119.19061102064497
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Federated learning (FL), as a distributed machine learning paradigm, promotes
personal privacy by having clients process their raw data locally. However, because it
relies on a centralized server for model aggregation, standard FL is vulnerable to
server malfunction, an untrustworthy server, and external attacks. To address this
issue, we propose a decentralized FL framework by integrating blockchain into
FL, namely, blockchain assisted decentralized federated learning (BLADE-FL). In
a round of the proposed BLADE-FL, each client broadcasts its trained model to
other clients, competes to generate a block based on the received models, and
then aggregates the models from the generated block before its local training
of the next round. We evaluate the learning performance of BLADE-FL, and
develop an upper bound on the global loss function. Then we verify that this
bound is convex with respect to the number of overall rounds K, and optimize
the computing resource allocation for minimizing the upper bound. We also note
that there is a critical problem of training deficiency, caused by lazy clients
who plagiarize others' trained models and add artificial noise to disguise
their cheating behavior. Focusing on this problem, we explore the impact of
lazy clients on the learning performance of BLADE-FL, and characterize the
relationship among the optimal K, the learning parameters, and the proportion
of lazy clients. Based on the MNIST and Fashion-MNIST datasets, we show that
the experimental results are consistent with the analytical ones. Specifically, the
gap between the developed upper bound and the experimental results is less than 5%,
and the K optimized from the upper bound effectively minimizes the loss function.
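To make the round structure above concrete, here is a minimal, self-contained sketch of one BLADE-FL round in Python, including a lazy client that plagiarizes a received model and adds artificial noise. Everything in it (the toy vector model, the `Client` class, `mine_block`, `LAZY_NOISE_STD`, plain model averaging) is an illustrative assumption, not the paper's implementation.

```python
import hashlib
import random

import numpy as np

DIM = 10               # toy model dimension (assumption)
LAZY_NOISE_STD = 0.05  # noise a lazy client adds to disguise plagiarism (assumption)

class Client:
    def __init__(self, cid, lazy=False):
        self.cid = cid
        self.lazy = lazy
        self.model = np.zeros(DIM)

    def local_train(self, received):
        if self.lazy:
            # Lazy client: plagiarize one of the received models and add
            # artificial noise instead of spending computation on training.
            victim = random.choice(received)
            self.model = victim + np.random.normal(0.0, LAZY_NOISE_STD, DIM)
        else:
            # Honest client: stand-in for a real local SGD update.
            self.model = self.model - 0.1 * np.random.normal(0.0, 1.0, DIM)
        return self.model

def mine_block(models, difficulty=2):
    # Proof-of-work stand-in: search for a nonce whose SHA-256 hash of
    # (models, nonce) starts with `difficulty` zero hex digits.
    payload = np.concatenate(models).tobytes()
    nonce = 0
    while True:
        digest = hashlib.sha256(payload + str(nonce).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return {"models": models, "nonce": nonce}
        nonce += 1

def run_round(clients):
    # 1) Each client trains locally and broadcasts its model to the others.
    broadcast = [c.local_train([o.model for o in clients if o is not c])
                 for c in clients]
    # 2) Clients compete to generate a block holding the received models.
    block = mine_block(broadcast)
    # 3) Every client aggregates the models recorded in the winning block
    #    (plain averaging here) before its next round of local training.
    aggregate = np.mean(block["models"], axis=0)
    for c in clients:
        c.model = aggregate.copy()

clients = [Client(i, lazy=(i == 0)) for i in range(5)]  # one lazy client
for _ in range(3):
    run_round(clients)
```

Because every client stores the winning block, each one ends the round with the same aggregated model, which is what removes the central aggregation server from the loop.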
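The optimization of K rests on the bound being convex in K, so a simple search over integers recovers the minimizer. The bound below is a generic convex-in-K placeholder (error decaying as a/K, per-round overhead growing as bK); the paper's actual expression depends on the learning parameters and the computing-resource split between training and mining.

```python
def loss_upper_bound(K, a=4.0, b=0.05):
    # Placeholder convex-in-K shape: a/K models the optimization error that
    # shrinks with more rounds, b*K the per-round cost (e.g., mining) that
    # grows with K under a fixed resource budget.
    return a / K + b * K

def optimal_rounds(k_max=200):
    # Convexity means a single minimum, so an integer scan (or ternary
    # search) over 1..k_max finds the optimal K.
    return min(range(1, k_max + 1), key=loss_upper_bound)

K_star = optimal_rounds()
print(K_star, loss_upper_bound(K_star))  # minimizer near sqrt(a/b) ≈ 8.9, so K* = 9
```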
Related papers
- Interaction-Aware Gaussian Weighting for Clustered Federated Learning [58.92159838586751]
Federated Learning (FL) has emerged as a decentralized paradigm for training models while preserving privacy.
We propose a novel clustered FL method, FedGWC (Federated Gaussian Weighting Clustering), which groups clients based on their data distribution.
Our experiments on benchmark datasets show that FedGWC outperforms existing FL algorithms in cluster quality and classification accuracy.
arXiv Detail & Related papers (2025-02-05T16:33:36Z)
- TRAIL: Trust-Aware Client Scheduling for Semi-Decentralized Federated Learning [13.144501509175985]
We propose a TRust-Aware clIent scheduLing mechanism called TRAIL, which assesses client states and contributions.
We focus on a semi-decentralized FL framework where edge servers and clients train a shared global model using unreliable intra-cluster model aggregation and inter-cluster model consensus.
Experiments conducted on real-world datasets demonstrate that TRAIL outperforms state-of-the-art baselines, achieving an improvement of 8.7% in test accuracy and a reduction of 15.3% in training loss.
arXiv Detail & Related papers (2024-12-16T05:02:50Z)
- Secure Decentralized Learning with Blockchain [13.795131629462798]
Federated Learning (FL) is a well-known paradigm of distributed machine learning on mobile and IoT devices.
To avoid the single-point-of-failure problem in FL, decentralized learning (DFL) has been proposed, which uses peer-to-peer communication for model aggregation (a minimal gossip-averaging sketch appears after this list).
arXiv Detail & Related papers (2023-10-10T23:45:17Z)
- zkFL: Zero-Knowledge Proof-based Gradient Aggregation for Federated Learning [13.086807746204597]
Federated learning (FL) is a machine learning paradigm that enables multiple decentralized clients to collaboratively train a model under the orchestration of a central aggregator.
Traditional FL relies on the trust assumption that the central aggregator forms cohorts of clients honestly.
We introduce zkFL, which leverages zero-knowledge proofs to tackle the issue of a malicious aggregator during the training model aggregation process.
arXiv Detail & Related papers (2023-10-04T03:24:33Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training updates.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Incentive Mechanism Design for Joint Resource Allocation in Blockchain-based Federated Learning [23.64441447666488]
We propose an incentive mechanism to assign each client appropriate rewards for training and mining.
We transform the Stackelberg game model into two optimization problems, which are sequentially solved to derive the optimal strategies for both the model owner and clients (a backward-induction sketch appears after this list).
arXiv Detail & Related papers (2022-02-18T02:19:26Z)
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence to quantify each client's influence over the model parameters, and propose an effective and efficient method to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with Lazy Clients [124.48732110742623]
We propose a novel framework by integrating blockchain into Federated Learning (FL).
BLADE-FL performs well in terms of privacy preservation, tamper resistance, and effective cooperation in learning.
It gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noise to conceal their cheating behavior.
arXiv Detail & Related papers (2020-12-02T12:18:27Z)
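As noted in the Secure Decentralized Learning with Blockchain entry above, peer-to-peer aggregation replaces the central server with neighbor-to-neighbor averaging. Below is a minimal gossip-averaging sketch; the ring topology, uniform mixing weights, and toy NumPy models are assumptions for illustration, not any particular paper's protocol.

```python
import numpy as np

def gossip_round(models, neighbors):
    # Each node replaces its model with the uniform average over itself
    # and its neighbors' models.
    return [np.mean([models[i]] + [models[j] for j in neighbors[i]], axis=0)
            for i in range(len(models))]

# Ring topology over 4 nodes (assumption): node i talks to i-1 and i+1.
neighbors = {i: [(i - 1) % 4, (i + 1) % 4] for i in range(4)}
models = [np.random.randn(5) for _ in range(4)]
for _ in range(10):
    models = gossip_round(models, neighbors)
# Repeated mixing drives all models toward consensus on the global average.
```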
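As noted in the incentive-mechanism entry above, a Stackelberg game between a model owner (leader) and clients (followers) can be solved sequentially by backward induction: first derive the follower's best response, then optimize the leader's decision against it. The linear reward and quadratic effort cost below are illustrative placeholders, not the paper's utility functions.

```python
import numpy as np

def follower_best_effort(r, c=1.0):
    # Follower maximizes u_f(e) = r*e - c*e**2 over effort e >= 0;
    # the first-order condition gives e* = r / (2c).
    return r / (2 * c)

def leader_utility(r, value=3.0, c=1.0):
    # Leader anticipates e*(r) and maximizes u_l(r) = (value - r) * e*(r),
    # i.e., the value of the induced effort minus the reward paid for it.
    e_star = follower_best_effort(r, c)
    return (value - r) * e_star

# Solve the leader's problem by grid search over candidate rewards.
rewards = np.linspace(0.01, 3.0, 300)
r_star = max(rewards, key=leader_utility)
print(r_star, follower_best_effort(r_star))
# Analytically u_l(r) = (value - r) * r / (2c), maximized at r* = value / 2.
```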