Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with
Lazy Clients
- URL: http://arxiv.org/abs/2012.02044v1
- Date: Wed, 2 Dec 2020 12:18:27 GMT
- Title: Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with
Lazy Clients
- Authors: Jun Li, Yumeng Shao, Ming Ding, Chuan Ma, Kang Wei, Zhu Han and H.
Vincent Poor
- Abstract summary: We propose a novel framework that integrates blockchain into Federated Learning (FL).
BLADE-FL performs well in terms of privacy preservation, tamper resistance, and effective cooperation of learning.
However, it gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noise to conceal their cheating behavior.
- Score: 124.48732110742623
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL), as a distributed machine learning approach, has
drawn a great amount of attention in recent years. FL shows an inherent
advantage in privacy preservation, since users' raw data are processed locally.
However, it relies on a centralized server to perform model aggregation.
Therefore, FL is vulnerable to server malfunctions and external attacks. In
this paper, we propose a novel framework by integrating blockchain into FL,
namely, blockchain assisted decentralized federated learning (BLADE-FL), to
enhance the security of FL. The proposed BLADE-FL performs well in terms of
privacy preservation, tamper resistance, and effective cooperation of
learning. However, it gives rise to a new problem of training deficiency,
caused by lazy clients who plagiarize others' trained models and add artificial
noise to conceal their cheating behavior. To be specific, we first develop a
convergence bound of the loss function in the presence of lazy clients and
prove that it is convex with respect to the total number of generated blocks
$K$. Then, we solve the convex problem by optimizing $K$ to minimize the loss
function. Furthermore, we discover the relationship between the optimal $K$,
the number of lazy clients, and the power of artificial noises used by lazy
clients. We conduct extensive experiments to evaluate the performance of the
proposed framework using the MNIST and Fashion-MNIST datasets. Our analytical
results are shown to be consistent with the experimental results. In addition,
the derived optimal $K$ achieves the minimum value of the loss function and, in
turn, the optimal accuracy.
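To make the optimization concrete, here is a minimal sketch in Python. It assumes a hypothetical bound of the form f(K) = A/K + B*K, one simple family that is convex in K; the paper's actual bound depends on the learning parameters, the number of lazy clients, and their noise power, so the coefficients and the lazy-client model below are illustrative assumptions, not the derived expressions.

```python
import numpy as np

def lazy_update(honest_model, noise_power=0.01, seed=0):
    """A lazy client plagiarizes an honest client's model and adds
    artificial Gaussian noise to conceal the copying (illustrative)."""
    rng = np.random.default_rng(seed)
    return honest_model + rng.normal(0.0, np.sqrt(noise_power),
                                     size=honest_model.shape)

def loss_bound(K, A=50.0, B=0.8):
    """Hypothetical convex-in-K loss bound: A/K rewards more blocks
    (aggregation rounds), B*K charges a per-block cost such as the
    noise lazy clients inject each round. A and B are placeholders,
    not the paper's derived constants."""
    return A / K + B * K

# Convexity in K means a simple scan over feasible block counts
# recovers the optimal K*.
Ks = np.arange(1, 101)
K_star = Ks[np.argmin(loss_bound(Ks))]
print(f"optimal K* = {K_star}, bound at K* = {loss_bound(K_star):.3f}")
```

In a bound of this shape, the dependence of the optimal $K$ on the number of lazy clients and their noise power would enter through the coefficients A and B.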
Related papers
- Communication Efficient ConFederated Learning: An Event-Triggered SAGA
Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that trains a model without gathering local data from various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as ConFederated Learning (CFL), in order to accommodate a larger number of users.
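The event-triggered part of the title admits a compact illustration. A minimal sketch, assuming a simple norm-threshold trigger; the rule, threshold, and names below are assumptions, and the SAGA variance-reduction component is omitted.

```python
import numpy as np

def maybe_upload(new_update, last_sent, threshold=1e-3):
    """Event-triggered communication: a client uploads its update only
    when it deviates enough from the last transmitted one, saving
    bandwidth on quiet rounds. Threshold and norm are illustrative."""
    if last_sent is None or np.linalg.norm(new_update - last_sent) > threshold:
        return new_update, new_update   # send, and remember what was sent
    return None, last_sent              # stay silent this round
```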
arXiv Detail & Related papers (2024-02-28T03:27:10Z) - The Implications of Decentralization in Blockchained Federated Learning: Evaluating the Impact of Model Staleness and Inconsistencies [2.6391879803618115]
We study the practical implications of outsourcing the orchestration of federated learning to a democratic setting such as a blockchain.
Using simulation, we evaluate the blockchained FL operation by applying two different ML models on the well-known MNIST and CIFAR-10 datasets.
Our results show that model inconsistencies have a high impact on accuracy (up to a 35% decrease in prediction accuracy).
arXiv Detail & Related papers (2023-10-11T13:18:23Z) - zkFL: Zero-Knowledge Proof-based Gradient Aggregation for Federated Learning [13.086807746204597]
Federated learning (FL) is a machine learning paradigm, which enables multiple and decentralized clients to collaboratively train a model under the orchestration of a central aggregator.
Traditional FL relies on trusting the central aggregator to form cohorts of clients honestly.
We introduce zkFL, which leverages zero-knowledge proofs to tackle the issue of a malicious aggregator during the model aggregation process.
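A real zero-knowledge circuit is beyond a digest entry, but the verification idea can be gestured at with a toy hash-commitment check; this stand-in has no zero-knowledge property, and none of these names come from the paper.

```python
import hashlib
import numpy as np

def commit(update: np.ndarray) -> str:
    """Toy hash commitment to a client's model update."""
    return hashlib.sha256(update.tobytes()).hexdigest()

def verify_aggregate(updates, claimed_sum) -> bool:
    """Clients recompute the aggregate and compare it with the
    aggregator's claim. An actual zkFL proof would convince them
    without revealing every update; this check is not ZK."""
    return bool(np.allclose(np.sum(updates, axis=0), claimed_sum))
```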
arXiv Detail & Related papers (2023-10-04T03:24:33Z) - FLock: Defending Malicious Behaviors in Federated Learning with
Blockchain [3.0111384920731545]
Federated learning (FL) is a promising way to allow multiple data owners (clients) to collaboratively train machine learning models.
We propose to use distributed ledger technology (DLT) to achieve FLock, a secure and reliable decentralized FL system built on blockchain.
arXiv Detail & Related papers (2022-11-05T06:14:44Z) - FedPerm: Private and Robust Federated Learning by Parameter Permutation [2.406359246841227]
Federated Learning (FL) is a distributed learning paradigm that enables mutually untrusting clients to collaboratively train a common machine learning model.
Client data privacy is paramount in FL. At the same time, the model must be protected from poisoning attacks from adversarial clients.
We present FedPerm, a new FL algorithm that addresses both these problems by combining a novel intra-model parameter shuffling technique that amplifies data privacy, with Private Information Retrieval (PIR) based techniques that permit cryptographic aggregation of clients' model updates.
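The intra-model parameter shuffling can be sketched directly, assuming each client permutes its flattened parameters with a private seed before upload; the PIR-based aggregation is omitted, and these function names are illustrative.

```python
import numpy as np

def shuffle_params(params: np.ndarray, seed: int):
    """Permute a flattened parameter vector with a client-held seed,
    hiding which value belongs to which coordinate."""
    perm = np.random.default_rng(seed).permutation(params.size)
    return params[perm], perm

def unshuffle_params(shuffled: np.ndarray, perm: np.ndarray) -> np.ndarray:
    """Invert the permutation to restore the original coordinate order."""
    out = np.empty_like(shuffled)
    out[perm] = shuffled
    return out
```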
arXiv Detail & Related papers (2022-08-16T19:40:28Z) - Acceleration of Federated Learning with Alleviated Forgetting in Local
Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
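The summary does not spell out FedReg's regularizer, so here is a generic proximal-style sketch of one common way to curb forgetting in local training (keeping local weights near the global model); this is a stand-in, not necessarily FedReg's mechanism.

```python
import numpy as np

def local_grad_with_anchor(grad_loss, w_local, w_global, mu=0.1):
    """Gradient of local_loss + (mu/2) * ||w_local - w_global||^2.
    The anchor term discourages drifting away from (and thus
    forgetting) the knowledge in the global model. mu is illustrative."""
    return grad_loss + mu * (w_local - w_global)
```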
arXiv Detail & Related papers (2022-03-05T02:31:32Z) - Dynamic Attention-based Communication-Efficient Federated Learning [85.18941440826309]
Federated learning (FL) offers a solution to train a global machine learning model.
FL suffers performance degradation when client data distributions are non-IID.
We propose a new adaptive training algorithm, AdaFL, to combat this degradation.
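The summary omits AdaFL's exact rule, so here is one plausible attention-weighted aggregation sketch for the non-IID setting; the softmax-over-distance weighting below is an assumption, not the published algorithm.

```python
import numpy as np

def attention_aggregate(client_updates, global_model, temperature=1.0):
    """Down-weight clients whose updates sit far from the current
    global model, so outlying non-IID updates dominate less.
    The weighting rule is illustrative, not AdaFL's published one."""
    dists = np.array([np.linalg.norm(u - global_model) for u in client_updates])
    weights = np.exp(-dists / temperature)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, client_updates))
```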
arXiv Detail & Related papers (2021-08-12T14:18:05Z) - Blockchain Assisted Decentralized Federated Learning (BLADE-FL):
Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL)
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal $K$, the learning parameters, and the proportion of lazy clients.
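The round structure described above translates almost line-for-line into code. A minimal sketch in Python, with block generation reduced to a toy hash puzzle standing in for the actual consensus competition; all names and the puzzle are illustrative.

```python
import hashlib
import numpy as np

def generate_block(models, miner_id, difficulty=2):
    """Toy stand-in for the block-generation competition: find a nonce
    whose hash over the received models starts with `difficulty`
    zeros. Real BLADE-FL uses blockchain consensus, not this puzzle."""
    payload = b"".join(m.tobytes() for m in models) + str(miner_id).encode()
    nonce = 0
    while not hashlib.sha256(payload + str(nonce).encode()) \
            .hexdigest().startswith("0" * difficulty):
        nonce += 1
    return {"models": models, "miner": miner_id, "nonce": nonce}

def blade_fl_round(local_models):
    """One round: clients broadcast models, a winner packs them into a
    block, and everyone aggregates the block's models before the next
    local training round."""
    block = generate_block(local_models, miner_id=0)  # competition winner
    return np.mean(block["models"], axis=0)           # aggregated model
```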
arXiv Detail & Related papers (2021-01-18T07:19:08Z) - Toward Understanding the Influence of Individual Clients in Federated
Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over parameters, and propose an effective and efficient model to estimate this metric.
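A crude baseline for such an influence metric is the leave-one-out parameter shift, sketched below under that assumption; the paper's estimator is more efficient than this full re-aggregation.

```python
import numpy as np

def leave_one_out_influence(client_updates: np.ndarray, i: int) -> float:
    """Influence of client i approximated by how far the averaged
    parameters move when client i is excluded. client_updates is an
    (n_clients, dim) array; this brute-force baseline is illustrative."""
    full = client_updates.mean(axis=0)
    without_i = np.delete(client_updates, i, axis=0).mean(axis=0)
    return float(np.linalg.norm(full - without_i))
```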
arXiv Detail & Related papers (2020-12-20T14:34:36Z)