FLoBC: A Decentralized Blockchain-Based Federated Learning Framework
- URL: http://arxiv.org/abs/2112.11873v1
- Date: Wed, 22 Dec 2021 13:36:49 GMT
- Title: FLoBC: A Decentralized Blockchain-Based Federated Learning Framework
- Authors: Mohamed Ghanem, Fadi Dawoud, Habiba Gamal, Eslam Soliman, Hossam
Sharara, Tamer El-Batt
- Abstract summary: In this work, we demonstrate our solution for building a generic decentralized federated learning system using blockchain technology.
We present our system design comprising the two decentralized actors: trainer and validator, alongside our methodology for ensuring reliable and efficient operation.
Finally, we utilize FLoBC as an experimental sandbox to compare and contrast the effects of trainer-to-validator ratio, reward-penalty policy, and model synchronization schemes on the overall system performance.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid expansion of data worldwide calls for more distributed
solutions capable of applying machine learning at a much wider scale. The
resultant distributed learning systems can have various degrees of
centralization. In this work, we demonstrate our solution FLoBC for building a
generic decentralized federated learning system using blockchain technology,
accommodating any machine learning model that is compatible with gradient
descent optimization. We present our system design comprising the two
decentralized actors: trainer and validator, alongside our methodology for
ensuring reliable and efficient operation of said system. Finally, we utilize
FLoBC as an experimental sandbox to compare and contrast the effects of
trainer-to-validator ratio, reward-penalty policy, and model synchronization
schemes on the overall system performance, ultimately showing by example that a
decentralized federated learning system is indeed a feasible alternative to
more centralized architectures.
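To make the two roles concrete, here is a minimal sketch of one FLoBC-style round for a linear model trained by gradient descent. Every name below is illustrative rather than the actual FLoBC API: trainers compute local updates on private data, a validator scores each update against held-out data, keeps a simple reward-penalty tally, and averages the accepted updates into the next model version.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_W = np.array([2.0, -1.0])

def make_data(n):
    """Synthetic private dataset for one node (hypothetical stand-in)."""
    X = rng.normal(size=(n, 2))
    return X, X @ TRUE_W + 0.1 * rng.normal(size=n)

def local_update(weights, X, y, lr=0.1):
    """Trainer: one gradient-descent step on private data."""
    grad = 2.0 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def mse(weights, X, y):
    return float(np.mean((X @ weights - y) ** 2))

def validate(weights, updates, X_val, y_val, scores):
    """Validator: accept updates that do not worsen held-out loss,
    reward or penalize each trainer, and average the accepted updates."""
    base = mse(weights, X_val, y_val)
    accepted = []
    for tid, w in updates.items():
        if mse(w, X_val, y_val) <= base:
            accepted.append(w)
            scores[tid] += 1   # reward useful work
        else:
            scores[tid] -= 1   # penalize harmful or lazy updates
    return (np.mean(accepted, axis=0) if accepted else weights), scores

weights = np.zeros(2)
scores = {tid: 0 for tid in range(4)}      # reward-penalty ledger
X_val, y_val = make_data(50)               # validator's held-out data
trainer_data = [make_data(30) for _ in range(4)]
for _ in range(20):                        # synchronization rounds
    updates = {tid: local_update(weights, *trainer_data[tid])
               for tid in range(4)}
    weights, scores = validate(weights, updates, X_val, y_val, scores)
print(weights, scores)                     # weights approach [2, -1]
```

FLoBC's actual reward-penalty policies and synchronization schemes are richer than this toy acceptance test; the sketch only fixes where each actor sits in a round, which is enough to reason about the trainer-to-validator ratio the paper studies.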
Related papers
- Fantastyc: Blockchain-based Federated Learning Made Secure and Practical [0.7083294473439816]
Federated Learning is a decentralized framework in which clients collaboratively train a machine learning model under the orchestration of a central server, without sharing their local data (a minimal FedAvg sketch appears after this list).
The centrality of this framework is a single point of failure, which the literature addresses with blockchain-based federated learning approaches.
We propose Fantastyc, a solution designed to address these challenges, which had never before been tackled together in the state of the art.
arXiv Detail & Related papers (2024-06-05T20:01:49Z)
- Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that trains a model without gathering the local data distributed over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
arXiv Detail & Related papers (2024-02-28T03:27:10Z)
- Scheduling and Communication Schemes for Decentralized Federated Learning [0.31410859223862103]
A decentralized federated learning (DFL) model using the stochastic gradient descent (SGD) algorithm has been introduced.
Three scheduling policies for DFL have been proposed for communication between the clients and the parallel servers.
Results show that the proposed scheduling policies affect both the speed of convergence and the final global model.
arXiv Detail & Related papers (2023-11-27T17:35:28Z)
- Enhancing Scalability and Reliability in Semi-Decentralized Federated Learning With Blockchain: Trust Penalization and Asynchronous Functionality [0.0]
The paper focuses on enhancing the trustworthiness of participating nodes through a trust penalization mechanism.
The proposed system aims to create a fair, secure and transparent environment for collaborative machine learning without compromising data privacy.
arXiv Detail & Related papers (2023-10-30T06:05:50Z)
- Vertical Federated Learning over Cloud-RAN: Convergence Analysis and System Optimization [82.12796238714589]
We propose a novel cloud radio access network (Cloud-RAN) based vertical FL system to enable fast and accurate model aggregation.
We characterize the convergence behavior of the vertical FL algorithm considering both uplink and downlink transmissions.
We establish a system optimization framework by joint transceiver and fronthaul quantization design, for which successive convex approximation and alternate convex search based system optimization algorithms are developed.
arXiv Detail & Related papers (2023-05-04T09:26:03Z)
- Decentralized Training of Foundation Models in Heterogeneous Environments [77.47261769795992]
Training foundation models, such as GPT-3 and PaLM, can be extremely expensive.
We present the first study of training large foundation models with model parallelism in a decentralized regime over a heterogeneous network.
arXiv Detail & Related papers (2022-06-02T20:19:51Z)
- Communication-Efficient Hierarchical Federated Learning for IoT Heterogeneous Systems with Imbalanced Data [42.26599494940002]
Federated learning (FL) is a distributed learning methodology that allows multiple nodes to cooperatively train a deep learning model.
This paper studies the potential of hierarchical FL in IoT heterogeneous systems.
It proposes an optimized solution for user assignment and resource allocation on multiple edge nodes.
arXiv Detail & Related papers (2021-07-14T08:32:39Z)
- BAGUA: Scaling up Distributed Learning with System Relaxations [31.500494636704598]
BAGUA is a new communication framework for distributed data-parallel training.
Powered by this new system design, BAGUA can implement and extend various state-of-the-art distributed learning algorithms.
In a production cluster with up to 16 machines, BAGUA can outperform PyTorch-DDP, Horovod and BytePS in the end-to-end training time.
arXiv Detail & Related papers (2021-07-03T21:27:45Z)
- Toward Multiple Federated Learning Services Resource Sharing in Mobile Edge Networks [88.15736037284408]
We study a new model of multiple federated learning services at the multi-access edge computing server.
We propose a joint resource optimization and hyper-learning rate control problem, namely MS-FEDL.
Our simulation results demonstrate the convergence performance of our proposed algorithms.
arXiv Detail & Related papers (2020-11-25T01:29:41Z)
- Self-organizing Democratized Learning: Towards Large-scale Distributed Learning Systems [71.14339738190202]
Democratized learning (Dem-AI) lays out a holistic philosophy with underlying principles for building large-scale distributed and democratized machine learning systems.
Inspired by the Dem-AI philosophy, a novel distributed learning approach is proposed in this paper.
The proposed algorithms achieve better generalization performance for the agents' learning models than conventional FL algorithms.
arXiv Detail & Related papers (2020-07-07T08:34:48Z)
- Byzantine-resilient Decentralized Stochastic Gradient Descent [85.15773446094576]
We present an in-depth study of the Byzantine resilience of decentralized learning systems.
We propose UBAR, a novel algorithm that enhances decentralized learning with Byzantine fault tolerance (see the filtering sketch after this list).
arXiv Detail & Related papers (2020-02-20T05:11:04Z)
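The Fantastyc entry above describes standard federated learning, where a central server orchestrates training. For contrast with FLoBC's decentralized validators, here is a minimal sketch of the classic server-side FedAvg step (weighted model averaging); the function name and shapes are our own illustration, not any paper's API.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Server step: average client models weighted by local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)            # (num_clients, dim)
    return (sizes[:, None] * stacked).sum(axis=0) / sizes.sum()

# Example: three clients with unequal data volumes
models = [np.array([1.0, 2.0]), np.array([0.8, 2.2]), np.array([1.2, 1.8])]
print(fedavg(models, [100, 50, 50]))              # -> [1. 2.]
```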
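The Byzantine-resilience entry above concerns decentralized SGD under adversarial peers. As a deliberately simplified illustration (our sketch, not UBAR's actual selection rule), a node can bound the influence of Byzantine neighbors by averaging only the neighbor models closest to its own:

```python
import numpy as np

def filtered_aggregate(own, neighbors, keep):
    """Average own model with the `keep` nearest neighbor models,
    discarding outlying (potentially Byzantine) models."""
    dists = [np.linalg.norm(w - own) for w in neighbors]
    nearest = [neighbors[i] for i in np.argsort(dists)[:keep]]
    return np.mean([own] + nearest, axis=0)

own = np.array([1.0, 1.0])
neighbors = [np.array([1.1, 0.9]), np.array([0.9, 1.1]),
             np.array([50.0, -50.0])]             # one Byzantine outlier
print(filtered_aggregate(own, neighbors, keep=2)) # outlier dropped
```

Distance filtering alone is only a partial defense; schemes like UBAR layer further checks on top, but the sketch conveys the core idea of discarding outlying models before averaging.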