Blockchain-enabled Server-less Federated Learning
- URL: http://arxiv.org/abs/2112.07938v1
- Date: Wed, 15 Dec 2021 07:41:23 GMT
- Title: Blockchain-enabled Server-less Federated Learning
- Authors: Francesc Wilhelmi, Lorenza Giupponi, Paolo Dini
- Abstract summary: We focus on an asynchronous server-less Federated Learning solution empowered by Blockchain (BC) technology.
In contrast to mostly adopted FL approaches, we advocate an asynchronous method whereby model aggregation is done as clients submit their local updates.
- Score: 5.065631761462706
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Motivated by the heterogeneous nature of devices participating in large-scale
Federated Learning (FL) optimization, we focus on an asynchronous server-less
FL solution empowered by Blockchain (BC) technology. In contrast to mostly
adopted FL approaches, which assume synchronous operation, we advocate an
asynchronous method whereby model aggregation is done as clients submit their
local updates. The asynchronous setting fits well with the federated
optimization idea in practical large-scale settings with heterogeneous clients.
Thus, it potentially leads to higher efficiency in terms of communication
overhead and idle periods. To evaluate the learning completion delay of
BC-enabled FL, we provide an analytical model based on batch service queue
theory. Furthermore, we provide simulation results to assess the performance of
both synchronous and asynchronous mechanisms. Important aspects involved in the
BC-enabled FL optimization, such as the network size, link capacity, or user
requirements, are put together and analyzed. As our results show, the
synchronous setting leads to higher prediction accuracy than the asynchronous
case. Nevertheless, asynchronous federated optimization provides much lower
latency in many cases, thus becoming an appealing FL solution when dealing with
large data sets, tough timing constraints (e.g., near-real-time applications),
or highly varying training data.
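The asynchronous aggregation idea in the abstract, merging each local update into the global model as soon as a client submits it rather than waiting for a full synchronous round, can be sketched as follows. The staleness-discounted mixing weight used here is an illustrative assumption for the sketch, not the aggregation rule defined in the paper:

```python
import numpy as np

def asynchronous_aggregate(global_model, client_update,
                           client_round, server_round, mixing=0.5):
    """Merge one client's update into the global model on arrival.

    Unlike synchronous FedAvg, no cohort is awaited: each submitted
    update is mixed in immediately, discounted by its staleness (the
    number of global rounds that elapsed while the client trained).
    """
    staleness = server_round - client_round
    weight = mixing / (1.0 + staleness)  # stale updates count for less
    return (1.0 - weight) * global_model + weight * client_update

# Toy walk-through: three clients submit updates at different times.
global_model = np.zeros(4)
submissions = [
    (np.ones(4) * 1.0, 0),  # (local update, round the client started from)
    (np.ones(4) * 2.0, 0),
    (np.ones(4) * 3.0, 1),
]
for server_round, (update, client_round) in enumerate(submissions):
    global_model = asynchronous_aggregate(
        global_model, update, client_round, server_round)
# each entry of global_model is now 1.40625
```

Because aggregation happens per submission, fast clients are never idle waiting for stragglers, which is the source of the latency advantage the abstract reports for the asynchronous setting.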
Related papers
- Optimizing Asynchronous Federated Learning: A Delicate Trade-Off Between Model-Parameter Staleness and Update Frequency [0.9999629695552195]
We use gradient modeling to better understand the impact of design choices in asynchronous FL algorithms.
We characterize in particular a fundamental trade-off for optimizing asynchronous FL.
We show that these optimizations enhance accuracy by 10% to 30%.
arXiv Detail & Related papers (2025-02-12T08:38:13Z)
- Client-Centric Federated Adaptive Optimization [78.30827455292827]
Federated Learning (FL) is a distributed learning paradigm where clients collaboratively train a model while keeping their own data private.
We propose Federated-Centric Adaptive Optimization, which is a class of novel federated optimization approaches.
arXiv Detail & Related papers (2025-01-17T04:00:50Z)
- Digital Twin-Assisted Federated Learning with Blockchain in Multi-tier Computing Systems [67.14406100332671]
In Industry 4.0 systems, resource-constrained edge devices engage in frequent data interactions.
This paper proposes a digital twin (DT)-assisted federated learning (FL) scheme.
The efficacy of our proposed cooperative interference-based FL process has been verified through numerical analysis.
arXiv Detail & Related papers (2024-11-04T17:48:02Z)
- FADAS: Towards Federated Adaptive Asynchronous Optimization [56.09666452175333]
Federated learning (FL) has emerged as a widely adopted training paradigm for privacy-preserving machine learning.
This paper introduces federated adaptive asynchronous optimization, named FADAS, a novel method that incorporates asynchronous updates into adaptive federated optimization with provable guarantees.
We rigorously establish the convergence rate of the proposed algorithms and empirical results demonstrate the superior performance of FADAS over other asynchronous FL baselines.
arXiv Detail & Related papers (2024-07-25T20:02:57Z)
- FedAST: Federated Asynchronous Simultaneous Training [27.492821176616815]
Federated Learning (FL) enables devices or clients to collaboratively train machine learning (ML) models without sharing their private data.
Much of the existing work in FL focuses on efficiently learning a model for a single task.
In this paper, we propose simultaneous training of multiple FL models using a common set of datasets.
arXiv Detail & Related papers (2024-06-01T05:14:20Z)
- Momentum Approximation in Asynchronous Private Federated Learning [24.325330433282808]
We propose momentum approximation that minimizes the bias by finding an optimal weighted average of all historical model updates.
We empirically demonstrate that on benchmark FL datasets, momentum approximation can achieve a 1.15–4× speedup in convergence.
arXiv Detail & Related papers (2024-02-14T15:35:53Z)
- AEDFL: Efficient Asynchronous Decentralized Federated Learning with Heterogeneous Devices [61.66943750584406]
We propose an Asynchronous Efficient Decentralized FL framework, i.e., AEDFL, in heterogeneous environments.
First, we propose an asynchronous FL system model with an efficient model aggregation method for improving the FL convergence.
Second, we propose a dynamic staleness-aware model update approach to achieve superior accuracy.
Third, we propose an adaptive sparse training method to reduce communication and computation costs without significant accuracy degradation.
arXiv Detail & Related papers (2023-12-18T05:18:17Z)
- Scheduling and Aggregation Design for Asynchronous Federated Learning over Wireless Networks [56.91063444859008]
Federated Learning (FL) is a collaborative machine learning framework that combines on-device training and server-based aggregation.
We propose an asynchronous FL design with periodic aggregation to tackle the straggler issue in FL systems.
We show that an "age-aware" aggregation weighting design can significantly improve the learning performance in an asynchronous FL setting.
arXiv Detail & Related papers (2022-12-14T17:33:01Z)
- Efficient and Light-Weight Federated Learning via Asynchronous Distributed Dropout [22.584080337157168]
Asynchronous learning protocols have regained attention lately, especially in the Federated Learning (FL) setup.
We propose AsyncDrop, a novel asynchronous FL framework that utilizes dropout regularization to handle device heterogeneity in distributed settings.
Overall, AsyncDrop achieves better performance compared to state-of-the-art asynchronous methodologies.
arXiv Detail & Related papers (2022-10-28T13:00:29Z)
- Device Scheduling and Update Aggregation Policies for Asynchronous Federated Learning [72.78668894576515]
Federated Learning (FL) is a newly emerged decentralized machine learning (ML) framework.
We propose an asynchronous FL framework with periodic aggregation to eliminate the straggler issue in FL systems.
arXiv Detail & Related papers (2021-07-23T18:57:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.