DAG-ACFL: Asynchronous Clustered Federated Learning based on DAG-DLT
- URL: http://arxiv.org/abs/2308.13158v1
- Date: Fri, 25 Aug 2023 03:35:29 GMT
- Title: DAG-ACFL: Asynchronous Clustered Federated Learning based on DAG-DLT
- Authors: Xiaofeng Xue, Haokun Mao and Qiong Li
- Abstract summary: Federated learning (FL) aims to collaboratively train a global model while ensuring client data privacy.
We propose DAG-ACFL, an asynchronous clustered FL framework based on directed acyclic graph distributed ledger techniques (DAG-DLT).
We evaluate the clustering and training performance of DAG-ACFL on multiple datasets and analyze its communication and storage costs.
- Score: 5.819679865834583
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) aims to collaboratively train a global model while
ensuring client data privacy. However, FL faces challenges from non-IID data
distributions among clients. Clustered FL (CFL) has emerged as a promising
solution, but most existing CFL frameworks are synchronous and do not support
asynchronous client updates. SDAGFL, an asynchronous CFL framework based on
directed acyclic graph distributed ledger techniques (DAG-DLT), was proposed to
address this, but its complete decentralization leads to high communication and
storage costs. We propose DAG-ACFL, an asynchronous clustered FL framework
likewise built on DAG-DLT. We first detail the
components of DAG-ACFL. A tip selection algorithm based on the cosine
similarity of model parameters is then designed to aggregate models from
clients with similar distributions. An adaptive tip selection algorithm
leveraging change-point detection dynamically determines the number of selected
tips. We evaluate the clustering and training performance of DAG-ACFL on
multiple datasets and analyze its communication and storage costs. Experiments
show the superiority of DAG-ACFL in asynchronous clustered FL. By combining
DAG-DLT with clustered FL, DAG-ACFL realizes robust, decentralized and private
model training with high efficiency.
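The tip selection idea in the abstract can be made concrete with a short sketch. The Python below is a minimal illustration, not the authors' implementation: it assumes each DAG tip carries a flattened parameter vector, ranks tips by cosine similarity to the client's local model, and uses a simple largest-gap rule as a stand-in for the paper's change-point detector.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two flattened parameter vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def select_tips(local_params: np.ndarray, tip_params: list) -> list:
    """Rank tips by similarity to the local model, then cut the ranked list
    at the largest drop between consecutive similarities -- a crude stand-in
    for the change-point detection described in the abstract."""
    sims = np.array([cosine_similarity(local_params, t) for t in tip_params])
    order = np.argsort(sims)[::-1]          # tip indices, most similar first
    if len(order) < 2:
        return list(order)
    ranked = sims[order]
    gaps = ranked[:-1] - ranked[1:]         # non-negative drops in similarity
    cut = int(np.argmax(gaps)) + 1          # keep tips before the biggest drop
    return list(order[:cut])

def aggregate_tips(local_params: np.ndarray, tip_params: list) -> np.ndarray:
    """Average the selected tips' parameters (FedAvg-style uniform mean)."""
    chosen = select_tips(local_params, tip_params)
    return np.mean([tip_params[i] for i in chosen], axis=0)
```

Under a rule like this, a client aggregates only tips whose parameters point in a similar direction to its own, which is what allows clients with similar data distributions to cluster implicitly without a central coordinator.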
Related papers
- Robust Model Aggregation for Heterogeneous Federated Learning: Analysis and Optimizations [35.58487905412915]
We propose a time-driven SFL (T-SFL) framework for heterogeneous systems.
To evaluate the learning performance of T-SFL, we provide an upper bound on the global loss function.
We develop a discriminative model selection algorithm that removes local models from clients whose number of iterations falls below a predetermined threshold.
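The threshold rule just described can be sketched in a few lines; the function name and the uniform averaging below are illustrative assumptions, not the paper's exact T-SFL procedure.

```python
import numpy as np

def discriminative_select(models, iteration_counts, threshold):
    """Keep only local models whose clients completed at least
    `threshold` iterations, then average the survivors uniformly."""
    kept = [m for m, n in zip(models, iteration_counts) if n >= threshold]
    if not kept:
        raise ValueError("no client reached the iteration threshold")
    return np.mean(kept, axis=0)
```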
arXiv Detail & Related papers (2024-05-11T11:55:26Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- FedClust: Optimizing Federated Learning on Non-IID Data through Weight-Driven Client Clustering [28.057411252785176]
Federated learning (FL) is an emerging distributed machine learning paradigm enabling collaborative model training on decentralized devices without exposing their local data.
This paper proposes FedClust, a novel CFL approach leveraging correlations between local model weights and client data distributions.
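A hedged sketch of the weight-driven idea: group clients by pairwise distances between their flattened model weights, as a proxy for similar data distributions. The use of hierarchical clustering here is an assumption for illustration, not necessarily FedClust's actual method.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def cluster_clients_by_weights(client_weights, num_clusters):
    """Cluster clients whose model weights are close in cosine distance."""
    X = np.stack([w.ravel() for w in client_weights])
    dists = pdist(X, metric="cosine")      # condensed pairwise distance matrix
    Z = linkage(dists, method="average")   # agglomerative clustering
    return fcluster(Z, t=num_clusters, criterion="maxclust")
```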
arXiv Detail & Related papers (2024-03-07T01:50:36Z)
- Convergence Analysis of Sequential Federated Learning on Heterogeneous Data [5.872735527071425]
There are two categories of methods in Federated Learning (FL) for joint training across multiple clients: i) parallel FL (PFL), where clients train models in parallel; and ii) sequential FL (SFL), where clients train models in a sequential manner.
In this paper, we establish convergence guarantees for SFL on heterogeneous data, which had been lacking.
Experimental results validate the counterintuitive analysis result that SFL outperforms PFL on extremely heterogeneous data in cross-device settings.
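To make the PFL/SFL distinction concrete, here is a toy sketch; representing each client's local training as a callable, and the single-round structure, are simplifying assumptions for illustration.

```python
import numpy as np

def pfl_round(global_model, client_updates):
    """Parallel FL: every client starts from the same global model;
    the server averages their updated models."""
    locals_ = [update(global_model) for update in client_updates]
    return np.mean(locals_, axis=0)

def sfl_round(global_model, client_updates):
    """Sequential FL: the model is passed from client to client,
    each training on top of the previous client's result."""
    model = global_model
    for update in client_updates:
        model = update(model)
    return model
```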
arXiv Detail & Related papers (2023-11-06T14:48:51Z)
- Client Orchestration and Cost-Efficient Joint Optimization for NOMA-Enabled Hierarchical Federated Learning [55.49099125128281]
We propose a non-orthogonal multiple access (NOMA) enabled hierarchical federated learning (HFL) system under semi-synchronous cloud model aggregation.
We show that the proposed scheme outperforms the considered benchmarks regarding HFL performance improvement and total cost reduction.
arXiv Detail & Related papers (2023-11-03T13:34:44Z)
- PFL-GAN: When Client Heterogeneity Meets Generative Models in Personalized Federated Learning [55.930403371398114]
We propose a novel generative adversarial network (GAN) sharing and aggregation strategy for personalized federated learning (PFL).
PFL-GAN addresses client heterogeneity in different scenarios. More specifically, we first learn the similarity among clients and then develop a weighted collaborative data aggregation.
Empirical results from rigorous experimentation on several well-known datasets demonstrate the effectiveness of PFL-GAN.
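The "learn similarity, then aggregate with weights" step can be sketched as follows; the softmax weighting over a pairwise similarity matrix is an illustrative assumption, not PFL-GAN's exact formulation.

```python
import numpy as np

def weighted_aggregate(target_idx, client_params, similarity):
    """Personalized aggregation for one client: average all clients'
    parameters with softmax weights over learned pairwise similarities."""
    sims = np.asarray(similarity[target_idx], dtype=float)
    weights = np.exp(sims) / np.exp(sims).sum()   # softmax over similarities
    return sum(w * p for w, p in zip(weights, client_params))
```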
arXiv Detail & Related papers (2023-08-23T22:38:35Z)
- Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for hierarchical FL (HFL) in wireless networks to reduce the neural network scale.
We show that our proposed HFL with model pruning achieves learning accuracy similar to HFL without model pruning while reducing communication cost by about 50 percent.
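Model pruning of this kind is often implemented as magnitude pruning; the sketch below is a generic illustration under that assumption and ignores the paper's joint optimization with wireless resource allocation.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out roughly the `sparsity` fraction of weights with the
    smallest magnitudes, shrinking what must be communicated."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    flat = np.abs(weights).ravel()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0     # ties may zero a few extra
    return pruned
```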
arXiv Detail & Related papers (2023-05-15T22:04:49Z)
- Stochastic Clustered Federated Learning [21.811496586350653]
This paper proposes StoCFL, a novel clustered federated learning approach for generic Non-IID issues.
Specifically, StoCFL implements a flexible CFL framework that supports an arbitrary proportion of client participation and newly joined clients.
The results show that StoCFL could obtain promising cluster results even when the number of clusters is unknown.
arXiv Detail & Related papers (2023-03-02T01:39:16Z)
- Scheduling and Aggregation Design for Asynchronous Federated Learning over Wireless Networks [56.91063444859008]
Federated Learning (FL) is a collaborative machine learning framework that combines on-device training and server-based aggregation.
We propose an asynchronous FL design with periodic aggregation to tackle the straggler issue in FL systems.
We show that an "age-aware" aggregation weighting design can significantly improve the learning performance in an asynchronous FL setting.
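An "age-aware" weighting can be sketched generically: staler updates get smaller aggregation weights. The exponential decay below is an assumption for illustration, not the paper's exact design.

```python
import numpy as np

def age_aware_aggregate(updates, staleness, decay=0.5):
    """Weight each client update by exp(-decay * staleness) so that
    stale updates contribute less, then normalize and average."""
    w = np.exp(-decay * np.asarray(staleness, dtype=float))
    w = w / w.sum()
    return sum(wi * u for wi, u in zip(w, updates))
```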
arXiv Detail & Related papers (2022-12-14T17:33:01Z)
- Device Scheduling and Update Aggregation Policies for Asynchronous Federated Learning [72.78668894576515]
Federated Learning (FL) is a newly emerged decentralized machine learning (ML) framework.
We propose an asynchronous FL framework with periodic aggregation to eliminate the straggler issue in FL systems.
arXiv Detail & Related papers (2021-07-23T18:57:08Z)
- Towards On-Device Federated Learning: A Direct Acyclic Graph-based Blockchain Approach [2.9202274421296943]
This paper introduces DAG-FL, a framework that systematically empowers Federated Learning with a Direct Acyclic Graph (DAG)-based blockchain.
Two algorithms, DAG-FL Controlling and DAG-FL Updating, are designed to run on different nodes and carry out the DAG-FL consensus mechanism.
arXiv Detail & Related papers (2021-04-27T10:29:38Z)