DAG-AFL: Directed Acyclic Graph-based Asynchronous Federated Learning
- URL: http://arxiv.org/abs/2507.20571v1
- Date: Mon, 28 Jul 2025 07:06:56 GMT
- Title: DAG-AFL: Directed Acyclic Graph-based Asynchronous Federated Learning
- Authors: Shuaipeng Zhang, Lanju Kong, Yixin Zhang, Wei He, Yongqing Zheng, Han Yu, Lizhen Cui
- Abstract summary: We propose the Directed Acyclic Graph-based Asynchronous Federated Learning (DAG-AFL) framework. We develop a tip selection algorithm that considers temporal freshness, node reachability and model accuracy, together with a DAG-based trusted verification strategy. Experiments on three datasets demonstrate that DAG-AFL significantly improves training efficiency and model accuracy by 22.7% and 6.5% on average, respectively.
- Score: 33.25534229267438
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to the distributed nature of federated learning (FL), the vulnerability of the global model and the need for coordination among many client devices pose significant challenges. As a promising decentralized, scalable and secure solution, blockchain-based FL methods have attracted widespread attention in recent years. However, traditional Proof-of-Work (PoW)-style blockchain consensus mechanisms incur substantial resource consumption and compromise the efficiency of FL, particularly when participating devices are wireless and resource-limited. To address asynchronous client participation and data heterogeneity in FL, while limiting the additional resource overhead introduced by blockchain, we propose the Directed Acyclic Graph-based Asynchronous Federated Learning (DAG-AFL) framework. We develop a tip selection algorithm that considers temporal freshness, node reachability and model accuracy, together with a DAG-based trusted verification strategy. Extensive experiments on three benchmark datasets against eight state-of-the-art approaches demonstrate that DAG-AFL significantly improves training efficiency and model accuracy by 22.7% and 6.5% on average, respectively.
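The abstract names the three tip-selection criteria but does not publish a formula. The sketch below is a minimal, hypothetical Python rendering of how such a composite score could be combined; the weighted-sum form, the exponential freshness decay, the weights `alpha`/`beta`/`gamma`, and the `Tip` fields are all assumptions for illustration, not the paper's definitions.

```python
import math
import time
from dataclasses import dataclass

@dataclass
class Tip:
    """A candidate tip (unconfirmed model update) in the DAG ledger."""
    timestamp: float      # when the update was published
    reachable_nodes: int  # how many DAG nodes this tip can reach
    accuracy: float       # validation accuracy of the attached model, in [0, 1]

def tip_score(tip: Tip, total_nodes: int, now: float,
              alpha: float = 0.4, beta: float = 0.3, gamma: float = 0.3,
              decay: float = 1e-3) -> float:
    """Composite score over temporal freshness, reachability and accuracy.

    The weighted sum and exponential decay are illustrative assumptions;
    the paper's exact scoring function is not given in this listing.
    """
    freshness = math.exp(-decay * (now - tip.timestamp))   # newer tips score higher
    reachability = tip.reachable_nodes / max(total_nodes, 1)
    return alpha * freshness + beta * reachability + gamma * tip.accuracy

def select_tips(tips: list[Tip], total_nodes: int, k: int = 2) -> list[Tip]:
    """Pick the k best-scoring tips for a new update to approve."""
    now = time.time()
    return sorted(tips, key=lambda t: tip_score(t, total_nodes, now),
                  reverse=True)[:k]
```

A client publishing a new model update would call `select_tips` to choose which existing DAG entries to approve, so fresh, well-connected, accurate updates accumulate approvals fastest.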
Related papers
- Interaction-Aware Gaussian Weighting for Clustered Federated Learning [58.92159838586751]
Federated Learning (FL) emerged as a decentralized paradigm to train models while preserving privacy. We propose a novel clustered FL method, FedGWC (Federated Gaussian Weighting Clustering), which groups clients based on their data distribution. Our experiments on benchmark datasets show that FedGWC outperforms existing FL algorithms in cluster quality and classification accuracy.
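The summary does not spell out how the Gaussian weighting is computed. Purely as a rough illustration, the snippet below clusters clients by applying a Gaussian (RBF) kernel to distances between their empirical label histograms; the histogram-based distance and the bandwidth are assumptions, not FedGWC's actual construction.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def gaussian_affinity(label_hists: np.ndarray, bandwidth: float = 0.5) -> np.ndarray:
    """Gaussian (RBF) affinity between clients' empirical label histograms.

    label_hists: (num_clients, num_classes), rows summing to 1. The use of
    label histograms as the distribution proxy is an illustrative assumption.
    """
    diffs = label_hists[:, None, :] - label_hists[None, :, :]
    sq_dists = (diffs ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2 * bandwidth ** 2))

# Toy usage: 4 clients, 3 classes, two clearly different distribution groups.
hists = np.array([[0.8, 0.1, 0.1],
                  [0.7, 0.2, 0.1],
                  [0.1, 0.1, 0.8],
                  [0.2, 0.1, 0.7]])
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(gaussian_affinity(hists))
print(labels)  # clients 0,1 and clients 2,3 fall into separate clusters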
arXiv Detail & Related papers (2025-02-05T16:33:36Z)
- Asynchronous Federated Learning: A Scalable Approach for Decentralized Machine Learning [0.9208007322096533]
Federated Learning (FL) has emerged as a powerful paradigm for decentralized machine learning, enabling collaborative model training across diverse clients without sharing raw data. Traditional FL approaches often face limitations in scalability and efficiency due to their reliance on synchronous client updates. We propose an Asynchronous Federated Learning (AFL) algorithm, which allows clients to update the global model independently and asynchronously.
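As a generic illustration of the asynchronous pattern this summary describes (not the paper's specific algorithm), the sketch below shows a server that merges each client update the moment it arrives, down-weighting stale updates; the staleness-discounted mixing rule is a common AFL heuristic (cf. FedAsync) used here only to convey the idea.

```python
import numpy as np

class AsyncFLServer:
    """Minimal asynchronous FL server: updates are merged on arrival."""

    def __init__(self, model: np.ndarray, base_lr: float = 0.5):
        self.model = model
        self.base_lr = base_lr
        self.version = 0  # incremented on every merge

    def submit(self, client_model: np.ndarray, client_version: int) -> None:
        # Updates trained against an outdated global model count for less.
        staleness = self.version - client_version
        weight = self.base_lr / (1 + staleness)
        self.model = (1 - weight) * self.model + weight * client_model
        self.version += 1

server = AsyncFLServer(model=np.zeros(3))
server.submit(np.ones(3), client_version=0)       # fresh update, full weight
server.submit(np.full(3, 2.0), client_version=0)  # now stale by one version
print(server.model, server.version)
```

No client ever waits for another: each `submit` call is independent, which is exactly the scalability property the abstract contrasts with synchronous rounds.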
arXiv Detail & Related papers (2024-12-23T17:11:02Z)
- Client Contribution Normalization for Enhanced Federated Learning [4.726250115737579]
Mobile devices, including smartphones and laptops, generate decentralized and heterogeneous data.
Federated Learning (FL) offers a promising alternative by enabling collaborative training of a global model across decentralized devices without data sharing.
This paper focuses on data-dependent heterogeneity in FL and proposes a novel approach leveraging mean latent representations extracted from locally trained models.
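The summary mentions mean latent representations without further detail. Purely as an illustrative sketch, the code below computes a client's mean latent vector from a local encoder and compares it to a global reference, the kind of summary statistic such a normalization scheme could use; the toy encoder and the cosine-based comparison are assumptions, not the paper's method.

```python
import numpy as np

def mean_latent(encoder, data: np.ndarray) -> np.ndarray:
    """Mean of the encoder's latent representations over a client's local data."""
    return encoder(data).mean(axis=0)

def representation_shift(client_mean: np.ndarray, global_mean: np.ndarray) -> float:
    """Cosine distance between a client's mean latent vector and the global one.

    A large shift suggests the client's data diverges from the population;
    a normalization scheme could down- or re-weight that client's
    contribution accordingly. The cosine measure is assumed for illustration.
    """
    cos = client_mean @ global_mean / (
        np.linalg.norm(client_mean) * np.linalg.norm(global_mean) + 1e-12)
    return 1.0 - cos

# Toy encoder: a fixed random linear projection into a 4-d latent space.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))
encoder = lambda x: x @ W

client_data = rng.normal(size=(32, 8))
global_mean = mean_latent(encoder, rng.normal(size=(256, 8)))
print(representation_shift(mean_latent(encoder, client_data), global_mean))
```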
arXiv Detail & Related papers (2024-11-10T04:03:09Z)
- Digital Twin-Assisted Federated Learning with Blockchain in Multi-tier Computing Systems [67.14406100332671]
In Industry 4.0 systems, resource-constrained edge devices engage in frequent data interactions.
This paper proposes a digital twin (DT)-assisted federated learning (FL) scheme.
The efficacy of our proposed cooperative interference-based FL process has been verified through numerical analysis.
arXiv Detail & Related papers (2024-11-04T17:48:02Z)
- FedPAE: Peer-Adaptive Ensemble Learning for Asynchronous and Model-Heterogeneous Federated Learning [9.084674176224109]
Federated learning (FL) enables multiple clients with distributed data sources to collaboratively train a shared model without compromising data privacy.
We introduce Federated Peer-Adaptive Ensemble Learning (FedPAE), a fully decentralized pFL algorithm that supports model heterogeneity and asynchronous learning.
Our approach utilizes a peer-to-peer model sharing mechanism and ensemble selection to achieve a more refined balance between local and global information.
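The mechanics of FedPAE's ensemble selection are not given in this snippet. As a hypothetical illustration only, the sketch below has a client greedily pick, from the peer models it has received, the subset whose averaged predictions perform best on its local validation data; greedy forward selection is a stand-in, not FedPAE's actual rule.

```python
import numpy as np

def greedy_ensemble_selection(peer_probs: list[np.ndarray],
                              val_labels: np.ndarray,
                              max_size: int = 3) -> list[int]:
    """Greedily grow an ensemble of peer models by local validation accuracy.

    peer_probs: per-peer predicted class probabilities on this client's
    validation set, each of shape (num_samples, num_classes).
    """
    chosen: list[int] = []
    best_acc = -1.0
    while len(chosen) < max_size:
        best_idx = None
        for i in range(len(peer_probs)):
            if i in chosen:
                continue
            avg = np.mean([peer_probs[j] for j in chosen + [i]], axis=0)
            acc = float((avg.argmax(axis=1) == val_labels).mean())
            if acc > best_acc:
                best_acc, best_idx = acc, i
        if best_idx is None:   # no remaining peer improves accuracy: stop early
            break
        chosen.append(best_idx)
    return chosen
```

Because selection operates on predictions rather than parameters, peers may run entirely different architectures, which is consistent with the model heterogeneity the summary emphasizes.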
arXiv Detail & Related papers (2024-10-17T22:47:19Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
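For contrast, here is the aggregate-then-adapt pattern the summary refers to: a plain FedAvg-style round in which the server averages locally adapted models. FedAF, by definition, dispenses with this model-averaging step; its actual mechanism is not described in the snippet above, and the toy local training below is illustrative only.

```python
import numpy as np

def fedavg_round(global_model: np.ndarray, clients, local_steps: int = 1) -> np.ndarray:
    """One aggregate-then-adapt round (the pattern FedAF avoids).

    Each client adapts the current global model to its local data, then the
    server averages the results, weighted by client dataset size.
    """
    updates, sizes = [], []
    for data, targets in clients:          # (features, labels) per client
        w = global_model.copy()
        for _ in range(local_steps):       # toy local "training" step:
            grad = 2 * data.T @ (data @ w - targets) / len(data)  # least-squares grad
            w -= 0.01 * grad
        updates.append(w)
        sizes.append(len(data))
    sizes = np.asarray(sizes, dtype=float)
    return np.average(updates, axis=0, weights=sizes)  # the aggregation step
```

Under heterogeneous data, the averaged model can sit far from every client's local optimum; removing the averaging step is the lever an aggregation-free design pulls.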
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- Client Orchestration and Cost-Efficient Joint Optimization for NOMA-Enabled Hierarchical Federated Learning [55.49099125128281]
We propose a non-orthogonal multiple access (NOMA)-enabled hierarchical federated learning (HFL) system under semi-synchronous cloud model aggregation.
We show that the proposed scheme outperforms the considered benchmarks regarding HFL performance improvement and total cost reduction.
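Semi-synchronous aggregation sits between the synchronous and asynchronous extremes. As a generic sketch (not this paper's protocol), the server below aggregates whichever client updates arrive before a round deadline and proceeds without the stragglers.

```python
import numpy as np

def semi_sync_aggregate(arrivals: list[tuple[float, np.ndarray]],
                        deadline: float) -> np.ndarray | None:
    """Average only the updates that arrive before the round deadline.

    arrivals: (arrival_time_seconds, model_update) pairs for one round.
    Stragglers past the deadline are simply excluded, trading some
    information for bounded round time; the rule is illustrative only.
    """
    on_time = [u for t, u in arrivals if t <= deadline]
    if not on_time:
        return None          # nothing made the deadline; skip this round
    return np.mean(on_time, axis=0)

# Toy round: three clients, one straggler misses the 1.0 s deadline.
updates = [(0.4, np.array([1.0, 0.0])), (0.9, np.array([0.0, 1.0])),
           (1.7, np.array([5.0, 5.0]))]
print(semi_sync_aggregate(updates, deadline=1.0))  # -> [0.5 0.5]
```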
arXiv Detail & Related papers (2023-11-03T13:34:44Z)
- DAG-ACFL: Asynchronous Clustered Federated Learning based on DAG-DLT [5.819679865834583]
Federated learning (FL) aims to collaboratively train a global model while ensuring client data privacy.
We propose DAG-ACFL, an asynchronous clustered FL framework based on directed acyclic graph distributed ledger techniques (DAG-DLT).
We evaluate the clustering and training performance of DAG-ACFL on multiple datasets and analyze its communication and storage costs.
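The summary leaves the clustering mechanism implicit. One plausible reading of a clustered DAG-DLT design, offered only as a hedged sketch, is that each client approves the tips whose model parameters are most similar to its own local update, so clients with alike data implicitly cluster along DAG branches; the cosine similarity below is an illustrative choice, not the paper's stated rule.

```python
import numpy as np

def most_similar_tips(local_update: np.ndarray,
                      tip_models: list[np.ndarray], k: int = 2) -> list[int]:
    """Indices of the k tips whose parameters best match the local update.

    Approving similar tips steers clients with similar data toward the
    same DAG branch, one way clustered behavior can emerge without a
    central coordinator.
    """
    def cos(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    sims = [cos(local_update, t) for t in tip_models]
    return sorted(range(len(tip_models)), key=lambda i: sims[i], reverse=True)[:k]
```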
arXiv Detail & Related papers (2023-08-25T03:35:29Z)
- A Safe Genetic Algorithm Approach for Energy Efficient Federated Learning in Wireless Communication Networks [53.561797148529664]
Federated Learning (FL) has emerged as a decentralized technique in which, contrary to traditional centralized approaches, devices perform model training collaboratively.
Despite existing efforts in FL, its environmental impact remains under investigation, since several critical challenges regarding its applicability to wireless networks have been identified.
The current work proposes a Genetic Algorithm (GA) approach, targeting the minimization of both the overall energy consumption of an FL process and any unnecessary resource utilization.
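To make the GA formulation concrete, here is a hedged sketch of how such an approach might encode the problem: each genome is a client-selection bitmask and the fitness penalizes total energy while enforcing a minimum participation constraint. The energy model, constraint, and operators are illustrative assumptions, not the paper's formulation.

```python
import random

def fitness(mask: list[int], energy: list[float], min_clients: int = 3) -> float:
    """Lower total energy is better; infeasible selections are rejected."""
    if sum(mask) < min_clients:            # too few participants to train well
        return float("inf")
    return sum(e for m, e in zip(mask, energy) if m)

def genetic_search(energy: list[float], pop_size: int = 20, gens: int = 50,
                   mut_rate: float = 0.1, seed: int = 0) -> list[int]:
    """Tiny GA: tournament selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    n = len(energy)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a, energy) < fitness(b, energy) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n)
            child = p1[:cut] + p2[cut:]                 # one-point crossover
            child = [1 - g if rng.random() < mut_rate else g for g in child]
            nxt.append(child)
        pop = nxt
    return min(pop, key=lambda m: fitness(m, energy))

print(genetic_search(energy=[5.0, 1.0, 3.0, 0.5, 2.0, 4.0]))
```

The "safe" aspect the title alludes to would correspond to the hard feasibility constraint here: candidate solutions that violate it are never allowed to win selection.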
arXiv Detail & Related papers (2023-06-25T13:10:38Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Towards On-Device Federated Learning: A Direct Acyclic Graph-based Blockchain Approach [2.9202274421296943]
This paper introduces DAG-FL, a framework for empowering Federated Learning using a Direct Acyclic Graph (DAG)-based blockchain.
Two algorithms, DAG-FL Controlling and DAG-FL Updating, are designed to run on different nodes and together realize the DAG-FL consensus mechanism.
arXiv Detail & Related papers (2021-04-27T10:29:38Z)