FedDCT: A Dynamic Cross-Tier Federated Learning Framework in Wireless Networks
- URL: http://arxiv.org/abs/2307.04420v2
- Date: Tue, 19 Nov 2024 05:51:39 GMT
- Title: FedDCT: A Dynamic Cross-Tier Federated Learning Framework in Wireless Networks
- Authors: Youquan Xian, Xiaoyun Gan, Chuanjian Yao, Dongcheng Li, Peng Wang, Peng Liu, Ying Zhao
- Abstract summary: Federated Learning (FL) trains a global model across devices without exposing local data.
However, resource heterogeneity and inevitable stragglers in wireless networks severely impact the efficiency and accuracy of FL training.
We propose a novel Dynamic Cross-Tier Federated Learning framework (FedDCT).
- Score: 5.914766366715661
- Abstract: Federated Learning (FL), as a privacy-preserving machine learning paradigm, trains a global model across devices without exposing local data. However, resource heterogeneity and inevitable stragglers in wireless networks severely impact the efficiency and accuracy of FL training. In this paper, we propose a novel Dynamic Cross-Tier Federated Learning framework (FedDCT). Firstly, we design a dynamic tiering strategy that dynamically partitions devices into different tiers based on their response times and assigns specific timeout thresholds to each tier to reduce single-round training time. Then, we propose a cross-tier device selection algorithm that selects devices that respond quickly and are conducive to model convergence to improve convergence efficiency and accuracy. Experimental results demonstrate that the proposed approach under wireless networks outperforms the baseline approach, with an average reduction of 54.7% in convergence time and an average improvement of 1.83% in convergence accuracy.
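The abstract only sketches the tiering and selection logic, so the snippet below is a minimal illustrative sketch rather than the authors' implementation: the quantile-based tier split, the per-tier timeout rule, and the per-device utility score used for cross-tier selection are all assumptions.

```python
import random
from statistics import quantiles

def assign_tiers(response_times, num_tiers=3):
    """Partition devices into tiers by observed response time (assumed quantile split)."""
    cuts = quantiles(response_times.values(), n=num_tiers)      # num_tiers - 1 boundaries
    tiers = {t: [] for t in range(num_tiers)}
    for device, rt in response_times.items():
        tiers[sum(rt > c for c in cuts)].append(device)         # slower devices -> higher tiers
    # Assumed rule: a tier's timeout is the slowest response time observed inside it.
    timeouts = {t: max(response_times[d] for d in devs) for t, devs in tiers.items() if devs}
    return tiers, timeouts

def select_devices(tiers, utility, per_round=10):
    """Cross-tier selection sketch: prefer fast tiers, and within a tier prefer
    devices whose (assumed) utility score suggests they help convergence."""
    ranked = []
    for tier in sorted(tiers):                                   # fastest tier first
        ranked += sorted(tiers[tier], key=lambda d: utility[d], reverse=True)
    return ranked[:per_round]

# Toy usage with simulated response times (seconds) and utility scores.
rng = random.Random(0)
devices = [f"dev{i}" for i in range(20)]
response_times = {d: rng.uniform(1.0, 30.0) for d in devices}
utility = {d: rng.random() for d in devices}
tiers, timeouts = assign_tiers(response_times)
print(select_devices(tiers, utility))
```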
Related papers
- Enhancing Federated Learning Convergence with Dynamic Data Queue and Data Entropy-driven Participant Selection [13.825031686864559]
Federated Learning (FL) is a decentralized approach for collaborative model training on edge devices.
We present a method to improve convergence in FL by creating a global subset of data on the server and dynamically distributing it across devices.
Our approach results in a substantial accuracy boost of approximately 5% for the MNIST dataset, around 18% for CIFAR-10, and 20% for CIFAR-100 with a 10% global subset of data, outperforming the state-of-the-art (SOTA) aggregation algorithms.
arXiv Detail & Related papers (2024-10-23T11:47:04Z)
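The entry above names a data-entropy-driven participant-selection criterion without spelling it out; the following is a rough, assumed illustration only (the label-histogram inputs and the "pick the most balanced clients" rule are guesses, not the paper's algorithm).

```python
import math

def label_entropy(label_counts):
    """Shannon entropy of a client's label histogram (higher = more balanced data)."""
    total = sum(label_counts.values())
    probs = [c / total for c in label_counts.values() if c > 0]
    return -sum(p * math.log2(p) for p in probs)

def pick_participants(client_label_counts, k=3):
    """Assumed rule: prefer clients whose local label distribution is closest to uniform."""
    scores = {c: label_entropy(h) for c, h in client_label_counts.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

clients = {
    "a": {0: 50, 1: 50, 2: 40},   # fairly balanced
    "b": {0: 95, 1: 5},           # highly skewed
    "c": {0: 30, 1: 35, 2: 35},
    "d": {1: 100},                # single class
}
print(pick_participants(clients, k=2))  # ['c', 'a']
```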
- NTK-DFL: Enhancing Decentralized Federated Learning in Heterogeneous Settings via Neural Tangent Kernel [27.92271597111756]
Decentralized federated learning (DFL) is a collaborative machine learning framework for training a model across participants without a central server or raw data exchange.
Recent work has shown that the neural tangent kernel (NTK) approach, when applied to federated learning in a centralized framework, can lead to improved performance.
We propose an approach leveraging the NTK to train client models in the decentralized setting, while introducing a synergy between NTK-based evolution and model averaging.
arXiv Detail & Related papers (2024-10-02T18:19:28Z)
- DYNAMITE: Dynamic Interplay of Mini-Batch Size and Aggregation Frequency for Federated Learning with Static and Streaming Dataset [23.11152686493894]
Federated Learning (FL) is a distributed learning paradigm that can coordinate heterogeneous edge devices to perform model training without sharing private data.
This paper introduces novel analytical models and optimization algorithms that leverage the interplay between batch size and aggregation frequency to navigate the trade-offs among convergence, cost, and completion time for dynamic FL training.
arXiv Detail & Related papers (2023-10-20T08:36:12Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Time-sensitive Learning for Heterogeneous Federated Edge Intelligence [52.83633954857744]
We investigate real-time machine learning in a federated edge intelligence (FEI) system.
FEI systems exhibit heterogeneous communication and computational resource distributions.
We propose a time-sensitive federated learning (TS-FL) framework to minimize the overall run-time for collaboratively training a shared ML model.
arXiv Detail & Related papers (2023-01-26T08:13:22Z)
- Stochastic Coded Federated Learning: Theoretical Analysis and Incentive Mechanism Design [18.675244280002428]
We propose a novel FL framework named stochastic coded federated learning (SCFL) that leverages coded computing techniques.
In SCFL, each edge device uploads a privacy-preserving coded dataset to the server, which is generated by adding noise to the projected local dataset.
We show that SCFL learns a better model within the given time and achieves a better privacy-performance tradeoff than the baseline methods.
arXiv Detail & Related papers (2022-11-08T09:58:36Z)
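The SCFL entry above says each device uploads a coded dataset built by adding noise to a projected local dataset; the following is only a schematic sketch of that idea, with the random-projection dimension and Gaussian noise scale chosen arbitrarily rather than taken from the paper.

```python
import numpy as np

def make_coded_dataset(local_data, proj_dim=32, noise_std=0.1, seed=0):
    """Sketch of a privacy-preserving coded dataset: random projection plus additive noise."""
    rng = np.random.default_rng(seed)
    d = local_data.shape[1]
    projection = rng.standard_normal((d, proj_dim)) / np.sqrt(proj_dim)  # compress features
    projected = local_data @ projection                                   # shape (n, proj_dim)
    noise = rng.normal(scale=noise_std, size=projected.shape)             # mask raw values
    return projected + noise                                              # what the device uploads

# Toy usage: 100 samples with 64 features held by one edge device.
local_data = np.random.default_rng(1).standard_normal((100, 64))
print(make_coded_dataset(local_data).shape)  # (100, 32)
```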
- Over-the-Air Federated Learning via Second-Order Optimization [37.594140209854906]
Federated learning (FL) could result in task-oriented data traffic flows over wireless networks with limited radio resources.
We propose a novel over-the-air second-order federated optimization algorithm to simultaneously reduce the communication rounds and enable low-latency global model aggregation.
arXiv Detail & Related papers (2022-03-29T12:39:23Z)
- Parallel Successive Learning for Dynamic Distributed Model Training over Heterogeneous Wireless Networks [50.68446003616802]
Federated learning (FedL) has emerged as a popular technique for distributing model training over a set of wireless devices.
We develop parallel successive learning (PSL), which expands the FedL architecture along three dimensions.
Our analysis sheds light on the notion of cold vs. warmed up models, and model inertia in distributed machine learning.
arXiv Detail & Related papers (2022-02-07T05:11:01Z)
- Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better [88.28293442298015]
Federated learning (FL) enables distribution of machine learning workloads from the cloud to resource-limited edge devices.
We develop, implement, and experimentally validate a novel FL framework termed Federated Dynamic Sparse Training (FedDST).
FedDST is a dynamic process that extracts and trains sparse sub-networks from the target full network.
arXiv Detail & Related papers (2021-12-18T02:26:38Z)
- Dynamic Attention-based Communication-Efficient Federated Learning [85.18941440826309]
Federated learning (FL) offers a solution to train a global machine learning model.
FL suffers performance degradation when client data distribution is non-IID.
We propose a new adaptive training algorithm AdaFL to combat this degradation.
arXiv Detail & Related papers (2021-08-12T14:18:05Z)
- Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training.
arXiv Detail & Related papers (2020-07-26T14:37:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences arising from its use.