FedDCT: A Dynamic Cross-Tier Federated Learning Scheme in Wireless
Communication Networks
- URL: http://arxiv.org/abs/2307.04420v1
- Date: Mon, 10 Jul 2023 08:54:07 GMT
- Authors: Peng Liu, Youquan Xian, Chuanjian Yao, Xiaoyun Gan, Lianghaojie Zhou,
Jianyong Jiang, Dongcheng Li
- Abstract summary: Federated Learning (FL) enables the training of a global model among clients without exposing local data.
We propose a novel dynamic cross-tier FL scheme, named FedDCT, to increase training accuracy and performance in wireless communication networks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rapid proliferation of Internet of Things (IoT) devices and the
growing concern for data privacy among the public, Federated Learning (FL) has
gained significant attention as a privacy-preserving machine learning paradigm.
FL enables the training of a global model among clients without exposing local
data. However, when a federated learning system runs on wireless communication
networks, limited wireless resources, heterogeneity of clients, and network
transmission failures affect its performance and accuracy. In this study, we
propose a novel dynamic cross-tier FL scheme, named FedDCT, to increase training
accuracy and performance in wireless communication networks. We utilize a
tiering algorithm that dynamically divides clients into different tiers
according to specific indicators and assigns specific timeout thresholds to
each tier to reduce the training time required. To improve the accuracy of the
model without increasing the training time, we introduce a cross-tier client
selection algorithm that can effectively select the tiers and participants.
Simulation experiments show that our scheme can make the model converge faster
and achieve a higher accuracy in wireless communication networks.
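The two components of the scheme, latency-based tiering with per-tier timeouts and cross-tier participant selection, can be sketched as below. This is an illustrative sketch only: the latency indicator, the 1.2x timeout margin, and the round-robin selection rule are assumptions made for demonstration, not the paper's actual algorithms.

```python
import random

def assign_tiers(clients, num_tiers=3):
    """Sort clients by an observed latency indicator and split them into
    equally sized tiers; faster clients land in lower-numbered tiers."""
    ranked = sorted(clients, key=lambda c: c["latency"])
    size = -(-len(ranked) // num_tiers)  # ceiling division
    tiers = [ranked[i:i + size] for i in range(0, len(ranked), size)]
    # One timeout per tier (slowest member plus an assumed 20% margin),
    # so no tier is stalled waiting for clients far slower than its peers.
    timeouts = [max(c["latency"] for c in t) * 1.2 for t in tiers]
    return tiers, timeouts

def cross_tier_select(tiers, per_round=4, rng=None):
    """Pick participants across tiers (round-robin here) so slow tiers
    still contribute without dominating the round's wall-clock time."""
    rng = rng or random.Random(0)
    picked, t = [], 0
    while len(picked) < per_round:
        candidates = [c for c in tiers[t % len(tiers)] if c not in picked]
        if candidates:
            picked.append(rng.choice(candidates))
        t += 1
    return picked

# Toy population with deterministic, simulated latencies.
clients = [{"id": i, "latency": random.Random(i).uniform(1, 10)} for i in range(9)]
tiers, timeouts = assign_tiers(clients)
```

Because the clients are pre-sorted, the per-tier timeouts are non-decreasing, which is what lets each round finish in roughly the time budget of the tiers actually selected.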
Related papers
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup
for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific, auto-tuned learning-rate scheduling converges and achieves linear speedup with respect to the number of clients.
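A loose sketch of the idea of a client-local AMSGrad variant, where each client keeps its own moment estimates so its effective step size adapts to its local (possibly non-IID) gradients. The hyperparameters and the toy quadratic objective are illustrative assumptions, not FedLALR's actual schedule.

```python
import math

class ClientAMSGrad:
    """Per-client AMSGrad over a list of scalar parameters (sketch)."""

    def __init__(self, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.m, self.v, self.vhat, self.t = {}, {}, {}, 0

    def step(self, params, grads):
        self.t += 1
        new_params = []
        for i, (p, g) in enumerate(zip(params, grads)):
            m = self.b1 * self.m.get(i, 0.0) + (1 - self.b1) * g
            v = self.b2 * self.v.get(i, 0.0) + (1 - self.b2) * g * g
            vhat = max(self.vhat.get(i, 0.0), v)  # AMSGrad: v estimate never decreases
            self.m[i], self.v[i], self.vhat[i] = m, v, vhat
            new_params.append(p - self.lr * m / (math.sqrt(vhat) + self.eps))
        return new_params

# Toy client objective f(x) = x^2 with gradient 2x.
opt = ClientAMSGrad(lr=0.1)
x = [5.0]
for _ in range(1000):
    x = opt.step(x, [2 * x[0]])
```

Each client would run its own optimizer instance, so the server only averages the resulting model deltas; the learning-rate state never leaves the device.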
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
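The global/personalized split can be sketched as below. The magnitude-pruning criterion and the FedAvg-style mean are generic stand-ins for the paper's scheme, and `SplitModel` is a hypothetical container introduced only for illustration.

```python
def magnitude_prune(weights, sparsity=0.5):
    """Zero out roughly the smallest-magnitude `sparsity` fraction of the
    shared weights (generic magnitude pruning, assumed here)."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights), [1] * len(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    mask = [0 if abs(w) <= threshold else 1 for w in weights]
    return [w * m for w, m in zip(weights, mask)], mask

class SplitModel:
    """Hypothetical container: `global_w` is shared (and pruned) across all
    devices to learn common data representations; `personal_w` stays
    on-device and is fine-tuned locally."""
    def __init__(self, global_w, personal_w):
        self.global_w = list(global_w)
        self.personal_w = list(personal_w)

def aggregate_global(models):
    """FedAvg-style mean over the shared part only; the personalized
    parts are never sent to the server."""
    n, dim = len(models), len(models[0].global_w)
    return [sum(m.global_w[i] for m in models) / n for i in range(dim)]
```

Only the (pruned) global part crosses the wireless link, which is how the framework trades a little representational capacity for lower communication cost.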
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Time-sensitive Learning for Heterogeneous Federated Edge Intelligence [52.83633954857744]
We investigate real-time machine learning in a federated edge intelligence (FEI) system.
FEI systems exhibit heterogeneous communication and computational resource distributions.
We propose a time-sensitive federated learning (TS-FL) framework to minimize the overall run-time for collaboratively training a shared ML model.
arXiv Detail & Related papers (2023-01-26T08:13:22Z)
- CFLIT: Coexisting Federated Learning and Information Transfer [18.30671838758503]
We study the coexistence of over-the-air FL and traditional information transfer (IT) in a mobile edge network.
We propose a coexisting federated learning and information transfer (CFLIT) communication framework, where the FL and IT devices share the wireless spectrum in an OFDM system.
arXiv Detail & Related papers (2022-07-26T13:17:28Z)
- Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better [88.28293442298015]
Federated learning (FL) enables distribution of machine learning workloads from the cloud to resource-limited edge devices.
We develop, implement, and experimentally validate a novel FL framework termed Federated Dynamic Sparse Training (FedDST).
FedDST is a dynamic process that extracts and trains sparse sub-networks from the target full network.
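One step of a dynamic sparse-training mask update in the spirit of FedDST can be sketched as below. The drop-smallest-magnitude / grow-largest-gradient rule is the generic dynamic-sparse-training heuristic, assumed here for illustration rather than taken from the paper.

```python
def prune_and_grow(weights, mask, grads, adjust=1):
    """Drop the `adjust` smallest-magnitude active weights, then activate
    the `adjust` inactive positions with the largest gradient magnitude,
    keeping the total density of the sparse sub-network fixed."""
    active = [i for i, m in enumerate(mask) if m]
    inactive = [i for i, m in enumerate(mask) if not m]
    drop = sorted(active, key=lambda i: abs(weights[i]))[:adjust]
    grow = sorted(inactive, key=lambda i: -abs(grads[i]))[:adjust]
    new_w, new_mask = list(weights), list(mask)
    for i in drop:
        new_mask[i] = 0
        new_w[i] = 0.0
    for i in grow:
        new_mask[i] = 1  # grown weights restart from zero, trained next round
    return new_w, new_mask
```

Because only masked positions are trained and communicated, both the compute and the uplink payload shrink in proportion to the sparsity, which is the "computing less, communicating less" part of the title.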
arXiv Detail & Related papers (2021-12-18T02:26:38Z)
- Spatio-Temporal Federated Learning for Massive Wireless Edge Networks [23.389249751372393]
An edge server and numerous mobile devices (clients) jointly learn a global model without transporting the huge amounts of data collected by the mobile devices to the edge server.
The proposed spatio-temporal FL (STFL) approach exploits spatial and temporal correlations between learning updates from the different mobile devices scheduled to join STFL across training rounds.
An analytical framework of STFL is proposed and employed to study the learning capability of STFL via its convergence performance.
arXiv Detail & Related papers (2021-10-27T16:46:45Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL), as a paradigm of collaborative learning techniques, has attracted increasing research attention.
It is of interest to investigate fast responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Dynamic Attention-based Communication-Efficient Federated Learning [85.18941440826309]
Federated learning (FL) offers a solution to train a global machine learning model.
However, FL suffers performance degradation when client data distributions are non-IID.
We propose a new adaptive training algorithm, AdaFL, to combat this degradation.
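An attention-style server aggregation rule for the non-IID setting can be sketched as below. This is one plausible attention-based rule (softmax weights from each update's inner product with the mean update); it is an assumption for illustration and does not reproduce the actual AdaFL algorithm.

```python
import math

def attention_aggregate(updates, temperature=1.0):
    """Softmax-weight client updates by their inner product with the mean
    update, so outlier (strongly non-IID) clients are down-weighted."""
    n, dim = len(updates), len(updates[0])
    mean = [sum(u[i] for u in updates) / n for i in range(dim)]
    scores = [sum(a * b for a, b in zip(u, mean)) / temperature for u in updates]
    mx = max(scores)  # subtract the max for numerical stability
    w = [math.exp(s - mx) for s in scores]
    z = sum(w)
    w = [x / z for x in w]
    agg = [sum(w[k] * updates[k][i] for k in range(n)) for i in range(dim)]
    return agg, w
```

With two aligned clients and one opposing one, the aggregate is pulled toward the majority direction instead of the plain average, which is the intuition behind attention-weighted aggregation.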
arXiv Detail & Related papers (2021-08-12T14:18:05Z)
- Convergence Analysis and System Design for Federated Learning over Wireless Networks [16.978276697446724]
Federated learning (FL) has emerged as an important and promising learning scheme in IoT.
FL training requires frequent model exchange, which is largely affected by the wireless communication network.
In this paper, we analyze the convergence rate of FL training considering the joint impact of communication network and training settings.
arXiv Detail & Related papers (2021-04-30T02:33:29Z)
- On the Convergence Time of Federated Learning Over Wireless Networks Under Imperfect CSI [28.782485580296374]
We propose a training process that takes channel statistics as a bias to minimize the convergence time under imperfect CSI.
We also examine the trade-off between number of clients involved in the training process and model accuracy as a function of different fading regimes.
arXiv Detail & Related papers (2021-04-01T08:30:45Z)