Latency Optimization for Wireless Federated Learning in Multihop Networks
- URL: http://arxiv.org/abs/2506.12081v1
- Date: Sun, 08 Jun 2025 19:10:09 GMT
- Title: Latency Optimization for Wireless Federated Learning in Multihop Networks
- Authors: Shaba Shaon, Van-Dinh Nguyen, Dinh C. Nguyen
- Abstract summary: We study a novel latency minimization problem in wireless federated learning (FL) across multi-hop networks. We formulate an optimization problem aimed at minimizing system latency through the joint optimization of leaf and relay nodes. We observe significant latency savings in the wireless multi-hop PAFL system, with reductions of up to 69.37%.
- Score: 6.99011091188975
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we study a novel latency minimization problem in wireless federated learning (FL) across multi-hop networks. The system comprises multiple routes, each integrating leaf and relay nodes for FL model training. We explore a personalized learning and adaptive aggregation-aware FL (PAFL) framework that effectively addresses data heterogeneity across participating nodes by harmonizing individual and collective learning objectives. We formulate an optimization problem aimed at minimizing system latency through the joint optimization of leaf and relay nodes, as well as the relay routing indicator. We also incorporate an energy harvesting scheme for the relay nodes to support their relaying tasks. This formulation presents a computationally demanding challenge, so we develop a simple yet efficient algorithm based on block coordinate descent (BCD) and successive convex approximation (SCA) techniques. Simulation results illustrate the efficacy of the proposed joint optimization of leaf and relay nodes with the relay routing indicator. We observe significant latency savings in the wireless multi-hop PAFL system, with reductions of up to 69.37% compared to schemes that optimize only one node type, a traditional greedy algorithm, and a scheme without the relay routing indicator.
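The abstract describes a block coordinate descent loop whose non-convex subproblems are handled with successive convex approximation. The sketch below illustrates that optimization pattern on a deliberately simplified single-node latency model; the latency expression, the energy-cost weight mu, and all constants are hypothetical stand-ins, not the paper's formulation.

```python
import numpy as np

# Toy latency model for one leaf node (hypothetical, NOT the paper's model):
#   J(p, f) = D / (B * log2(1 + g*p))  # upload latency: rate grows with power p
#           + C / f                    # local computation: C CPU cycles at frequency f
#           + mu * p                   # toy energy-cost penalty on transmit power
D, B, g, C, mu = 1e6, 1e6, 0.5, 1e9, 0.3
P_MAX, F_MAX = 4.0, 2e9  # power (W) and CPU-frequency (Hz) budgets

def objective(p, f):
    return D / (B * np.log2(1.0 + g * p)) + C / f + mu * p

def sca_power_block(p, f, inner=200, lr=0.05):
    """SCA on the power block: repeatedly minimize a convex first-order
    surrogate of J(., f) around the current iterate (realized here as a
    projected-gradient step on the linearization). The C/f term is
    constant in p, so it drops out of the gradient."""
    for _ in range(inner):
        rate = np.log2(1.0 + g * p)
        d_latency = -D * g / (B * np.log(2.0) * (1.0 + g * p) * rate**2)
        p = np.clip(p - lr * (d_latency + mu), 1e-3, P_MAX)
    return p

def bcd(p=0.5, f=1e8, outer=10):
    """Block coordinate descent: alternate between the power block (via
    SCA) and the frequency block (closed form here: J decreases in f,
    so the budget F_MAX is optimal in this toy model)."""
    for _ in range(outer):
        p = sca_power_block(p, f)
        f = F_MAX
    return p, f

p, f = bcd()
print(f"p* = {p:.3f} W, f* = {f:.2e} Hz, J* = {objective(p, f):.4f}")
```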
Related papers
- Lightweight Federated Learning over Wireless Edge Networks [83.4818741890634]
Federated learning (FL) is an attractive alternative at the network edge, but deploying it over wireless networks is challenging. We derive a closed-form expression for the FL convergence gap in terms of transmission power, model pruning error, and quantization error. LTFL outperforms state-of-the-art schemes in experiments on real-world datasets.
arXiv Detail & Related papers (2025-07-13T09:14:17Z) - Wireless Federated Learning over UAV-enabled Integrated Sensing and Communication [2.8203310972866382]
This paper studies a new latency optimization problem in unmanned aerial vehicles (UAVs)-enabled federated learning (FL) with integrated sensing and communication.
We develop a simple yet efficient iterative algorithm to find a high-quality approximate solution, reducing system latency by up to 68.54% compared to benchmark schemes.
arXiv Detail & Related papers (2024-11-01T14:25:24Z) - Enhancing Spectrum Efficiency in 6G Satellite Networks: A GAIL-Powered Policy Learning via Asynchronous Federated Inverse Reinforcement Learning [67.95280175998792]
A novel generative adversarial imitation learning (GAIL)-powered policy learning approach is proposed for optimizing beamforming, spectrum allocation, and remote user equipment (RUE) association in satellite networks.
We employ inverse RL (IRL) to automatically learn reward functions without manual tuning.
We show that the proposed MA-AL method outperforms traditional RL approaches, achieving a 14.6% improvement in convergence and reward value.
arXiv Detail & Related papers (2024-09-27T13:05:02Z) - Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for hierarchical FL (HFL) in wireless networks to reduce the neural network scale.
We show that our proposed HFL with model pruning achieves learning accuracy similar to that of HFL without pruning while reducing communication cost by about 50 percent.
arXiv Detail & Related papers (2023-05-15T22:04:49Z) - Gradient Sparsification for Efficient Wireless Federated Learning with Differential Privacy [25.763777765222358]
Federated learning (FL) enables distributed clients to collaboratively train a machine learning model without sharing raw data with each other.
As the model size grows, the training latency increases due to limited transmission bandwidth, and model accuracy degrades when differential privacy (DP) protection is applied.
We propose a gradient-sparsification-empowered FL framework over wireless channels to improve training efficiency without sacrificing convergence performance (a generic sparsification-with-DP sketch appears after this list).
arXiv Detail & Related papers (2023-04-09T05:21:15Z) - Efficient Parallel Split Learning over Resource-constrained Wireless Edge Networks [44.37047471448793]
In this paper, we advocate the integration of the edge computing paradigm and parallel split learning (PSL).
We propose an innovative PSL framework, namely efficient parallel split learning (EPSL), to accelerate model training.
We show that the proposed EPSL framework significantly decreases the training latency needed to achieve a target accuracy.
arXiv Detail & Related papers (2023-03-26T16:09:48Z) - Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated Split Learning [56.125720497163684]
We propose a hybrid federated split learning framework in wireless networks.
We design a parallel computing scheme for model splitting without label sharing, and theoretically analyze the influence of the delayed gradient caused by the scheme on the convergence speed.
arXiv Detail & Related papers (2022-09-02T10:29:56Z) - Over-the-Air Federated Learning via Second-Order Optimization [37.594140209854906]
Federated learning (FL) generates task-oriented data traffic flows over wireless networks with limited radio resources.
We propose a novel over-the-air second-order federated optimization algorithm to simultaneously reduce the communication rounds and enable low-latency global model aggregation.
arXiv Detail & Related papers (2022-03-29T12:39:23Z) - Unit-Modulus Wireless Federated Learning Via Penalty Alternating Minimization [64.76619508293966]
Wireless federated learning (FL) is an emerging machine learning paradigm that trains a global parametric model from distributed datasets via wireless communications.
This paper proposes a wireless FL framework that simultaneously uploads local model parameters and computes global model parameters via wireless communications.
arXiv Detail & Related papers (2021-08-31T08:19:54Z) - Relay-Assisted Cooperative Federated Learning [10.05493937334448]
Over-the-air computation allows mobile devices to concurrently upload their local models.
Due to wireless channel fading, the model aggregation error at the edge server is dominated by the weakest channel among all devices.
In this paper, we propose a relay-assisted cooperative FL scheme to effectively address this straggler issue (a toy illustration of the weakest-channel effect appears after this list).
arXiv Detail & Related papers (2021-07-20T14:06:19Z) - Edge Federated Learning Via Unit-Modulus Over-The-Air Computation (Extended Version) [64.76619508293966]
This paper proposes a unit-modulus over-the-air computation (UM-AirComp) framework to facilitate efficient edge federated learning.
It simultaneously uploads local model parameters and updates global model parameters via analog beamforming.
We demonstrate the implementation of UM-AirComp in a vehicle-to-everything autonomous driving simulation platform.
arXiv Detail & Related papers (2021-01-28T15:10:22Z)
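The gradient-sparsification entry above motivates compressing updates before transmission while retaining differential privacy. The following is a minimal, generic top-k sparsification plus Gaussian-mechanism sketch; it is not the cited paper's algorithm, and the clipping bound, noise multiplier, and sparsity fraction are illustrative assumptions.

```python
import numpy as np

def sparsify_with_dp(grad, k_frac=0.1, clip=1.0, noise_mult=1.0, rng=None):
    """Generic top-k gradient sparsification followed by Gaussian DP noise.
    NOT the cited paper's scheme; a textbook-style illustration only."""
    rng = rng or np.random.default_rng()
    # 1) Clip the gradient to bound per-client sensitivity.
    norm = np.linalg.norm(grad)
    grad = grad * min(1.0, clip / (norm + 1e-12))
    # 2) Keep only the k largest-magnitude entries (top-k sparsification).
    k = max(1, int(k_frac * grad.size))
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    # 3) Add Gaussian noise calibrated to the clipping bound.
    sparse[idx] += rng.normal(0.0, noise_mult * clip, size=k)
    return sparse

g = np.random.default_rng(0).normal(size=1000)
print(np.count_nonzero(sparsify_with_dp(g)))  # roughly 100 nonzero entries
```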
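The relay-assisted entry notes that in over-the-air computation the aggregation error is dominated by the weakest channel. The toy calculation below shows why: under a peak-power budget, the common alignment factor is capped by the smallest channel gain, so boosting the weakest link (e.g., via a relay) shrinks the post-scaling noise variance. The channel model and relay gain here are hypothetical, not the cited scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
n_devices = 10
h = rng.rayleigh(scale=1.0, size=n_devices)  # fading channel magnitudes

def aircomp_noise_var(h, p_max=1.0, sigma2=0.01):
    """In over-the-air computation, devices pre-scale transmissions so the
    superposed signal equals the desired sum. Under a peak-power budget
    p_max, the common alignment factor eta is capped by the weakest
    channel, and the post-scaling noise variance grows as sigma2/eta^2."""
    eta = np.sqrt(p_max) * h.min()   # largest feasible alignment factor
    return sigma2 / (eta ** 2 * len(h) ** 2)

print("MSE without relay:", aircomp_noise_var(h))
h_relayed = h.copy()
h_relayed[h.argmin()] *= 4.0         # hypothetical relay boosting the weakest link
print("MSE with relay:   ", aircomp := aircomp_noise_var(h_relayed), sep=" ")
```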