Adaptive Quantization Resolution and Power Control for Federated Learning over Cell-free Networks
- URL: http://arxiv.org/abs/2412.10878v1
- Date: Sat, 14 Dec 2024 16:08:05 GMT
- Title: Adaptive Quantization Resolution and Power Control for Federated Learning over Cell-free Networks
- Authors: Afsaneh Mahmoudi, Emil Björnson
- Abstract summary: Federated learning (FL) is a distributed learning framework where users train a global model by exchanging local model updates with a server instead of raw datasets.
Cell-free massive multiple-input multiple-output (CFmMIMO) is a promising solution to serve numerous users on the same time/frequency resource with similar rates.
In this paper, we co-optimize the physical layer with the FL application to mitigate the straggler effect.
- Score: 41.23236059700041
- Abstract: Federated learning (FL) is a distributed learning framework where users train a global model by exchanging local model updates with a server instead of raw datasets, preserving data privacy and reducing communication overhead. However, the latency grows with the number of users and the model size, impeding successful FL over traditional wireless networks with orthogonal access. Cell-free massive multiple-input multiple-output (CFmMIMO) is a promising solution to serve numerous users on the same time/frequency resource with similar rates. This architecture greatly reduces uplink latency through spatial multiplexing but does not take application characteristics into account. In this paper, we co-optimize the physical layer with the FL application to mitigate the straggler effect. We introduce a novel adaptive mixed-resolution quantization scheme for the local gradient vector updates, where only the most essential entries are given high resolution. Thereafter, we propose a dynamic uplink power control scheme to manage the varying user rates and mitigate the straggler effect. The numerical results demonstrate that the proposed method achieves test accuracy comparable to classic FL while reducing communication overhead by at least 93% on the CIFAR-10, CIFAR-100, and Fashion-MNIST datasets. We compare our methods against AQUILA, Top-q, and LAQ, using the max-sum rate and Dinkelbach power control schemes. Our approach reduces the communication overhead by 75% and achieves 10% higher test accuracy than these benchmarks within a constrained total latency budget.
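The abstract names the technique but not its construction, so what follows is only a minimal sketch of the mixed-resolution idea, assuming a top-k magnitude split with uniform quantizers at two bit-widths; the names (quantize_mixed, k_frac, high_bits, low_bits) are illustrative, and the cost of signaling which entries are high-resolution is ignored.
```python
import numpy as np

def uniform_quantize(x, bits):
    """Uniformly quantize x over its own dynamic range using 2**bits levels."""
    if x.size == 0:
        return x
    lo, hi = x.min(), x.max()
    if hi == lo:
        return np.full_like(x, lo)
    levels = 2 ** bits - 1
    q = np.round((x - lo) / (hi - lo) * levels)
    return lo + q * (hi - lo) / levels

def quantize_mixed(grad, k_frac=0.05, high_bits=8, low_bits=2):
    """Mixed resolution: the k largest-magnitude entries get high_bits,
    every other entry gets low_bits (illustrative split, not the paper's)."""
    g = grad.ravel().astype(np.float64)
    k = max(1, int(k_frac * g.size))
    essential = np.argpartition(np.abs(g), -k)[-k:]   # "most essential" entries
    mask = np.zeros(g.size, dtype=bool)
    mask[essential] = True
    out = np.empty_like(g)
    out[mask] = uniform_quantize(g[mask], high_bits)
    out[~mask] = uniform_quantize(g[~mask], low_bits)
    return out.reshape(grad.shape)

# Payload for n entries: k at high resolution, the rest at low resolution.
n = 10_000
g = np.random.default_rng(0).normal(size=n)
q = quantize_mixed(g)
payload_bits = 0.05 * n * 8 + 0.95 * n * 2
print(f"overhead reduction vs float32: {1 - payload_bits / (32 * n):.0%}")
```
With these illustrative settings the payload is about 2.3 bits per entry instead of 32, roughly the order of the 93% overhead reduction reported above, though the paper's actual quantizer and bit-widths may differ.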
Related papers
- Accelerating Energy-Efficient Federated Learning in Cell-Free Networks with Adaptive Quantization [45.99908087352264]
Federated Learning (FL) enables clients to share learning parameters instead of local data, reducing communication overhead.
Traditional wireless networks face latency challenges with FL.
We propose an energy-efficient, low-latency FL framework featuring optimized uplink power allocation for seamless client-server collaboration.
arXiv Detail & Related papers (2024-12-30T08:10:21Z)
- Joint Energy and Latency Optimization in Federated Learning over Cell-Free Massive MIMO Networks [36.6868658064971]
Federated learning (FL) is a distributed learning paradigm wherein users exchange FL models with a server instead of raw datasets.
Cell-free massive multiple-input multiple-output (CFmMIMO) is a promising architecture for implementing FL because it serves many users on the same time/frequency resources.
We propose an uplink power allocation scheme for FL over CFmMIMO that considers the effect of each user's power on the energy and latency of the other users; a sketch of this coupling follows the entry.
arXiv Detail & Related papers (2024-04-28T19:24:58Z)
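The allocation scheme itself is in the paper; the toy model below only illustrates the coupling the entry mentions: raising one user's transmit power degrades every other user's SINR and therefore stretches their upload latency. All numbers and symbols (gains, model_bits, bandwidth, noise) are illustrative assumptions.
```python
import numpy as np

def uplink_latencies(p, gains, model_bits=1e6, bandwidth=1e7, noise=1e-13):
    """Per-user upload latency when all users transmit at once: raising one
    user's power p[k] lowers every other user's SINR and hence their rate."""
    p = np.asarray(p, dtype=float)
    received = gains * p                      # received power per user
    interference = received.sum() - received  # everyone else's received power
    rates = bandwidth * np.log2(1 + received / (interference + noise))
    return model_bits / rates                 # seconds to upload the update

gains = np.array([1e-9, 5e-10, 2e-10])           # illustrative channel gains
print(uplink_latencies([0.1, 0.1, 0.1], gains))
print(uplink_latencies([1.0, 0.1, 0.1], gains))  # user 0 speeds up, others slow down
```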
- Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data from the various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), to accommodate a larger number of users; a sketch of an event-triggered upload rule follows the entry.
arXiv Detail & Related papers (2024-02-28T03:27:10Z)
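The SAGA machinery is beyond this snippet; below is a minimal sketch of the event-triggered part only, where a user uploads its update once it has drifted sufficiently from the last transmitted one. The trigger rule and threshold are illustrative assumptions, not the paper's condition.
```python
import numpy as np

def should_transmit(new_update, last_sent, threshold=0.01):
    """Event trigger: upload only if the local update drifted enough since the
    last transmission (illustrative rule, not the paper's condition)."""
    drift = np.linalg.norm(new_update - last_sent)
    return drift > threshold * np.linalg.norm(last_sent)

last_sent = np.ones(4)
print(should_transmit(1.001 * np.ones(4), last_sent))  # False: negligible drift
print(should_transmit(1.1 * np.ones(4), last_sent))    # True: worth uploading
```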
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to address these challenges.
This framework splits the learning model into a global part, pruned and shared with all devices to learn data representations, and a personalized part that is fine-tuned for a specific device; a sketch of this split follows the entry.
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
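A minimal sketch of such a split, assuming the early layers form the shared global part (the piece that would be pruned and aggregated) while the output head stays on-device as the personalized part; the model, layer sizes, and split point n_global_layers are illustrative, written with PyTorch.
```python
import torch.nn as nn

# Illustrative model: early layers learn shared representations (global part,
# pruned and aggregated at the server); the head stays local (personalized).
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),   # global part
    nn.Linear(256, 128), nn.ReLU(),   # global part
    nn.Linear(128, 10),               # personalized part, fine-tuned per device
)

def split_params(model, n_global_layers=4):
    """Partition parameters so only the global part is exchanged with the server."""
    layers = list(model)
    global_part = nn.Sequential(*layers[:n_global_layers])
    personal_part = nn.Sequential(*layers[n_global_layers:])
    return global_part.state_dict(), personal_part.state_dict()

global_sd, personal_sd = split_params(model)
print(len(global_sd), "tensors shared globally;", len(personal_sd), "kept on-device")
```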
- Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for hierarchical FL (HFL) in wireless networks to reduce the neural network scale.
We show that our proposed HFL with model pruning achieves learning accuracy similar to HFL without pruning while reducing the communication cost by about 50 percent; a generic pruning sketch follows the entry.
arXiv Detail & Related papers (2023-05-15T22:04:49Z)
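The entry does not state the pruning criterion; magnitude pruning is the most common choice, so the sketch below uses it as a generic stand-in: zero the smallest-magnitude weights so only the surviving values and their index metadata need to be exchanged. The sparsity level is an illustrative assumption.
```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero the smallest-magnitude entries; only surviving values (plus their
    indices) need to be communicated (generic criterion, possibly not the paper's)."""
    w = weights.ravel().copy()
    k = int(sparsity * w.size)                 # number of entries to drop
    drop = np.argpartition(np.abs(w), k)[:k]
    w[drop] = 0.0
    return w.reshape(weights.shape)

w = np.random.default_rng(1).normal(size=(8, 8))
pruned = magnitude_prune(w)
print(f"nonzero fraction: {np.count_nonzero(pruned) / pruned.size:.2f}")  # 0.50
```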
- FLCC: Efficient Distributed Federated Learning on IoMT over CSMA/CA [0.0]
Federated Learning (FL) has emerged as a promising approach for privacy preservation.
This article investigates the performance of FL in an application intended to improve a remote healthcare system over ad hoc networks.
We present two metrics to evaluate the network performance: 1) the probability of successful transmission while minimizing interference, and 2) the performance of the distributed FL model in terms of accuracy and loss; a toy model of the first metric follows the entry.
arXiv Detail & Related papers (2023-03-29T16:36:42Z)
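The paper's CSMA/CA analysis is not reproduced here; to make the first metric concrete, the function below uses the textbook slotted random-access approximation in which a slot succeeds when exactly one contending node transmits.
```python
def success_probability(n_nodes, tau):
    """Chance that exactly one of n contending nodes transmits in a slot
    (textbook slotted random-access model, not the paper's CSMA/CA analysis)."""
    return n_nodes * tau * (1 - tau) ** (n_nodes - 1)

# Transmitting with probability tau = 1/n maximizes success; it tends to 1/e.
for n in (5, 20, 50):
    print(n, round(success_probability(n, 1 / n), 3))
```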
- Federated Learning with Flexible Control [30.65854375019346]
Federated learning (FL) enables distributed model training from local data collected by users.
In distributed systems with constrained resources and potentially high dynamics, e.g., mobile edge networks, the efficiency of FL is an important problem.
We propose FlexFL, an FL algorithm with multiple options that can be adjusted flexibly.
arXiv Detail & Related papers (2022-12-16T14:21:29Z)
- Delay Minimization for Federated Learning Over Wireless Communication Networks [172.42768672943365]
The problem of delay minimization for federated learning (FL) over wireless communication networks is investigated.
A bisection search algorithm is proposed to obtain the optimal solution; a generic bisection sketch follows the entry.
Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
arXiv Detail & Related papers (2020-07-05T19:00:07Z)
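Bisection applies here because feasibility is monotone in the delay budget: if a delay is achievable, any larger delay is too. The sketch below shows only that generic pattern; the actual objective, constraints, and feasibility check are defined in the paper, and feasible is an illustrative placeholder.
```python
def bisection_min(feasible, lo, hi, tol=1e-6):
    """Smallest t in [lo, hi] with feasible(t) True, assuming feasibility is
    monotone in t (generic pattern; the paper defines the actual check)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if feasible(mid):
            hi = mid        # mid is achievable: try a tighter delay budget
        else:
            lo = mid        # mid is too tight: relax the budget
    return hi

# Toy check: the minimum t with t**2 >= 2 on [0, 10] is sqrt(2) ~ 1.41421.
print(bisection_min(lambda t: t * t >= 2.0, 0.0, 10.0))
```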