Quantized Rank Reduction: A Communications-Efficient Federated Learning Scheme for Network-Critical Applications
- URL: http://arxiv.org/abs/2507.11183v1
- Date: Tue, 15 Jul 2025 10:37:59 GMT
- Title: Quantized Rank Reduction: A Communications-Efficient Federated Learning Scheme for Network-Critical Applications
- Authors: Dimitrios Kritsiolis, Constantine Kotropoulos
- Abstract summary: Federated learning is a machine learning approach that enables multiple devices (i.e., agents) to train a shared model cooperatively without exchanging raw data. This technique keeps data localized on user devices, ensuring privacy and security, while each agent trains the model on its own data and shares only model updates. The communication overhead is a significant challenge due to the frequent exchange of model updates between the agents and the central server. We propose a communication-efficient federated learning scheme that utilizes low-rank approximation of neural network gradients and quantization to significantly reduce the network load of the decentralized learning process with minimal impact on the model's accuracy.
- Score: 1.8416014644193066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is a machine learning approach that enables multiple devices (i.e., agents) to train a shared model cooperatively without exchanging raw data. This technique keeps data localized on user devices, ensuring privacy and security, while each agent trains the model on their own data and only shares model updates. The communication overhead is a significant challenge due to the frequent exchange of model updates between the agents and the central server. In this paper, we propose a communication-efficient federated learning scheme that utilizes low-rank approximation of neural network gradients and quantization to significantly reduce the network load of the decentralized learning process with minimal impact on the model's accuracy.
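To make the idea concrete, here is a minimal NumPy sketch of the general approach described in the abstract: a layer's gradient matrix is replaced by a truncated SVD, and the low-rank factors are uniformly quantized to 8-bit integers before transmission. The rank, bit width, and function names below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: low-rank approximation of a gradient matrix followed by
# uniform 8-bit quantization of the factors. Rank, bit width, and names are
# illustrative assumptions, not the paper's exact scheme.
import numpy as np

def compress_gradient(grad: np.ndarray, rank: int = 8, bits: int = 8):
    """Return quantized low-rank factors of a 2-D gradient matrix."""
    U, S, Vt = np.linalg.svd(grad, full_matrices=False)
    # Keep only the top-`rank` singular triplets: grad ~ A @ B.
    A = U[:, :rank] * S[:rank]          # shape (m, rank)
    B = Vt[:rank, :]                    # shape (rank, n)

    def quantize(x):
        max_abs = float(np.abs(x).max())
        scale = max_abs / (2 ** (bits - 1) - 1) if max_abs > 0 else 1.0
        q = np.round(x / scale).astype(np.int8)
        return q, scale

    (qA, sA), (qB, sB) = quantize(A), quantize(B)
    return qA, sA, qB, sB

def decompress_gradient(qA, sA, qB, sB):
    """Reconstruct the approximate gradient on the server."""
    return (qA.astype(np.float32) * sA) @ (qB.astype(np.float32) * sB)

# Example: a 256x128 gradient is sent as two small int8 factors plus two scales.
g = np.random.randn(256, 128).astype(np.float32)
payload = compress_gradient(g, rank=8)
g_hat = decompress_gradient(*payload)
print("relative error:", np.linalg.norm(g - g_hat) / np.linalg.norm(g))
```

For the 256x128 example with rank 8, the payload shrinks from 32,768 float32 values to 3,072 int8 values plus two scales, roughly a 40x reduction in transmitted bytes, at the cost of the approximation error printed above.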
Related papers
- Optimizing Model Splitting and Device Task Assignment for Deceptive Signal Assisted Private Multi-hop Split Learning [58.620753467152376]
In our model, several edge devices jointly perform collaborative training, and some eavesdroppers aim to collect the model and data information from devices. To prevent the eavesdroppers from collecting model and data information, a subset of devices can transmit deceptive signals. We propose a soft actor-critic deep reinforcement learning framework with intrinsic curiosity module and cross-attention.
arXiv Detail & Related papers (2025-07-09T22:53:23Z)
- An Adaptive Clustering Scheme for Client Selections in Communication-Efficient Federated Learning [3.683202928838613]
Federated learning is a novel decentralized learning architecture. We propose to dynamically adjust the number of clusters to find the best grouping of clients. This can reduce the number of users participating in training, lowering communication costs without affecting model performance.
arXiv Detail & Related papers (2025-04-11T08:43:12Z)
- Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
arXiv Detail & Related papers (2024-02-28T03:27:10Z)
- Communication-Efficient Federated Learning through Adaptive Weight Clustering and Server-Side Distillation [10.541541376305245]
Federated Learning (FL) is a promising technique for the collaborative training of deep neural networks across multiple devices.
FL is hindered by excessive communication costs due to repeated server-client communication during training.
We propose FedCompress, a novel approach that combines dynamic weight clustering and server-side knowledge distillation.
arXiv Detail & Related papers (2024-01-25T14:49:15Z)
- Learning-based adaption of robotic friction models [50.72489248401199]
We introduce a novel approach to adapt an existing friction model to new dynamics using as little data as possible. Our method does not rely on data with external load during training, eliminating the need for external torque sensors.
arXiv Detail & Related papers (2023-10-25T14:50:15Z)
- FedDCT: A Dynamic Cross-Tier Federated Learning Framework in Wireless Networks [5.914766366715661]
Federated Learning (FL) trains a global model across devices without exposing local data.
Resource heterogeneity and inevitable stragglers in wireless networks severely impact the efficiency and accuracy of FL training.
We propose a novel Dynamic Cross-Tier Federated Learning framework (FedDCT).
arXiv Detail & Related papers (2023-07-10T08:54:07Z)
- Federated Pruning: Improving Neural Network Efficiency with Federated Learning [24.36174705715827]
We propose Federated Pruning to train a reduced model under the federated setting.
We explore different pruning schemes and provide empirical evidence of the effectiveness of our methods.
arXiv Detail & Related papers (2022-09-14T00:48:37Z)
- Federated Two-stage Learning with Sign-based Voting [45.2715985913761]
Federated learning is a distributed machine learning mechanism where local devices collaboratively train a shared global model.
Larger and deeper recent machine learning models are also more difficult to deploy in a federated environment.
In this paper, we design a two-stage learning framework that augments prototypical federated learning with a cut layer on devices.
arXiv Detail & Related papers (2021-12-10T17:31:23Z)
- FedKD: Communication Efficient Federated Learning via Knowledge Distillation [56.886414139084216]
Federated learning is widely used to learn intelligent models from decentralized data.
In federated learning, clients need to communicate their local model updates in each iteration of model learning.
We propose a communication efficient federated learning method based on knowledge distillation.
arXiv Detail & Related papers (2021-08-30T15:39:54Z)
- Adaptive Quantization of Model Updates for Communication-Efficient Federated Learning [75.45968495410047]
Communication of model updates between client nodes and the central aggregating server is a major bottleneck in federated learning.
Gradient quantization is an effective way of reducing the number of bits required to communicate each model update.
We propose an adaptive quantization strategy called AdaFL that aims to achieve communication efficiency as well as a low error floor.
arXiv Detail & Related papers (2021-02-08T19:14:21Z)
- CosSGD: Nonlinear Quantization for Communication-efficient Federated Learning [62.65937719264881]
Federated learning facilitates learning across clients without transferring local data on these clients to a central server.
We propose a nonlinear quantization for compressed gradient descent, which can be easily utilized in federated learning.
Our system significantly reduces the communication cost by up to three orders of magnitude, while maintaining convergence and accuracy of the training process.
arXiv Detail & Related papers (2020-12-15T12:20:28Z)
- Ternary Compression for Communication-Efficient Federated Learning [17.97683428517896]
Federated learning provides a potential solution to privacy-preserving and secure machine learning.
We propose a ternary federated averaging protocol (T-FedAvg) to reduce the upstream and downstream communication of federated learning systems.
Our results show that the proposed T-FedAvg is effective in reducing communication costs and can even achieve slightly better performance on non-IID data.
arXiv Detail & Related papers (2020-03-07T11:55:34Z)
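As an illustration of the ternary-compression idea summarized in the last entry above, the following sketch quantizes each client update to signed values in {-1, 0, +1} with one scale per tensor. The threshold rule and all names are assumptions for illustration, not the T-FedAvg protocol itself.

```python
# Minimal sketch of ternary update compression in the spirit of T-FedAvg:
# each client maps its update to {-1, 0, +1} times a per-tensor scale.
# The threshold rule and function names are illustrative assumptions.
import numpy as np

def ternarize(update: np.ndarray, sparsity_threshold: float = 0.7):
    """Quantize an update tensor to signs {-1, 0, +1} plus a scalar scale."""
    thresh = sparsity_threshold * np.abs(update).mean()
    mask = np.abs(update) > thresh
    signs = (np.sign(update) * mask).astype(np.int8)   # {-1, 0, +1}
    # Scale chosen so reconstruction preserves the mean magnitude
    # of the surviving entries.
    scale = float(np.abs(update[mask]).mean()) if mask.any() else 0.0
    return signs, scale

def reconstruct(signs: np.ndarray, scale: float) -> np.ndarray:
    return signs.astype(np.float32) * scale

# Server-side averaging of reconstructed ternary updates from several clients.
updates = [np.random.randn(1000).astype(np.float32) for _ in range(4)]
averaged = np.mean([reconstruct(*ternarize(u)) for u in updates], axis=0)
```

Each entry then costs about two bits when the ternary values are bit-packed (one int8 in this naive encoding) instead of 32 bits, which is where the upstream communication savings come from.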
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.