FedSparQ: Adaptive Sparse Quantization with Error Feedback for Robust & Efficient Federated Learning
- URL: http://arxiv.org/abs/2511.05591v1
- Date: Wed, 05 Nov 2025 12:38:08 GMT
- Title: FedSparQ: Adaptive Sparse Quantization with Error Feedback for Robust & Efficient Federated Learning
- Authors: Chaimaa Medjadji, Sadi Alawadi, Feras M. Awaysheh, Guilain Leduc, Sylvain Kubler, Yves Le Traon
- Abstract summary: Federated Learning (FL) enables collaborative model training across decentralized clients. FL suffers from significant communication overhead due to the frequent exchange of high-dimensional model updates over constrained networks. We present FedSparQ, a lightweight compression framework that dynamically sparsifies the gradient of each client.
- Score: 7.461859467262201
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) enables collaborative model training across decentralized clients while preserving data privacy by keeping raw data local. However, FL suffers from significant communication overhead due to the frequent exchange of high-dimensional model updates over constrained networks. In this paper, we present FedSparQ, a lightweight compression framework that dynamically sparsifies each client's gradient through an adaptive threshold, applies half-precision quantization to the retained entries, and integrates error-feedback residuals to prevent loss of information. FedSparQ requires no manual tuning of sparsity rates or quantization schedules, adapts seamlessly to both homogeneous and heterogeneous data distributions, and is agnostic to model architecture. Through extensive empirical evaluation on vision benchmarks under independent and identically distributed (IID) and non-IID data, we show that FedSparQ substantially reduces communication overhead (90% fewer bytes sent than FedAvg) while preserving or improving model accuracy (up to 6% over uncompressed FedAvg and state-of-the-art compression models) and enhancing convergence robustness (by 50% compared to the other baselines). Our approach provides a practical, easy-to-deploy solution for bandwidth-constrained federated deployments and lays the groundwork for future extensions in adaptive precision and privacy-preserving protocols.
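The client-side pipeline the abstract describes (adaptive-threshold sparsification, half-precision quantization of the survivors, error-feedback residuals) can be sketched in a few lines. The mean-plus-α·std threshold rule, the function name, and the index encoding below are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def fedsparq_compress(grad: np.ndarray, residual: np.ndarray, alpha: float = 1.0):
    """One client-side compression step in the spirit of FedSparQ (sketch)."""
    corrected = grad + residual                    # error feedback: re-inject past residual
    mag = np.abs(corrected)
    tau = mag.mean() + alpha * mag.std()           # assumed adaptive threshold rule
    mask = mag >= tau
    idx = np.flatnonzero(mask).astype(np.uint32)   # positions of retained entries
    vals = corrected[mask].astype(np.float16)      # half-precision quantization
    new_residual = np.where(mask, 0.0, corrected)  # dropped mass carries into the next round
    return idx, vals, new_residual

# Server-side round trip (illustrative):
#   dense = np.zeros(grad.size, dtype=np.float32); dense[idx] = vals.astype(np.float32)
```

Only `(idx, vals)` would travel over the network; the residual stays on the client, which is what keeps the scheme lossless in expectation over rounds.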
Related papers
- FedZMG: Efficient Client-Side Optimization in Federated Learning [0.19116784879310023]
Federated Zero Mean Gradients (FedZMG) is a parameter-free, client-side optimization algorithm designed to tackle client drift. FedZMG projects local gradients onto a zero-mean hyperplane, effectively neutralizing the "intensity" or "bias" shifts inherent in heterogeneous data distributions (see the sketch below).
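Read literally, projecting onto the zero-mean hyperplane {g : Σᵢ gᵢ = 0} is just subtracting the coordinate mean; a minimal sketch (the actual FedZMG update may involve more than this single step):

```python
import numpy as np

def zero_mean_project(grad: np.ndarray) -> np.ndarray:
    # Orthogonal projection onto {g : g.sum() == 0}: subtract the mean
    # from every coordinate, removing the uniform "bias" component.
    return grad - grad.mean()
```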
arXiv Detail & Related papers (2026-02-20T17:45:28Z) - Fractional-Order Federated Learning [4.1751058176413105]
Federated learning (FL) allows remote clients to train a global model collaboratively while protecting client privacy. Despite its privacy-preserving benefits, FL has significant drawbacks, including slow convergence, high communication cost, and non-independent-and-identically-distributed (non-IID) data.
arXiv Detail & Related papers (2026-02-17T06:25:23Z) - ERIS: Enhancing Privacy and Communication Efficiency in Serverless Federated Learning [6.486831630436399]
ERIS is a serverless FL framework that balances privacy and accuracy while eliminating the server bottleneck and distributing the communication load. We theoretically prove that (i) ERIS converges at the same rate as FedAvg under standard assumptions, and (ii) its mutual information leakage is bounded inversely with the number of aggregators.
arXiv Detail & Related papers (2026-02-09T13:05:41Z) - Adaptive Dual-Weighting Framework for Federated Learning via Out-of-Distribution Detection [53.45696787935487]
Federated Learning (FL) enables collaborative model training across large-scale distributed service nodes. In real-world service-oriented deployments, data generated by heterogeneous users, devices, and application scenarios are inherently non-IID. We propose FLood, a novel FL framework inspired by out-of-distribution (OOD) detection.
arXiv Detail & Related papers (2026-02-01T05:54:59Z) - FedKLPR: Personalized Federated Learning for Person Re-Identification with Adaptive Pruning [6.3531448415573655]
Person re-identification (Re-ID) is a fundamental task in intelligent surveillance and public safety. Applying Federated Learning (FL) to real-world Re-ID systems faces two major challenges. We propose FedKLPR, a lightweight and communication-efficient framework for person re-identification.
arXiv Detail & Related papers (2025-08-24T16:11:41Z) - Mitigating Catastrophic Forgetting with Adaptive Transformer Block Expansion in Federated Fine-Tuning [25.121545962121907]
Federated fine-tuning (FedFT) of large language models (LLMs) has emerged as a promising solution for adapting models to distributed data environments. We propose FedBE, a novel FedFT framework that integrates an adaptive transformer block expansion mechanism with a dynamic trainable-block allocation strategy. We show that FedBE achieves 12-74% higher accuracy retention on general tasks after fine-tuning and a model convergence acceleration ratio of 1.9-3.1x without degrading the accuracy of downstream tasks.
arXiv Detail & Related papers (2025-06-06T10:59:11Z) - Towards Resource-Efficient Federated Learning in Industrial IoT for Multivariate Time Series Analysis [50.18156030818883]
Anomalies and missing data constitute a thorny problem in industrial applications.
Deep-learning-enabled anomaly detection has emerged as a critical direction.
The data collected on edge devices contain private user information.
arXiv Detail & Related papers (2024-11-06T15:38:31Z) - Fed-CVLC: Compressing Federated Learning Communications with Variable-Length Codes [54.18186259484828]
In Federated Learning (FL) paradigm, a parameter server (PS) concurrently communicates with distributed participating clients for model collection, update aggregation, and model distribution over multiple rounds.
We show strong evidence that variable-length coding is beneficial for compression in FL.
We present Fed-CVLC (Federated Learning Compression with Variable-Length Codes), which fine-tunes the code length in response to the dynamics of model updates.
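As a rough illustration of per-round variable-length coding: pick a bit-width from the dynamics of the updates, then quantize to that width. The bit-selection rule and helper names below are guesses for illustration, not Fed-CVLC's actual policy:

```python
import numpy as np

def pick_bits(update: np.ndarray, prev_update: np.ndarray, lo: int = 2, hi: int = 10) -> int:
    # Hypothetical rule: spend more bits when updates are changing fast.
    rel_change = np.linalg.norm(update - prev_update) / (np.linalg.norm(prev_update) + 1e-12)
    return int(round(lo + min(rel_change, 1.0) * (hi - lo)))

def uniform_quantize(update: np.ndarray, bits: int):
    # Uniform scalar quantization to the chosen per-round bit-width.
    lo_v, hi_v = float(update.min()), float(update.max())
    levels = (1 << bits) - 1
    q = np.round((update - lo_v) / (hi_v - lo_v + 1e-12) * levels).astype(np.uint16)
    return q, (lo_v, hi_v, bits)   # range metadata needed for dequantization
```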
arXiv Detail & Related papers (2024-02-06T07:25:21Z) - FedBIAD: Communication-Efficient and Accuracy-Guaranteed Federated Learning with Bayesian Inference-Based Adaptive Dropout [14.72932631655587]
Federated Learning (FL) emerges as a distributed machine learning paradigm without end-user data transmission.
FedBIAD provides 2x uplink reduction with an accuracy increase of up to 2.41% even on non-Independent and Identically Distributed (non-IID) data.
arXiv Detail & Related papers (2023-07-14T05:51:04Z) - Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
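A minimal, non-federated illustration of the core ingredient (fitting a GMM to one client's inputs and using its density for novel-sample detection), using scikit-learn; FedGMM's federated EM and personalization machinery are not reproduced here:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))          # stand-in for one client's feature vectors

gmm = GaussianMixture(n_components=3, covariance_type="diag", random_state=0).fit(X)
resp = gmm.predict_proba(X)             # soft assignments to mixture components
log_density = gmm.score_samples(X)      # low log-density flags potential novel samples
```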
arXiv Detail & Related papers (2023-05-01T20:04:46Z) - Compressed Regression over Adaptive Networks [58.79251288443156]
We derive the performance achievable by a network of distributed agents that solve, adaptively and in the presence of communication constraints, a regression problem.
We devise an optimized allocation strategy where the parameters necessary for the optimization can be learned online by the agents.
arXiv Detail & Related papers (2023-04-07T13:41:08Z) - Adaptive Federated Learning via New Entropy Approach [14.595709494370372]
Federated Learning (FL) has emerged as a prominent distributed machine learning framework.
In this paper, we propose an adaptive FEDerated learning algorithm based on ENTropy theory (FedEnt) to alleviate the parameter deviation among heterogeneous clients.
arXiv Detail & Related papers (2023-03-27T07:57:04Z) - FedSkip: Combatting Statistical Heterogeneity with Federated Skip Aggregation [95.85026305874824]
We introduce a data-driven approach called FedSkip to improve the client optima by periodically skipping federated averaging and scattering local models across devices.
We conduct extensive experiments on a range of datasets to demonstrate that FedSkip achieves much higher accuracy, better aggregation efficiency, and competitive communication efficiency.
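The summary suggests a simple server loop: average on most rounds, and periodically skip the average and scatter local models across clients instead. A sketch under assumed details (schedule, uniform weighting, models as flat parameter lists):

```python
import random

def fedskip_round(client_weights, round_idx, skip_every=3, rng=random.Random(0)):
    # client_weights: list of per-client parameter vectors (lists of floats).
    if round_idx % skip_every == skip_every - 1:
        # Skip round: permute local models across clients instead of averaging.
        scattered = client_weights[:]
        rng.shuffle(scattered)
        return scattered
    # Regular round: broadcast the elementwise FedAvg mean back to every client.
    n = len(client_weights)
    avg = [sum(ws) / n for ws in zip(*client_weights)]
    return [avg[:] for _ in client_weights]
```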
arXiv Detail & Related papers (2022-12-14T13:57:01Z) - Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)