OLALa: Online Learned Adaptive Lattice Codes for Heterogeneous Federated Learning
- URL: http://arxiv.org/abs/2506.20297v1
- Date: Wed, 25 Jun 2025 10:18:34 GMT
- Title: OLALa: Online Learned Adaptive Lattice Codes for Heterogeneous Federated Learning
- Authors: Natalie Lang, Maya Simhi, Nir Shlezinger
- Abstract summary: Federated learning (FL) enables collaborative training across distributed clients without sharing raw data.
We propose Online Learned Adaptive Lattices (OLALa), a heterogeneous FL framework where each client can adjust its quantizer online.
OLALa consistently improves learning performance under various quantization rates, outperforming conventional fixed-codebook and non-adaptive schemes.
- Score: 24.595304301100047
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) enables collaborative training across distributed clients without sharing raw data, often at the cost of substantial communication overhead induced by transmitting high-dimensional model updates. This overhead can be alleviated by having the clients quantize their model updates, with dithered lattice quantizers identified as an attractive scheme due to their structural simplicity and convergence-preserving properties. However, existing lattice-based FL schemes typically rely on a fixed quantization rule, which is suboptimal in heterogeneous and dynamic environments where the distribution of model updates varies across users and training rounds. In this work, we propose Online Learned Adaptive Lattices (OLALa), a heterogeneous FL framework where each client can adjust its quantizer online using lightweight local computations. We first derive convergence guarantees for FL with non-fixed lattice quantizers and show that proper lattice adaptation can tighten the convergence bound. Then, we design an online learning algorithm that enables clients to tune their quantizers throughout the FL process while exchanging only a compact set of quantization parameters. Numerical experiments demonstrate that OLALa consistently improves learning performance under various quantization rates, outperforming conventional fixed-codebook and non-adaptive schemes.
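The following is a minimal sketch of the subtractive dithered lattice quantization the abstract builds on, restricted for illustration to a lattice with a diagonal generator so that nearest-point search reduces to per-coordinate rounding; the function name, the fixed cell size, and the shared-seed dither are assumptions of this sketch, not OLALa's learned adaptation rule.

```python
import numpy as np

def dithered_lattice_quantize(x, cell, rng):
    # Dither drawn uniformly over the fundamental cell of the lattice;
    # in practice the client and server generate it from a shared seed.
    u = rng.uniform(-0.5, 0.5, size=x.shape) * cell
    # Client side: map the dithered input to its nearest lattice point.
    q = cell * np.round((x + u) / cell)
    # Server side: subtract the shared dither to recover the estimate.
    return q - u

rng = np.random.default_rng(0)
update = rng.standard_normal(10)   # stand-in for a local model update
cell = 0.1                         # fixed cell size here; OLALa adapts the lattice online
est = dithered_lattice_quantize(update, cell, rng)
print(np.abs(est - update).max())  # per-coordinate error is at most cell / 2
```

With subtractive dithering, the end-to-end error is uniform over the cell and independent of the input, which is the convergence-preserving property the abstract mentions; OLALa's contribution is adapting the lattice generator online rather than keeping it fixed.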
Related papers
- An Adaptive ML Framework for Power Converter Monitoring via Federated Transfer Learning [0.0]
This study explores alternative framework configurations for adapting thermal machine learning (ML) models for power converters.
The framework starts with a base model that is incrementally adapted by multiple clients using three state-of-the-art domain adaptation techniques.
Validation with field data demonstrates that fine-tuning offers a straightforward TL approach with high accuracy, making it suitable for practical applications.
arXiv Detail & Related papers (2025-04-23T16:39:54Z) - Client-Centric Federated Adaptive Optimization [78.30827455292827]
Federated Learning (FL) is a distributed learning paradigm where clients collaboratively train a model while keeping their own data private.
We propose Federated Client-Centric Adaptive Optimization, a class of novel federated optimization approaches.
arXiv Detail & Related papers (2025-01-17T04:00:50Z) - Over-the-Air Fair Federated Learning via Multi-Objective Optimization [52.295563400314094]
We propose an over-the-air fair federated learning algorithm (OTA-FFL) to train fair FL models.
Experiments demonstrate the superiority of OTA-FFL in achieving fairness and robust performance.
arXiv Detail & Related papers (2025-01-06T21:16:51Z) - Tailored Federated Learning: Leveraging Direction Regulation & Knowledge Distillation [2.1670850691529275]
Federated learning has emerged as a transformative training paradigm in privacy-sensitive domains like healthcare.
We propose an FL optimization algorithm that integrates model delta regularization, personalized models, federated knowledge distillation, and mix-pooling.
arXiv Detail & Related papers (2024-09-29T15:39:39Z) - Efficient Model Compression for Hierarchical Federated Learning [10.37403547348343]
Federated learning (FL) has garnered significant attention due to its capacity to preserve privacy within distributed learning systems.
This paper introduces a novel hierarchical FL framework that integrates the benefits of clustered FL and model compression.
arXiv Detail & Related papers (2024-05-27T12:17:47Z) - Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates [71.81037644563217]
Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.
As some of the devices may have limited computational resources and varying availability, FL latency is highly sensitive to stragglers.
We propose straggler-aware layer-wise federated learning (SALF) that leverages the optimization procedure of NNs via backpropagation to update the global model in a layer-wise fashion.
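As a rough illustration of the layer-wise idea (not the paper's exact update rule): since backpropagation computes gradients from the output layer backward, a straggler that runs out of time reports only a suffix of the layers, and the server can still average each layer over whichever clients reached it. All names and the fixed learning rate below are hypothetical.

```python
import numpy as np

def layerwise_aggregate(global_layers, client_grads, lr=0.1):
    # client_grads[c][i] is client c's gradient for layer i, or None if
    # that (straggling) client did not reach layer i before the deadline.
    new_layers = []
    for i, w in enumerate(global_layers):
        reached = [g[i] for g in client_grads if g[i] is not None]
        if reached:  # average over the clients that reported this layer
            w = w - lr * np.mean(reached, axis=0)
        new_layers.append(w)
    return new_layers
```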
arXiv Detail & Related papers (2024-03-27T09:14:36Z) - Fed-CVLC: Compressing Federated Learning Communications with Variable-Length Codes [54.18186259484828]
In the Federated Learning (FL) paradigm, a parameter server (PS) concurrently communicates with distributed participating clients for model collection, update aggregation, and model distribution over multiple rounds.
We show strong evidence that variable-length coding is beneficial for compression in FL.
We present Fed-CVLC (Federated Learning Compression with Variable-Length Codes), which fine-tunes the code length in response to the dynamics of model updates.
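A hypothetical heuristic in the spirit of that idea is sketched below: shrink the per-entry code length as model updates get smaller across rounds. Fed-CVLC tunes the code length by solving an actual optimization, so treat this only as the intuition.

```python
import numpy as np

def pick_code_length(update, min_bits=2, max_bits=8, ref_scale=1e-3):
    # Larger spread in the update -> more quantization levels -> longer code;
    # as training converges and updates shrink, the code length drops.
    spread = float(np.std(update)) + 1e-12
    bits = np.ceil(np.log2(spread / ref_scale))
    return int(np.clip(bits, min_bits, max_bits))
```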
arXiv Detail & Related papers (2024-02-06T07:25:21Z) - Decentralized Sporadic Federated Learning: A Unified Algorithmic Framework with Convergence Guarantees [18.24213566328972]
Decentralized federated learning (DFL) captures FL settings where both (i) model updates and (ii) model aggregations are carried out by the clients without a central server.
We propose $\texttt{DSpodFL}$, a DFL methodology built on a generalized notion of $\textit{sporadicity}$ in both local gradient and aggregation processes.
$\texttt{DSpodFL}$ consistently achieves improved training speeds compared with baselines under various system settings.
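The simplified round below illustrates the two sporadic processes (local gradient steps and inter-client aggregations, each firing with some probability); the probabilities, learning rate, and pairwise-averaging rule are placeholders rather than DSpodFL's actual formulation.

```python
import numpy as np

def sporadic_round(models, grads, links, rng, p_sgd=0.5, p_agg=0.5, lr=0.1):
    # Sporadic local SGD: each client steps only with probability p_sgd.
    for i in range(len(models)):
        if rng.random() < p_sgd:
            models[i] = models[i] - lr * grads[i]
    # Sporadic aggregation: each link (i, j) mixes only with probability p_agg.
    for i, j in links:
        if rng.random() < p_agg:
            avg = 0.5 * (models[i] + models[j])
            models[i], models[j] = avg, avg.copy()
    return models
```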
arXiv Detail & Related papers (2024-02-05T19:02:19Z) - FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
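A minimal sketch of the mechanism FedLALR builds on, per-client AMSGrad state: each client keeps its own moment estimates, so its coordinate-wise effective learning rate tracks its own (possibly non-IID) gradient history. Hyperparameters below are generic defaults, not the paper's.

```python
import numpy as np

class ClientAMSGrad:
    def __init__(self, dim, lr=1e-2, b1=0.9, b2=0.999, eps=1e-8):
        self.m = np.zeros(dim)     # first-moment estimate
        self.v = np.zeros(dim)     # second-moment estimate
        self.vhat = np.zeros(dim)  # running max of v (the AMSGrad correction)
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps

    def step(self, x, g):
        self.m = self.b1 * self.m + (1 - self.b1) * g
        self.v = self.b2 * self.v + (1 - self.b2) * g * g
        self.vhat = np.maximum(self.vhat, self.v)
        # The coordinate-wise step size adapts to this client's own history.
        return x - self.lr * self.m / (np.sqrt(self.vhat) + self.eps)
```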
arXiv Detail & Related papers (2023-09-18T12:35:05Z) - AQUILA: Communication Efficient Federated Learning with Adaptive Quantization in Device Selection Strategy [27.443439653087662]
This paper introduces AQUILA (adaptive quantization in device selection strategy), a novel adaptive framework devised to handle these issues.
AQUILA integrates a sophisticated device selection method that prioritizes the quality and usefulness of device updates.
Our experiments demonstrate that AQUILA significantly decreases communication costs compared to existing methods.
arXiv Detail & Related papers (2023-08-01T03:41:47Z) - Adaptive Quantization of Model Updates for Communication-Efficient Federated Learning [75.45968495410047]
Communication of model updates between client nodes and the central aggregating server is a major bottleneck in federated learning.
Gradient quantization is an effective way of reducing the number of bits required to communicate each model update.
We propose an adaptive quantization strategy called AdaFL that aims to achieve communication efficiency as well as a low error floor.
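One common way to realize such a strategy, sketched here under the assumption of unbiased stochastic uniform quantization with a bit-width that grows over rounds: coarse early rounds keep communication cheap, while finer late rounds lower the error floor. The linear ramp is illustrative, not AdaFL's actual schedule.

```python
import numpy as np

def stochastic_quantize(x, bits, rng):
    # Unbiased stochastic uniform quantization to 2**bits - 1 magnitude levels.
    levels = 2 ** bits - 1
    scale = float(np.max(np.abs(x))) + 1e-12
    y = np.abs(x) / scale * levels
    low = np.floor(y)
    q = low + (rng.random(x.shape) < (y - low))  # randomized rounding
    return np.sign(x) * q / levels * scale

def bits_for_round(t, total_rounds, low_bits=2, high_bits=8):
    # Coarse quantization early, fine quantization late (illustrative ramp).
    frac = t / max(total_rounds - 1, 1)
    return int(round(low_bits + frac * (high_bits - low_bits)))
```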
arXiv Detail & Related papers (2021-02-08T19:14:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.