TinyProto: Communication-Efficient Federated Learning with Sparse Prototypes in Resource-Constrained Environments
- URL: http://arxiv.org/abs/2507.04327v1
- Date: Sun, 06 Jul 2025 10:24:33 GMT
- Title: TinyProto: Communication-Efficient Federated Learning with Sparse Prototypes in Resource-Constrained Environments
- Authors: Gyuejeong Lee, Daeyoung Choi
- Abstract summary: Communication efficiency in federated learning (FL) remains a critical challenge for resource-constrained environments. We propose TinyProto, which addresses these limitations through Class-wise Prototype Sparsification (CPS) and adaptive prototype scaling. Our experiments show TinyProto reduces communication costs by up to 4x compared to existing methods while maintaining performance.
- Score: 0.8287206589886879
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Communication efficiency in federated learning (FL) remains a critical challenge for resource-constrained environments. While prototype-based FL reduces communication overhead by sharing class prototypes (mean activations in the penultimate layer) instead of model parameters, its efficiency decreases with larger feature dimensions and class counts. We propose TinyProto, which addresses these limitations through Class-wise Prototype Sparsification (CPS) and adaptive prototype scaling. CPS enables structured sparsity by allocating specific dimensions to class prototypes and transmitting only non-zero elements, while adaptive scaling adjusts prototypes based on class distributions. Our experiments show TinyProto reduces communication costs by up to 4x compared to existing methods while maintaining performance. Beyond its communication efficiency, TinyProto offers crucial advantages: achieving compression without client-side computational overhead and supporting heterogeneous architectures, making it ideal for resource-constrained heterogeneous FL.
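As a rough illustration of the CPS idea described in the abstract, the sketch below (Python/NumPy) allocates a disjoint block of prototype dimensions to each class, keeps only that block, and transmits the non-zero values plus their offset instead of the full vector. The block assignment, the `scale_by_class_count` helper, and all names are illustrative assumptions; the paper's exact sparsification and adaptive-scaling rules are not specified in the abstract.

```python
import numpy as np

def sparsify_prototype(prototype, class_id, num_classes):
    """Class-wise Prototype Sparsification, sketched: keep only the
    dimension block assumed to be assigned to this class."""
    d = prototype.shape[0]
    block = d // num_classes                      # dimensions reserved per class (assumed equal split)
    start = class_id * block
    payload = prototype[start:start + block]      # non-zero elements to transmit
    return start, payload                         # offset + values instead of the full d-dim vector

def densify_prototype(start, payload, dim):
    """Server-side reconstruction of the sparse prototype."""
    proto = np.zeros(dim)
    proto[start:start + len(payload)] = payload
    return proto

def scale_by_class_count(prototype, n_samples, n_total):
    """Hypothetical adaptive scaling: weight a client's class prototype
    by its share of that class's samples (an assumption, not the paper's rule)."""
    return prototype * (n_samples / max(n_total, 1))

# Toy usage: a 512-d prototype for class 3 out of 10 classes
proto = np.random.randn(512)
start, payload = sparsify_prototype(proto, class_id=3, num_classes=10)
print(len(payload), "values sent instead of", proto.shape[0])
restored = densify_prototype(start, payload, dim=512)
```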
Related papers
- Heterogeneous Federated Learning with Prototype Alignment and Upscaling [0.7373617024876724]
Prototype Normalization (ProtoNorm) is a novel PBFL framework that addresses suboptimal prototype separation. We show that our approach better separates prototypes and thus consistently outperforms existing HtFL approaches.
arXiv Detail & Related papers (2025-07-06T09:34:41Z) - Probabilistic Prototype Calibration of Vision-Language Models for Generalized Few-shot Semantic Segmentation [75.18058114915327]
Generalized Few-Shot Semantic Segmentation (GFSS) aims to extend a segmentation model to novel classes with only a few annotated examples. We propose FewCLIP, a probabilistic prototype calibration framework over multi-modal prototypes from the pretrained CLIP. We show FewCLIP significantly outperforms state-of-the-art approaches across both GFSS and class-incremental settings.
arXiv Detail & Related papers (2025-06-28T18:36:22Z) - Communication-Efficient Wireless Federated Fine-Tuning for Large-Scale AI Models [13.742950928229078]
Low-Rank Adaptation (LoRA) addresses these issues by training compact, low-rank matrices instead of fully fine-tuning large models. This paper introduces a wireless federated LoRA fine-tuning framework that optimizes both learning performance and communication efficiency (a minimal sketch of the underlying low-rank update appears after this list).
arXiv Detail & Related papers (2025-05-01T06:15:38Z) - Enhancing Visual Representation with Textual Semantics: Textual Semantics-Powered Prototypes for Heterogeneous Federated Learning [12.941603966989366]
Federated Prototype Learning (FedPL) has emerged as an effective strategy for handling data heterogeneity in Federated Learning (FL). We propose FedTSP, a novel method that leverages pretrained language models (PLMs) to construct semantically enriched prototypes from the textual modality. To address the modality gap between client image models and the PLM, we introduce trainable prompts, allowing prototypes to adapt better to client tasks.
arXiv Detail & Related papers (2025-03-16T04:35:06Z) - ProFe: Communication-Efficient Decentralized Federated Learning via Distillation and Prototypes [3.7340128675975173]
Decentralized Federated Learning (DFL) trains models in a collaborative and privacy-preserving manner. This paper introduces ProFe, a novel communication optimization algorithm for DFL that combines knowledge distillation, prototype learning, and quantization techniques.
arXiv Detail & Related papers (2024-12-15T14:49:29Z) - SHERL: Synthesizing High Accuracy and Efficient Memory for Resource-Limited Transfer Learning [63.93193829913252]
We propose an innovative METL strategy called SHERL for resource-limited scenarios.
In the early route, intermediate outputs are consolidated via an anti-redundancy operation.
In the late route, using a minimal number of late pre-trained layers alleviates the peak demand on memory overhead.
arXiv Detail & Related papers (2024-07-10T10:22:35Z) - SpaFL: Communication-Efficient Federated Learning with Sparse Models and Low Computational Overhead [75.87007729801304]
SpaFL, a communication-efficient FL framework, is proposed to optimize sparse model structures with low computational overhead. To keep the pruning process itself efficient, only thresholds are communicated between the server and clients instead of parameters. Global thresholds are used to update model parameters by extracting aggregated parameter importance.
arXiv Detail & Related papers (2024-06-01T13:10:35Z) - Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization [15.73877955614998]
This paper presents a novel communication algorithm, DeComFL, which reduces the communication cost from $\mathscr{O}(d)$ to $\mathscr{O}(1)$ by transmitting only a constant number of scalar values between clients and the server (a generic sketch of this seed-and-scalar exchange appears after this list). Empirical evaluations, encompassing both classic deep learning training and large language model fine-tuning, demonstrate significant reductions in communication overhead.
arXiv Detail & Related papers (2024-05-24T18:07:05Z) - Computation and Communication Efficient Lightweighting Vertical Federated Learning for Smart Building IoT [6.690593323665202]
IoT devices are evolving beyond basic data collection and control to actively participate in deep learning tasks. We propose a Lightweight Vertical Federated Learning (LVFL) framework that jointly optimizes computational and communication efficiency. Experimental results on an image classification task demonstrate that LVFL effectively mitigates resource demands while maintaining competitive learning performance.
arXiv Detail & Related papers (2024-03-30T20:19:28Z) - Boosting Inference Efficiency: Unleashing the Power of Parameter-Shared Pre-trained Language Models [109.06052781040916]
We introduce a technique to enhance the inference efficiency of parameter-shared language models.
We also propose a simple pre-training technique that leads to fully or partially shared models.
Results demonstrate the effectiveness of our methods on both autoregressive and autoencoding PLMs.
arXiv Detail & Related papers (2023-10-19T15:13:58Z) - Dual Prototypical Contrastive Learning for Few-shot Semantic Segmentation [55.339405417090084]
We propose a dual prototypical contrastive learning approach tailored to the few-shot semantic segmentation (FSS) task.
The main idea is to make the prototypes more discriminative by increasing inter-class distance while reducing intra-class distance in the prototype feature space.
We demonstrate that the proposed dual contrastive learning approach outperforms state-of-the-art FSS methods on PASCAL-5i and COCO-20i datasets.
arXiv Detail & Related papers (2021-11-09T08:14:50Z) - Prototypical Contrastive Learning of Unsupervised Representations [171.3046900127166]
Prototypical Contrastive Learning (PCL) is an unsupervised representation learning method.
PCL implicitly encodes semantic structures of the data into the learned embedding space.
PCL outperforms state-of-the-art instance-wise contrastive learning methods on multiple benchmarks.
arXiv Detail & Related papers (2020-05-11T09:53:36Z)
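For the wireless federated LoRA entry above, here is a minimal, generic sketch of the low-rank update such methods rely on: the pretrained weight W stays frozen, each client trains two small factors A and B, and only those factors (not W) need to be exchanged. Dimensions, initialization constants, and the scaling factor are illustrative assumptions, not taken from that paper.

```python
import numpy as np

class LoRALinear:
    """Minimal LoRA-style layer: y = x @ (W + (alpha/r) * A @ B),
    where W is frozen and only A, B are trained and communicated."""
    def __init__(self, d_in, d_out, rank=8, alpha=16):
        self.W = np.random.randn(d_in, d_out) * 0.02   # frozen pretrained weight
        self.A = np.random.randn(d_in, rank) * 0.01    # trainable low-rank factor, Gaussian init
        self.B = np.zeros((rank, d_out))               # trainable low-rank factor, zero init so the initial delta is zero
        self.scale = alpha / rank

    def forward(self, x):
        return x @ self.W + self.scale * (x @ self.A @ self.B)

    def trainable_size(self):
        return self.A.size + self.B.size               # what a client would upload per round

layer = LoRALinear(d_in=768, d_out=768, rank=8)
print(layer.trainable_size(), "trainable parameters vs", layer.W.size, "for full fine-tuning")
```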
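The dimension-free communication entry above hinges on zeroth-order optimization: the client and server share a random seed, the client estimates a directional derivative with two forward passes along the seed-generated direction, and only that scalar travels over the network, so the payload does not grow with the model dimension d. The sketch below is a generic single-client illustration of that seed-and-scalar exchange under assumed names and hyperparameters, not the DeComFL algorithm itself.

```python
import numpy as np

def client_scalar_update(params, loss_fn, seed, eps=1e-3):
    """Client side: estimate a directional derivative with two forward passes
    along a perturbation regenerated from a shared seed; return one scalar."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(params.shape)           # perturbation direction from the shared seed
    g = (loss_fn(params + eps * z) - loss_fn(params - eps * z)) / (2 * eps)
    return g                                        # a single float, independent of d

def server_apply(params, seed, g, lr=0.1):
    """Server side: regenerate the same direction from the seed and apply the step."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(params.shape)
    return params - lr * g * z

# Toy round on a quadratic objective
loss = lambda w: float(np.sum(w ** 2))
w = np.ones(10_000)                                 # d = 10,000 parameters
seed = 42
g = client_scalar_update(w, loss, seed)             # uplink: one scalar (plus the seed)
w = server_apply(w, seed, g)
print(loss(w))
```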