Caching Techniques for Reducing the Communication Cost of Federated Learning in IoT Environments
- URL: http://arxiv.org/abs/2507.17772v1
- Date: Sat, 19 Jul 2025 17:02:15 GMT
- Title: Caching Techniques for Reducing the Communication Cost of Federated Learning in IoT Environments
- Authors: Ahmad Alhonainy, Praveen Rao
- Abstract summary: Federated Learning (FL) allows multiple devices to jointly train a shared model without centralizing data. This paper introduces caching strategies - FIFO, LRU, and Priority-Based - to reduce unnecessary model update transmissions.
- Score: 2.942616054218564
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) allows multiple distributed devices to jointly train a shared model without centralizing data, but communication cost remains a major bottleneck, especially in resource-constrained environments. This paper introduces caching strategies - FIFO, LRU, and Priority-Based - to reduce unnecessary model update transmissions. By selectively forwarding significant updates, our approach lowers bandwidth usage while maintaining model accuracy. Experiments on CIFAR-10 and medical datasets show reduced communication with minimal accuracy loss. Results confirm that intelligent caching improves scalability and memory efficiency and supports reliable FL in edge IoT networks, making it practical for deployment in smart cities, healthcare, and other latency-sensitive applications.
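The abstract describes the mechanism only at a high level. Below is a minimal Python sketch of the selective-forwarding idea: an edge cache keeps each client's last forwarded update and suppresses retransmissions that changed too little, with FIFO, LRU, or priority-based eviction. The class name, the relative-L2 significance test, and the magnitude-based priority score are illustrative assumptions, not the paper's exact algorithm.

```python
# Minimal sketch of selective forwarding with a bounded update cache.
# Threshold test and priority score are assumptions for illustration.
from collections import OrderedDict

import numpy as np


class UpdateCache:
    def __init__(self, capacity=100, threshold=0.01, policy="lru"):
        self.capacity = capacity    # max number of cached client updates
        self.threshold = threshold  # minimum relative change worth forwarding
        self.policy = policy        # "fifo", "lru", or "priority"
        self.cache = OrderedDict()  # client_id -> (flat update, priority)

    def should_forward(self, client_id, update):
        """Return True only if `update` differs enough from the cached copy."""
        entry = self.cache.get(client_id)
        if entry is not None:
            cached, _ = entry
            if self.policy == "lru":
                self.cache.move_to_end(client_id)  # mark as recently used
            delta = np.linalg.norm(update - cached)
            if delta < self.threshold * (np.linalg.norm(cached) + 1e-12):
                return False  # insignificant change: suppress transmission
        self._store(client_id, update)
        return True

    def _store(self, client_id, update):
        if client_id not in self.cache and len(self.cache) >= self.capacity:
            self._evict()
        # Update magnitude serves as an (assumed) priority score.
        self.cache[client_id] = (update.copy(), float(np.linalg.norm(update)))
        if self.policy == "lru":
            self.cache.move_to_end(client_id)

    def _evict(self):
        if self.policy == "priority":
            victim = min(self.cache, key=lambda cid: self.cache[cid][1])
            del self.cache[victim]
        else:
            # FIFO evicts the oldest insertion; LRU evicts the least recently
            # used entry (recency is maintained via move_to_end above).
            self.cache.popitem(last=False)


cache = UpdateCache(capacity=2, threshold=0.05, policy="priority")
u = np.ones(10)
print(cache.should_forward("client-1", u))          # True: first sighting
print(cache.should_forward("client-1", u * 1.001))  # False: ~0.1% change
```

In this sketch, FIFO and LRU differ only in whether reads refresh an entry's position in the OrderedDict, while the priority policy evicts the smallest-magnitude cached update first.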
Related papers
- Lightweight Federated Learning over Wireless Edge Networks [83.4818741890634]
Federated learning (FL) is an attractive alternative at the network edge, but deploying it over wireless networks poses challenges for resource-constrained devices. We derive a closed-form expression for the FL convergence gap in terms of transmission power, model pruning error, and quantization error. LTFL outperforms state-of-the-art schemes in experiments on real-world datasets.
arXiv Detail & Related papers (2025-07-13T09:14:17Z) - Soft-Label Caching and Sharpening for Communication-Efficient Federated Distillation [2.1617267802631366]
Federated Learning (FL) enables collaborative model training across decentralized clients, enhancing privacy by keeping data local. We propose SCARLET, a novel framework integrating synchronized soft-label caching and an enhanced Entropy Reduction Aggregation (Enhanced ERA) mechanism. SCARLET minimizes redundant communication by reusing cached soft-labels, achieving up to 50% reduction in communication costs compared to existing methods.
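As a companion to the caching theme above, here is a minimal Python sketch of the cached soft-label idea: the server reuses each client's cached soft labels unless a fresh upload drifts enough, and sharpens the aggregate to reduce its entropy. The class names, drift test, and sharpening rule are illustrative assumptions, not SCARLET's exact mechanism.

```python
# Illustrative soft-label caching: uploads that barely differ from the cached
# copy are skipped, and the aggregated labels are sharpened. Names and
# thresholds are assumptions, not SCARLET's actual design.
import numpy as np


def sharpen(probs, temperature=0.5):
    """Entropy-reducing sharpening: exponentiate by 1/T and renormalize."""
    p = probs ** (1.0 / temperature)
    return p / p.sum(axis=-1, keepdims=True)


class SoftLabelServer:
    def __init__(self, drift_threshold=0.05):
        self.cached = {}  # client_id -> soft labels (n_samples x n_classes)
        self.drift_threshold = drift_threshold

    def receive(self, client_id, soft_labels):
        """Accept an upload only if it drifted enough from the cached copy."""
        old = self.cached.get(client_id)
        if old is not None and np.abs(soft_labels - old).mean() < self.drift_threshold:
            return False  # reuse the cache; the transmission was unnecessary
        self.cached[client_id] = soft_labels
        return True

    def aggregate(self, temperature=0.5):
        """Average the cached soft labels across clients, then sharpen."""
        mean = np.stack(list(self.cached.values())).mean(axis=0)
        return sharpen(mean, temperature)


server = SoftLabelServer()
server.receive("c1", np.array([[0.6, 0.4], [0.2, 0.8]]))
print(server.aggregate())  # sharpened average over cached clients
```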
arXiv Detail & Related papers (2025-04-28T09:04:30Z) - Adaptive Quantization Resolution and Power Control for Federated Learning over Cell-free Networks [41.23236059700041]
Federated learning (FL) is a distributed learning framework where users train a global model by exchanging local model updates with a server instead of raw datasets. Cell-free massive multiple-input multiple-output (CFmMIMO) is a promising solution for serving numerous users on the same time/frequency resource at similar rates. In this paper, we co-optimize the physical layer with the FL application to mitigate the straggler effect.
arXiv Detail & Related papers (2024-12-14T16:08:05Z) - Towards Resource-Efficient Federated Learning in Industrial IoT for Multivariate Time Series Analysis [50.18156030818883]
Anomalies and missing data constitute a thorny problem in industrial applications.
Deep-learning-enabled anomaly detection has emerged as a critical direction.
The data collected on edge devices contain privacy-sensitive user information.
arXiv Detail & Related papers (2024-11-06T15:38:31Z) - Hyperdimensional Computing Empowered Federated Foundation Model over Wireless Networks for Metaverse [56.384390765357004]
We propose an integrated federated split learning and hyperdimensional computing framework for emerging foundation models.
This novel approach reduces communication costs, computation load, and privacy risks, making it suitable for resource-constrained edge devices in the Metaverse.
arXiv Detail & Related papers (2024-08-26T17:03:14Z) - FLrce: Resource-Efficient Federated Learning with Early-Stopping Strategy [7.963276533979389]
Federated Learning (FL) has achieved great popularity in the Internet of Things (IoT).
We present FLrce, an efficient FL framework with a relationship-based client selection and early-stopping strategy.
Experiment results show that, compared with existing efficient FL frameworks, FLrce improves the computation and communication efficiency by at least 30% and 43% respectively.
arXiv Detail & Related papers (2023-10-15T10:13:44Z) - Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
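A minimal Python sketch of that split: only the pruned global part leaves the device, while the personalized part stays and is fine-tuned locally. The class names and the magnitude-based pruning rule are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative split: a pruned global part is communicated and aggregated,
# while a personalized part never leaves the device.
import numpy as np


def prune_by_magnitude(weights, keep_ratio=0.5):
    """Zero out the smallest-magnitude weights, keeping `keep_ratio` of them."""
    cutoff = np.quantile(np.abs(weights), 1.0 - keep_ratio)
    return np.where(np.abs(weights) >= cutoff, weights, 0.0)


class SplitModel:
    def __init__(self, global_dim=256, personal_dim=32, seed=0):
        rng = np.random.default_rng(seed)
        self.global_part = rng.standard_normal(global_dim) * 0.01      # shared
        self.personal_part = rng.standard_normal(personal_dim) * 0.01  # local only

    def upload(self, keep_ratio=0.5):
        """Only the pruned global part is transmitted to the server."""
        return prune_by_magnitude(self.global_part, keep_ratio)

    def apply_aggregate(self, aggregated_global):
        """The server aggregate overwrites the global part;
        the personalized part is untouched."""
        self.global_part = aggregated_global


model = SplitModel()
print(np.count_nonzero(model.upload(keep_ratio=0.5)))  # ~128 of 256 survive
```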
arXiv Detail & Related papers (2023-09-04T21:10:45Z) - Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for hierarchical federated learning (HFL) in wireless networks to reduce the neural network scale.
We show that our proposed HFL with model pruning achieves learning accuracy similar to HFL without pruning while reducing communication cost by about 50 percent.
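A back-of-envelope Python sketch of that saving, assuming structured pruning (whole neurons/filters removed) so the transmitted model shrinks in proportion to the kept fraction; the model size and precision are illustrative, not the paper's measurements:

```python
# If structured pruning removes half the parameters, the per-round upload
# roughly halves, consistent with the ~50% saving reported above.
n_params = 1_000_000   # assumed model size
bytes_per_param = 4    # float32
keep_ratio = 0.5       # fraction of parameters kept after pruning

dense = n_params * bytes_per_param
pruned = int(n_params * keep_ratio) * bytes_per_param
print(f"per-round upload: {dense / 1e6:.1f} MB -> {pruned / 1e6:.1f} MB")
```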
arXiv Detail & Related papers (2023-05-15T22:04:49Z) - FLARE: Detection and Mitigation of Concept Drift for Federated Learning based IoT Deployments [2.7776688429637466]
FLARE is a lightweight dual-scheduler FL framework that conditionally transfers training data and deploys models between edge and sensor endpoints.
We show that FLARE can significantly reduce the amount of data exchanged between edge and sensor nodes compared to fixed-interval scheduling methods.
It can successfully detect concept drift reactively with at least a 16x reduction in latency.
arXiv Detail & Related papers (2023-05-15T10:09:07Z) - Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL), as a paradigm of collaborative learning, has attracted increasing research attention.
It is of interest to investigate fast-responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.