Leveraging Foundation Models for Efficient Federated Learning in Resource-restricted Edge Networks
- URL: http://arxiv.org/abs/2409.09273v1
- Date: Sat, 14 Sep 2024 02:54:31 GMT
- Title: Leveraging Foundation Models for Efficient Federated Learning in Resource-restricted Edge Networks
- Authors: S. Kawa Atapour, S. Jamal SeyedMohammadi, S. Mohammad Sheikholeslami, Jamshid Abouei, Konstantinos N. Plataniotis, Arash Mohammadi
- Abstract summary: This paper proposes a novel framework, namely, Federated Distilling knowledge to Prompt (FedD2P)
This framework distills the aggregated knowledge of IoT devices to a prompt generator to efficiently adapt the frozen FM for downstream tasks.
Our experiments on diverse image classification datasets show that FedD2P outperforms the baselines in terms of model performance.
- Score: 17.571552686063335
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, pre-trained Foundation Models (FMs) have been combined with Federated Learning (FL) to improve training of downstream tasks while preserving privacy. However, deploying FMs over edge networks with resource-constrained Internet of Things (IoT) devices is under-explored. This paper proposes a novel framework, namely, Federated Distilling knowledge to Prompt (FedD2P), for leveraging the robust representation abilities of a vision-language FM without deploying it locally on edge devices. This framework distills the aggregated knowledge of IoT devices to a prompt generator to efficiently adapt the frozen FM for downstream tasks. To eliminate the dependency on a public dataset, our framework leverages per-class local knowledge from IoT devices and linguistic descriptions of classes to train the prompt generator. Our experiments on diverse image classification datasets (CIFAR, OxfordPets, SVHN, EuroSAT, and DTD) show that FedD2P outperforms the baselines in terms of model performance.
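The core FedD2P mechanism described in the abstract, aggregating per-class local knowledge on the server and distilling it into a lightweight prompt generator for a frozen FM, can be sketched numerically. This is a toy illustration under invented assumptions, not the authors' implementation: the per-class knowledge is mocked as mean logits, the frozen FM's class scoring is stood in for by a fixed random projection of class embeddings, and all shapes are hypothetical.

```python
import numpy as np

# Toy sketch of the FedD2P idea: aggregate per-class "local knowledge"
# from clients, then distill it into a small prompt generator that maps
# class embeddings to logits for a frozen FM. All shapes are hypothetical.
rng = np.random.default_rng(0)
num_clients, num_classes, dim = 5, 4, 8

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Step 1: each client reports per-class knowledge (e.g., mean logits per class).
client_knowledge = rng.normal(size=(num_clients, num_classes, num_classes))

# Step 2: the server aggregates knowledge across clients (plain average here),
# which removes the need for a shared public distillation dataset.
teacher_probs = softmax(client_knowledge.mean(axis=0))  # (num_classes, num_classes)

# Step 3: train a tiny linear "prompt generator" so that scores computed from
# class embeddings (standing in for linguistic class descriptions) match the
# aggregated teacher distribution, via cross-entropy gradient descent.
class_embeddings = rng.normal(size=(num_classes, dim))
W = np.zeros((dim, num_classes))
lr = 0.2
for _ in range(1000):
    student_probs = softmax(class_embeddings @ W)
    grad = class_embeddings.T @ (student_probs - teacher_probs) / num_classes
    W -= lr * grad

# Remaining teacher-student KL divergence after distillation (should be small)
student_probs = softmax(class_embeddings @ W)
kl = np.sum(teacher_probs * (np.log(teacher_probs) - np.log(student_probs)))
```

Only the small matrix `W` is trained and communicated here; the point of the construction is that the FM itself never leaves the server or enters a device.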
Related papers
- EdgeFM: Leveraging Foundation Model for Open-set Learning on the Edge [15.559604113977294]
We propose EdgeFM, a novel edge-cloud cooperative system with open-set recognition capability.
We show that EdgeFM can reduce end-to-end latency by up to 3.2x and achieve a 34.3% accuracy increase compared with the baseline.
arXiv Detail & Related papers (2023-11-18T06:40:39Z)
- The Role of Federated Learning in a Wireless World with Foundation Models [59.8129893837421]
Foundation models (FMs) are general-purpose artificial intelligence (AI) models that have recently enabled multiple brand-new generative AI applications.
Currently, the exploration of the interplay between FMs and federated learning (FL) is still in its nascent stage.
This article explores the extent to which FMs are suitable for FL over wireless networks, including a broad overview of research challenges and opportunities.
arXiv Detail & Related papers (2023-10-06T04:13:10Z)
- Toward efficient resource utilization at edge nodes in federated learning [0.6990493129893112]
Federated learning enables edge nodes to collaboratively contribute to constructing a global model without sharing their data.
Computational resource constraints and network communication can become a severe bottleneck for the larger model sizes typical of deep learning applications.
We propose and evaluate an FL strategy inspired by transfer learning in order to reduce resource utilization on devices.
arXiv Detail & Related papers (2023-09-19T07:04:50Z)
- Towards Cooperative Federated Learning over Heterogeneous Edge/Fog Networks [49.19502459827366]
Federated learning (FL) has been promoted as a popular technique for training machine learning (ML) models over edge/fog networks.
Traditional implementations of FL have largely neglected the potential for inter-network cooperation.
We advocate for cooperative federated learning (CFL), a cooperative edge/fog ML paradigm built on device-to-device (D2D) and device-to-server (D2S) interactions.
arXiv Detail & Related papers (2023-03-15T04:41:36Z)
- Multi-Edge Server-Assisted Dynamic Federated Learning with an Optimized Floating Aggregation Point [51.47520726446029]
Cooperative edge learning (CE-FL) is a distributed machine learning architecture.
We model the processes involved in CE-FL and analyze its training.
We show the effectiveness of our framework with the data collected from a real-world testbed.
arXiv Detail & Related papers (2022-03-26T00:41:57Z)
- Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better [88.28293442298015]
Federated learning (FL) enables distribution of machine learning workloads from the cloud to resource-limited edge devices.
We develop, implement, and experimentally validate a novel FL framework termed Federated Dynamic Sparse Training (FedDST).
FedDST is a dynamic process that extracts and trains sparse sub-networks from the target full network.
arXiv Detail & Related papers (2021-12-18T02:26:38Z)
- Distributed Learning in Wireless Networks: Recent Progress and Future Challenges [170.35951727508225]
Next-generation wireless networks will enable many machine learning (ML) tools and applications to analyze various types of data collected by edge devices.
Distributed learning and inference techniques have been proposed as a means to enable edge devices to collaboratively train ML models without raw data exchanges.
This paper provides a comprehensive study of how distributed learning can be efficiently and effectively deployed over wireless edge networks.
arXiv Detail & Related papers (2021-04-05T20:57:56Z)
- From Federated to Fog Learning: Distributed Machine Learning over Heterogeneous Wireless Networks [71.23327876898816]
Federated learning has emerged as a technique for training ML models at the network edge by leveraging processing capabilities across the nodes that collect the data.
We advocate a new learning paradigm called fog learning which will intelligently distribute ML model training across the continuum of nodes from edge devices to cloud servers.
arXiv Detail & Related papers (2020-06-07T05:11:18Z)
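The fog-learning entry above generalizes flat federated averaging to a hierarchy of aggregators between devices and the cloud. A minimal numerical sketch (topology, shapes, and sample counts all invented for illustration) of two-level, sample-count-weighted averaging:

```python
import numpy as np

# Hypothetical two-level (fog) aggregation: device updates are first averaged
# at their edge server, then edge-level models are averaged at the cloud,
# with every average weighted by the number of local samples.
rng = np.random.default_rng(1)
dim = 6

# Two edge servers, each serving devices that hold (model_vector, num_samples).
fog = [
    [(rng.normal(size=dim), 40), (rng.normal(size=dim), 60)],
    [(rng.normal(size=dim), 10), (rng.normal(size=dim), 30), (rng.normal(size=dim), 60)],
]

def weighted_avg(pairs):
    """Sample-count-weighted average of (model, count) pairs."""
    models = np.stack([m for m, _ in pairs])
    counts = np.array([n for _, n in pairs], dtype=float)
    return (counts / counts.sum()) @ models, int(counts.sum())

# Edge-level aggregation (device-to-server), then cloud-level aggregation.
edge_models = [weighted_avg(devices) for devices in fog]
global_model, total_samples = weighted_avg(edge_models)
```

Because every level weights by sample count, the hierarchical result equals the flat weighted average over all devices; that equivalence is what lets intermediate aggregators be interposed without changing the learned model.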
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.