EdgeFM: Leveraging Foundation Model for Open-set Learning on the Edge
- URL: http://arxiv.org/abs/2311.10986v3
- Date: Thu, 23 Nov 2023 04:44:00 GMT
- Title: EdgeFM: Leveraging Foundation Model for Open-set Learning on the Edge
- Authors: Bufang Yang, Lixing He, Neiwen Ling, Zhenyu Yan, Guoliang Xing, Xian
Shuai, Xiaozhe Ren, Xin Jiang
- Abstract summary: We propose EdgeFM, a novel edge-cloud cooperative system with open-set recognition capability.
We show that EdgeFM can reduce end-to-end latency by up to 3.2x and achieve a 34.3% accuracy increase over the baseline.
- Score: 15.559604113977294
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Learning (DL) models have been widely deployed on IoT devices with the
help of advances in DL algorithms and chips. However, the limited resources of
edge devices make it hard for these on-device DL models to generalize to diverse
environments and tasks. Although the recently emerged foundation models (FMs)
show impressive generalization power, how to effectively leverage the rich
knowledge of FMs on resource-limited edge devices remains largely unexplored.
In this paper, we propose EdgeFM, a novel edge-cloud cooperative system with
open-set recognition capability. EdgeFM selectively uploads unlabeled data to
query the FM on the cloud and customizes the knowledge and architectures for
the edge models. Meanwhile, EdgeFM performs dynamic model switching at run time,
taking into account both data uncertainty and dynamic network variations, which
keeps the accuracy consistently close to that of the original FM. We implement
EdgeFM using two FMs on two edge platforms and evaluate it on three public
datasets and two self-collected datasets. Results show that EdgeFM can reduce
end-to-end latency by up to 3.2x and achieve a 34.3% accuracy increase compared
with the baseline.
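The abstract's run-time switching policy can be made concrete with a short sketch. The following is a minimal illustration, not the authors' implementation: the uncertainty measure (softmax entropy), the thresholds, and the `edge_model` / `query_cloud_fm` / `probe_uplink_mbps` callables are hypothetical placeholders standing in for whatever EdgeFM actually uses.

```python
# Minimal sketch of uncertainty- and network-aware model switching in the
# spirit of EdgeFM. All names and thresholds below are illustrative
# assumptions, not the paper's actual code.
import numpy as np


def softmax_entropy(probs: np.ndarray) -> float:
    """Shannon entropy of a class-probability vector (uncertainty score)."""
    probs = np.clip(probs, 1e-12, 1.0)
    return float(-np.sum(probs * np.log(probs)))


def classify(sample,
             edge_model,           # small on-device model: sample -> class probabilities
             query_cloud_fm,       # RPC to the cloud foundation model (hypothetical)
             probe_uplink_mbps,    # callable returning the current uplink bandwidth
             uncertainty_thresh=1.0,
             min_bandwidth_mbps=2.0):
    """Answer locally when confident; otherwise query the cloud FM if the
    network currently supports uploading the unlabeled sample."""
    probs = edge_model(sample)
    if softmax_entropy(probs) < uncertainty_thresh:
        return int(np.argmax(probs)), "edge"       # confident: stay on-device
    if probe_uplink_mbps() >= min_bandwidth_mbps:
        return query_cloud_fm(sample), "cloud"     # uncertain + good link: ask the FM
    return int(np.argmax(probs)), "edge-degraded"  # uncertain but poor link: best effort
```

Under a policy of this shape, latency stays low because confident samples never leave the device, while accuracy tracks the FM on the hard, uncertain inputs, which matches the trade-off the abstract claims.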
Related papers
- Specialized Foundation Models Struggle to Beat Supervised Baselines [60.23386520331143]
We look at three modalities -- genomics, satellite imaging, and time series -- with multiple recent FMs and compare them to a standard supervised learning workflow.
We find that it is consistently possible to train simple supervised models that match or even outperform the latest foundation models.
arXiv Detail & Related papers (2024-11-05T04:10:59Z)
- Leveraging Foundation Models for Efficient Federated Learning in Resource-restricted Edge Networks [17.571552686063335]
This paper proposes a novel framework, Federated Distilling knowledge to Prompt (FedD2P).
This framework distills the aggregated knowledge of IoT devices into a prompt generator to efficiently adapt the frozen FM for downstream tasks (a minimal sketch of this frozen-FM-plus-prompt pattern appears after this list).
Our experiments on diverse image classification datasets show that FedD2P outperforms the baselines in terms of model performance.
arXiv Detail & Related papers (2024-09-14T02:54:31Z)
- Synergizing Foundation Models and Federated Learning: A Survey [23.416321895575507]
This paper discusses the potential and challenges of synergizing Federated Learning (FL) and Foundation Models (FMs).
FL is a collaborative learning paradigm that breaks the barrier of data availability from different participants.
It provides a promising solution to customize and adapt FMs to a wide range of domain-specific tasks using distributed datasets whilst preserving privacy.
arXiv Detail & Related papers (2024-06-18T17:58:09Z)
- FedPFT: Federated Proxy Fine-Tuning of Foundation Models [55.58899993272904]
Adapting Foundation Models (FMs) for downstream tasks through Federated Learning (FL) emerges as a promising strategy for protecting data privacy and valuable FMs.
Existing methods fine-tune the FM by allocating a sub-FM to each client in FL, leading to suboptimal performance due to insufficient tuning and inevitable gradient error accumulation.
We propose Federated Proxy Fine-Tuning (FedPFT), a novel method that enhances FM adaptation to downstream tasks through FL via two key modules.
arXiv Detail & Related papers (2024-04-17T16:30:06Z)
- Forging Vision Foundation Models for Autonomous Driving: Challenges, Methodologies, and Opportunities [59.02391344178202]
Vision foundation models (VFMs) serve as potent building blocks for a wide range of AI applications.
The scarcity of comprehensive training data, the need for multi-sensor integration, and the diverse task-specific architectures pose significant obstacles to the development of VFMs.
This paper delves into the critical challenge of forging VFMs tailored specifically for autonomous driving, while also outlining future directions.
arXiv Detail & Related papers (2024-01-16T01:57:24Z)
- The Role of Federated Learning in a Wireless World with Foundation Models [59.8129893837421]
Foundation models (FMs) are general-purpose artificial intelligence (AI) models that have recently enabled multiple brand-new generative AI applications.
Currently, the exploration of the interplay between FMs and federated learning (FL) is still in its nascent stage.
This article explores the extent to which FMs are suitable for FL over wireless networks, including a broad overview of research challenges and opportunities.
arXiv Detail & Related papers (2023-10-06T04:13:10Z)
- VideoGLUE: Video General Understanding Evaluation of Foundation Models [89.07145427268948]
We evaluate video understanding capabilities of foundation models (FMs) using a carefully designed experiment protocol.
We jointly profile FMs' efficacy and efficiency when adapting them to general video understanding tasks.
arXiv Detail & Related papers (2023-07-06T17:47:52Z)
- FedICT: Federated Multi-task Distillation for Multi-access Edge Computing [11.940976899954531]
Federated Multi-task Distillation for Multi-access Edge Computing (FedICT) is proposed.
FedICT keeps local and global knowledge decoupled during the bi-directional distillation processes between clients and the server.
FedICT significantly outperforms all compared benchmarks in various data heterogeneous and model architecture settings.
arXiv Detail & Related papers (2023-01-01T11:50:58Z)
- Federated Learning Using Three-Operator ADMM [13.890395923545181]
Federated learning (FL) avoids the transmission of data generated on the users' side.
We propose FedTOP-ADMM, which exploits a smooth cost function on the edge server to learn a global model in parallel with the edge devices.
arXiv Detail & Related papers (2022-11-08T10:50:29Z)
- Boosting Factorization Machines via Saliency-Guided Mixup [125.15872106335692]
We present MixFM, inspired by Mixup, to generate auxiliary training data that boosts Factorization Machines (FMs).
We also put forward a novel Factorization Machine powered by Saliency-guided Mixup (denoted as SMFM).
arXiv Detail & Related papers (2022-06-17T09:49:00Z)
- Leaf-FM: A Learnable Feature Generation Factorization Machine for Click-Through Rate Prediction [2.412497918389292]
We propose the Leaf-FM model, based on FMs, which generates new features from the original feature embeddings by learning transformation functions automatically.
Experiments are conducted on three real-world datasets, and the results show that Leaf-FM outperforms standard FMs by a large margin.
arXiv Detail & Related papers (2021-07-26T08:29:18Z)
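Several entries above (FedD2P in particular) adapt a frozen FM by training only a small prompt module. Below is a single-client sketch of that frozen-FM-plus-learnable-prompt pattern; the federated aggregation and knowledge-distillation machinery of FedD2P is omitted, and the class and parameter names are assumptions for illustration.

```python
# Minimal sketch of prompt-tuning a frozen transformer encoder, the building
# block behind FedD2P-style adaptation. Single client only; federation and
# distillation are deliberately out of scope here.
import torch
import torch.nn as nn


class PromptedFrozenFM(nn.Module):
    """Prepend learnable prompt tokens to a frozen encoder; only the prompts
    and the classification head receive gradients."""

    def __init__(self, frozen_encoder: nn.Module, embed_dim: int,
                 n_prompts: int, n_classes: int):
        super().__init__()
        self.encoder = frozen_encoder
        for p in self.encoder.parameters():
            p.requires_grad = False                     # the FM stays frozen
        self.prompts = nn.Parameter(0.02 * torch.randn(n_prompts, embed_dim))
        self.head = nn.Linear(embed_dim, n_classes)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, embed_dim), already-embedded inputs
        batch = token_embeds.size(0)
        prompts = self.prompts.unsqueeze(0).expand(batch, -1, -1)
        x = torch.cat([prompts, token_embeds], dim=1)   # prepend prompt tokens
        h = self.encoder(x)                             # frozen forward pass
        return self.head(h[:, 0])                       # classify from the first token


# Usage with a toy frozen encoder (batch_first so shapes are (B, S, D)):
# enc = nn.TransformerEncoder(
#     nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
#     num_layers=2)
# model = PromptedFrozenFM(enc, embed_dim=64, n_prompts=8, n_classes=10)
# logits = model(torch.randn(4, 16, 64))  # only prompts + head are trainable
```

Training only the prompts and head keeps the per-client trainable footprint tiny, which is what makes this pattern attractive for resource-restricted edge networks.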