Towards Efficient Task-Driven Model Reprogramming with Foundation Models
- URL: http://arxiv.org/abs/2304.02263v2
- Date: Sat, 6 May 2023 08:57:13 GMT
- Title: Towards Efficient Task-Driven Model Reprogramming with Foundation Models
- Authors: Shoukai Xu, Jiangchao Yao, Ran Luo, Shuhai Zhang, Zihao Lian, Mingkui
Tan, Bo Han, Yaowei Wang
- Abstract summary: Vision foundation models exhibit impressive power, benefiting from the extremely large model capacity and broad training data.
However, in practice, downstream scenarios may only support a small model due to the limited computational resources or efficiency considerations.
This brings a critical challenge for the real-world application of foundation models: one has to transfer the knowledge of a foundation model to the downstream task.
- Score: 52.411508216448716
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vision foundation models exhibit impressive power, benefiting from the
extremely large model capacity and broad training data. However, in practice,
downstream scenarios may only support a small model due to the limited
computational resources or efficiency considerations. Moreover, the data used
for pretraining foundation models are usually inaccessible and very different from
the target data of downstream tasks. This brings a critical challenge for the
real-world application of foundation models: one has to transfer the knowledge
of a foundation model to a downstream model with a quite different
architecture, using only the downstream target data. Existing transfer learning or
knowledge distillation methods depend on either the same model structure or
finetuning of the foundation model. Thus, naively introducing these methods can
be either infeasible or very inefficient. To address this, we propose a
Task-Driven Model Reprogramming (TDMR) framework. Specifically, we reprogram
the foundation model to project the knowledge into a proxy space, which
alleviates the adverse effect of task mismatch and domain inconsistency. Then,
we reprogram the target model via progressive distillation from the proxy space
to efficiently learn the knowledge from the reprogrammed foundation model. TDMR
is compatible with different pre-trained model types (CNN, transformer or their
mix) and limited target data, and promotes the wide applications of vision
foundation models to downstream tasks in a cost-effective manner. Extensive
experiments on different downstream classification tasks and target model
structures demonstrate the effectiveness of our methods with both CNNs and
transformer foundation models.
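To make the two-stage idea in the abstract concrete, here is a minimal PyTorch-style sketch. It is an illustration under stated assumptions rather than the authors' released implementation: the frozen foundation backbone is assumed to return feature vectors, and the additive input perturbation, the linear proxy head, the loss weights, and the plain temperature-scaled KD loss are all placeholders; the paper's progressive distillation schedule is omitted.

```python
# Illustrative sketch only: names, shapes, and losses are assumptions, not the
# authors' code. Stage 1 reprograms a frozen foundation model into a proxy
# space; stage 2 distills a small target model from that proxy space.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ReprogrammedFoundation(nn.Module):
    """Frozen foundation backbone + trainable input perturbation + a linear
    head that projects its features into a proxy space sized for the task."""

    def __init__(self, foundation: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.foundation = foundation.eval()
        for p in self.foundation.parameters():
            p.requires_grad_(False)                    # backbone is never finetuned
        self.delta = nn.Parameter(torch.zeros(1, 3, 224, 224))  # input reprogramming
        self.proj = nn.Linear(feat_dim, num_classes)             # proxy-space head

    def forward(self, x):
        feats = self.foundation(x + self.delta)        # assumes backbone returns features
        return self.proj(feats)                        # proxy-space logits


def stage1_reprogram_step(reprog, x, y, optimizer):
    """Stage 1: fit only the perturbation and proxy head on downstream labels."""
    loss = F.cross_entropy(reprog(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


def stage2_distill_step(target_model, reprog, x, y, optimizer, alpha=0.5, tau=4.0):
    """Stage 2: train the small target model against the proxy-space outputs
    (plain KD here; the paper's progressive schedule is not reproduced)."""
    with torch.no_grad():
        teacher = reprog(x)
    student = target_model(x)
    kd = F.kl_div(F.log_softmax(student / tau, dim=1),
                  F.softmax(teacher / tau, dim=1),
                  reduction="batchmean") * tau * tau
    loss = alpha * kd + (1.0 - alpha) * F.cross_entropy(student, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In stage 1 the optimizer would cover only the perturbation and proxy head, and in stage 2 only the target model's parameters, so the large backbone is never updated; this reflects the abstract's constraint that the downstream side cannot afford to finetune the foundation model.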
Related papers
- Reprogramming Distillation for Medical Foundation Models [37.52464627899668]
We propose a novel framework called Reprogramming Distillation (RD).
RD reprograms the original feature space of the foundation model so that it is more relevant to downstream scenarios.
RD consistently achieves superior performance compared with previous PEFT and KD methods.
arXiv Detail & Related papers (2024-07-09T02:17:51Z)
- The Role of Model Architecture and Scale in Predicting Molecular Properties: Insights from Fine-Tuning RoBERTa, BART, and LLaMA [0.0]
This study introduces a systematic framework to compare the efficacy of Large Language Models (LLMs) for fine-tuning across various cheminformatics tasks.
We assessed three well-known models (RoBERTa, BART, and LLaMA) on their ability to predict molecular properties.
We found that LLaMA-based models generally offered the lowest validation loss, suggesting their superior adaptability across tasks and scales.
arXiv Detail & Related papers (2024-05-02T02:20:12Z)
- Universal Domain Adaptation from Foundation Models: A Baseline Study [58.51162198585434]
We conduct empirical studies of state-of-the-art UniDA methods using foundation models.
We introduce CLIP distillation, a parameter-free method specifically designed to distill target knowledge from CLIP models.
Although simple, our method outperforms previous approaches in most benchmark tasks.
arXiv Detail & Related papers (2023-05-18T16:28:29Z)
- DST: Dynamic Substitute Training for Data-free Black-box Attack [79.61601742693713]
We propose a novel dynamic substitute training attack method to encourage the substitute model to learn better and faster from the target model.
We introduce a task-driven graph-based structure information learning constraint to improve the quality of the generated training data.
arXiv Detail & Related papers (2022-04-03T02:29:11Z)
- Model Reprogramming: Resource-Efficient Cross-Domain Machine Learning [65.268245109828]
In data-rich domains such as vision, language, and speech, deep learning prevails to deliver high-performance task-specific models.
Deep learning in resource-limited domains still faces multiple challenges including (i) limited data, (ii) constrained model development cost, and (iii) lack of adequate pre-trained models for effective finetuning.
Model reprogramming enables resource-efficient cross-domain machine learning by repurposing a well-developed pre-trained model from a source domain to solve tasks in a target domain without model finetuning (a generic sketch of this recipe appears after this list).
arXiv Detail & Related papers (2022-02-22T02:33:54Z)
- Goal-Aware Prediction: Learning to Model What Matters [105.43098326577434]
One of the fundamental challenges in using a learned forward dynamics model is the mismatch between the objective of the learned model and that of the downstream planner or policy.
We propose to direct prediction towards task relevant information, enabling the model to be aware of the current task and encouraging it to only model relevant quantities of the state space.
We find that our method more effectively models the relevant parts of the scene conditioned on the goal, and as a result outperforms standard task-agnostic dynamics models and model-free reinforcement learning.
arXiv Detail & Related papers (2020-07-14T16:42:59Z)
- Model Reuse with Reduced Kernel Mean Embedding Specification [70.044322798187]
We present a two-phase framework for finding helpful models for a current application.
In the upload phase, when a model is uploaded into the pool, we construct a reduced kernel mean embedding (RKME) as a specification for the model.
Then in the deployment phase, the relatedness of the current task and pre-trained models will be measured based on the value of the RKME specification.
arXiv Detail & Related papers (2020-01-20T15:15:07Z)
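For context on the model-reprogramming recipe summarized in the entry above (reuse a frozen source-domain model for a target task, with no finetuning), the following is a minimal, generic sketch of input reprogramming with a fixed many-to-one output label mapping. It is not code from the cited paper; the class counts, tensor shapes, and function names are illustrative assumptions.

```python
# Generic model-reprogramming sketch (illustrative, not from the cited paper):
# a frozen source-domain classifier is reused for a target task via a trainable
# additive input perturbation plus a fixed many-to-one label mapping.
import torch
import torch.nn.functional as F


def make_label_mapping(num_source_classes=1000, num_target_classes=10, seed=0):
    """Partition the source classes into one disjoint group per target class;
    the target logit is the mean of its group's source logits."""
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(num_source_classes, generator=g)
    group = num_source_classes // num_target_classes
    return [perm[i * group:(i + 1) * group] for i in range(num_target_classes)]


def reprogram_forward(source_model, delta, x, mapping):
    """Frozen source classifier + trainable input perturbation `delta`,
    followed by the fixed many-to-one output label mapping."""
    source_logits = source_model(x + delta)
    return torch.stack([source_logits[:, idx].mean(dim=1) for idx in mapping],
                       dim=1)


def reprogram_step(source_model, delta, x, y, mapping, optimizer):
    """One optimization step: only the perturbation `delta` is updated."""
    loss = F.cross_entropy(reprogram_forward(source_model, delta, x, mapping), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Typical (hypothetical) setup: delta = torch.nn.Parameter(torch.zeros(1, 3, 224, 224))
# and optimizer = torch.optim.Adam([delta], lr=0.01); the source model's own
# weights are never touched.
```

Because only the small perturbation is optimized, the compute and memory cost on the target side stays low even when the source model is large, which is the resource-efficiency argument made in that entry.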