PAC-Net: A Model Pruning Approach to Inductive Transfer Learning
- URL: http://arxiv.org/abs/2206.05703v1
- Date: Sun, 12 Jun 2022 09:45:16 GMT
- Title: PAC-Net: A Model Pruning Approach to Inductive Transfer Learning
- Authors: Sanghoon Myung, In Huh, Wonik Jang, Jae Myung Choe, Jisu Ryu, Dae Sin
Kim, Kee-Eung Kim, Changwook Jeong
- Abstract summary: PAC-Net is a simple yet effective approach for transfer learning based on pruning.
PAC-Net consists of three steps: Prune, Allocate, and Calibrate.
Across a varied and extensive set of inductive transfer learning experiments, we show that our method achieves state-of-the-art performance by a large margin.
- Score: 16.153557870191488
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inductive transfer learning aims to learn from a small amount of training
data for the target task by utilizing a pre-trained model from the source task.
Most strategies that involve large-scale deep learning models adopt
initialization with the pre-trained model and fine-tuning for the target task.
However, when using over-parameterized models, we can often prune the model
without sacrificing the accuracy of the source task. This motivates us to adopt
model pruning for transfer learning with deep learning models. In this paper,
we propose PAC-Net, a simple yet effective approach for transfer learning based
on pruning. PAC-Net consists of three steps: Prune, Allocate, and Calibrate
(PAC). The main idea behind these steps is to identify essential weights for
the source task, fine-tune on the source task by updating the essential
weights, and then calibrate on the target task by updating the remaining
redundant weights. Across a varied and extensive set of inductive transfer
learning experiments, we show that our method achieves state-of-the-art
performance by a large margin.
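The three steps map naturally onto masked gradient updates. Below is a minimal sketch, assuming PyTorch and a simple magnitude-based pruning criterion; the keep ratio, optimizer, and training loop are illustrative assumptions rather than the paper's exact procedure.

```python
# Minimal PAC-Net-style sketch (assumed PyTorch; magnitude pruning, SGD, and the
# keep ratio are illustrative choices, not the paper's exact configuration).
import torch
import torch.nn as nn

def magnitude_masks(model, keep_ratio=0.5):
    """Prune: mark the largest-magnitude weights as 'essential' (mask == 1)."""
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:  # mask weight matrices; biases stay unmasked for simplicity
            k = max(1, int(keep_ratio * p.numel()))
            thresh = p.detach().abs().flatten().kthvalue(p.numel() - k + 1).values
            masks[name] = (p.detach().abs() >= thresh).float()
    return masks

def train_with_mask(model, loader, masks, update_essential, epochs=1, lr=1e-2):
    """Allocate (update_essential=True): fine-tune essential weights on the source task.
    Calibrate (update_essential=False): update the remaining redundant weights on the target task."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)  # SGD: zero gradient means no update
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            for name, p in model.named_parameters():
                if name in masks and p.grad is not None:
                    # gate out gradients of the weights this phase is not allowed to change
                    gate = masks[name] if update_essential else 1.0 - masks[name]
                    p.grad.mul_(gate)
            opt.step()
```

Calling `train_with_mask(model, source_loader, masks, update_essential=True)` and then `train_with_mask(model, target_loader, masks, update_essential=False)` mirrors the Allocate and Calibrate steps, with the Prune step supplying the masks.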
Related papers
- Simplified Temporal Consistency Reinforcement Learning [19.814047499837084]
We show that a simple representation learning approach relying on a latent dynamics model trained by latent temporal consistency is sufficient for high-performance RL.
Our approach outperforms model-free methods by a large margin and matches model-based methods' sample efficiency while training 2.4 times faster.
arXiv Detail & Related papers (2023-06-15T19:37:43Z)
- Model-Based Reinforcement Learning with Multi-Task Offline Pretraining [59.82457030180094]
We present a model-based RL method that learns to transfer potentially useful dynamics and action demonstrations from offline data to a novel task.
The main idea is to use the world models not only as simulators for behavior learning but also as tools to measure the task relevance.
We demonstrate the advantages of our approach compared with the state-of-the-art methods in Meta-World and DeepMind Control Suite.
arXiv Detail & Related papers (2023-06-06T02:24:41Z)
- Towards Efficient Task-Driven Model Reprogramming with Foundation Models [52.411508216448716]
Vision foundation models exhibit impressive power, benefiting from the extremely large model capacity and broad training data.
However, in practice, downstream scenarios may only support a small model due to the limited computational resources or efficiency considerations.
This brings a critical challenge for the real-world application of foundation models: one has to transfer the knowledge of a foundation model to the downstream task.
arXiv Detail & Related papers (2023-04-05T07:28:33Z)
- Scalable Weight Reparametrization for Efficient Transfer Learning [10.265713480189486]
Efficient transfer learning involves utilizing a pre-trained model trained on a larger dataset and repurposing it for downstream tasks.
Previous works have increased the number of updated parameters and task-specific modules, resulting in more computation, especially for tiny models.
We suggest learning a policy network that can decide where to reparametrize the pre-trained model, while adhering to a given constraint for the number of updated parameters.
arXiv Detail & Related papers (2023-02-26T23:19:11Z)
- Voting from Nearest Tasks: Meta-Vote Pruning of Pre-trained Models for Downstream Tasks [55.431048995662714]
We create a small model for a new task from the pruned models of similar tasks.
We show that a few fine-tuning steps on this model suffice to produce a promising pruned-model for the new task.
We develop a simple but effective "Meta-Vote Pruning (MVP)" method that significantly reduces the pruning iterations for a new task.
arXiv Detail & Related papers (2023-01-27T06:49:47Z)
- Hub-Pathway: Transfer Learning from A Hub of Pre-trained Models [89.44031286278347]
We propose a Hub-Pathway framework to enable knowledge transfer from a model hub.
The proposed framework can be trained end-to-end with the target task-specific loss.
Experimental results on computer vision and reinforcement learning tasks demonstrate that the framework achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-06-08T08:00:12Z)
- Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Priors [59.93972277761501]
We show that we can learn highly informative posteriors from the source task, through supervised or self-supervised approaches.
This simple modular approach enables significant performance gains and more data-efficient learning on a variety of downstream classification and segmentation tasks.
arXiv Detail & Related papers (2022-05-20T16:19:30Z)
- Task Adaptive Parameter Sharing for Multi-Task Learning [114.80350786535952]
Task Adaptive Parameter Sharing (TAPS) is a method for tuning a base model to a new task by adaptively modifying a small, task-specific subset of layers.
Compared to other methods, TAPS retains high accuracy on downstream tasks while introducing few task-specific parameters.
We evaluate our method on a suite of fine-tuning tasks and architectures (ResNet, DenseNet, ViT) and show that it achieves state-of-the-art performance while being simple to implement.
arXiv Detail & Related papers (2022-03-30T23:16:07Z)
- Bridging Pre-trained Models and Downstream Tasks for Source Code Understanding [13.65914588243695]
We propose an approach to bridge pre-trained models and code-related tasks.
We exploit semantic-preserving transformations to enrich downstream data diversity.
We introduce curriculum learning to organize the transformed data in an easy-to-hard manner to fine-tune existing pre-trained models.
arXiv Detail & Related papers (2021-12-04T07:21:28Z)
- Merging Models with Fisher-Weighted Averaging [24.698591753644077]
We introduce a fundamentally different method for transferring knowledge across models that amounts to "merging" multiple models into one.
Our approach effectively involves computing a weighted average of the models' parameters.
We show that our merging procedure makes it possible to combine models in previously unexplored ways.
arXiv Detail & Related papers (2021-11-18T17:59:35Z)
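As a rough illustration of the Fisher-weighted averaging described in the entry above, the sketch below merges the parameters of several models using per-parameter Fisher weights. PyTorch is assumed, and the interface (precomputed diagonal Fisher estimates passed as per-model dictionaries) is an assumption for illustration, not the paper's released code.

```python
# Hedged sketch of Fisher-weighted parameter averaging (assumed PyTorch;
# `fishers` holds precomputed diagonal Fisher estimates, one dict per model).
import torch

def fisher_weighted_merge(state_dicts, fishers, eps=1e-8):
    """Merge models by a per-parameter average weighted by Fisher information."""
    merged = {}
    for name in state_dicts[0]:
        num = sum(f[name] * sd[name] for sd, f in zip(state_dicts, fishers))
        den = sum(f[name] for f in fishers) + eps
        merged[name] = num / den
    return merged
```

With uniform Fisher estimates this reduces to plain parameter averaging; the Fisher weights bias the merge toward each model's most confidently estimated parameters.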