Scaling & Shifting Your Features: A New Baseline for Efficient Model
Tuning
- URL: http://arxiv.org/abs/2210.08823v1
- Date: Mon, 17 Oct 2022 08:14:49 GMT
- Title: Scaling & Shifting Your Features: A New Baseline for Efficient Model
Tuning
- Authors: Dongze Lian, Daquan Zhou, Jiashi Feng, Xinchao Wang
- Abstract summary: Existing fine-tuning methods either tune all parameters of the pre-trained model (full fine-tuning) or only tune the last linear layer (linear probing).
We propose a new parameter-efficient fine-tuning method termed SSF, meaning that one only needs to Scale and Shift the deep Features extracted by a pre-trained model to catch up with the performance of full fine-tuning.
- Score: 126.84770886628833
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing fine-tuning methods either tune all parameters of the pre-trained
model (full fine-tuning), which is not efficient, or only tune the last linear
layer (linear probing), which suffers a significant accuracy drop compared to
full fine-tuning. In this paper, we propose a new parameter-efficient
fine-tuning method termed SSF, meaning that one only needs to Scale and Shift
the deep Features extracted by a pre-trained model to catch up
with the performance of full fine-tuning. In this way, SSF also surprisingly
outperforms other parameter-efficient fine-tuning approaches even with a
smaller number of tunable parameters. Furthermore, unlike some existing
parameter-efficient fine-tuning methods (e.g., Adapter or VPT) that introduce
extra parameters and computational cost in both the training and inference
stages, SSF only adds learnable parameters during the training stage, and these
additional parameters can be merged into the original pre-trained model weights
via re-parameterization in the inference phase. With the proposed SSF, our
model obtains 2.46% (90.72% vs. 88.54%) and 11.48% (73.10% vs. 65.57%)
performance improvement on FGVC and VTAB-1k in terms of Top-1 accuracy compared
to full fine-tuning while tuning only about 0.3M parameters. We also conduct
extensive experiments across various model families (CNNs, Transformers,
and MLPs) and datasets. Results on 26 image classification datasets in total
and 3 robustness & out-of-distribution datasets show the effectiveness of SSF.
Code is available at https://github.com/dongzelian/SSF.
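To make the mechanism concrete, below is a minimal sketch of the scale-and-shift idea and the re-parameterization merge, assuming the SSF factors follow a frozen linear layer; the class and variable names are illustrative and not the authors' implementation.

```python
import torch
import torch.nn as nn

class SSFLinear(nn.Module):
    """A frozen pre-trained linear layer followed by a learnable scale and shift."""
    def __init__(self, linear: nn.Linear):
        super().__init__()
        self.linear = linear
        for p in self.linear.parameters():
            p.requires_grad = False                       # backbone stays frozen
        d = linear.out_features
        self.gamma = nn.Parameter(torch.ones(d))          # scale factors
        self.beta = nn.Parameter(torch.zeros(d))          # shift factors

    def forward(self, x):
        return self.linear(x) * self.gamma + self.beta    # only gamma/beta are tuned

    @torch.no_grad()
    def merge(self) -> nn.Linear:
        """Fold gamma/beta into the frozen weights:
        gamma * (W x + b) + beta = (gamma[:, None] * W) x + (gamma * b + beta)."""
        merged = nn.Linear(self.linear.in_features, self.linear.out_features)
        merged.weight.copy_(self.linear.weight * self.gamma.unsqueeze(1))
        old_bias = self.linear.bias if self.linear.bias is not None else torch.zeros_like(self.beta)
        merged.bias.copy_(old_bias * self.gamma + self.beta)
        return merged
```

Because the scale multiplies the frozen weight rows and the shift folds into the bias, the merged module is a plain nn.Linear with exactly the original inference cost, which is the re-parameterization property claimed in the abstract.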
Related papers
- Forecast-PEFT: Parameter-Efficient Fine-Tuning for Pre-trained Motion Forecasting Models [68.23649978697027]
Forecast-PEFT is a fine-tuning strategy that freezes the majority of the model's parameters, focusing adjustments on newly introduced prompts and adapters.
Our experiments show that Forecast-PEFT outperforms traditional full fine-tuning methods in motion prediction tasks.
Forecast-FT further improves prediction performance, achieving up to a 9.6% improvement over conventional baseline methods.
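A rough, hypothetical sketch of the freeze-most-parameters recipe described above (the keyword-based selection and module names are placeholders, not the Forecast-PEFT code):

```python
import torch.nn as nn

def freeze_all_but_adapters(model: nn.Module, trainable_keywords=("adapter", "prompt")):
    """Freeze every parameter except those whose names contain a trainable keyword."""
    for name, param in model.named_parameters():
        param.requires_grad = any(k in name for k in trainable_keywords)
    n_train = sum(p.numel() for p in model.parameters() if p.requires_grad)
    n_total = sum(p.numel() for p in model.parameters())
    print(f"trainable parameters: {n_train}/{n_total} ({100 * n_train / n_total:.2f}%)")
```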
arXiv Detail & Related papers (2024-07-28T19:18:59Z)
- SVFT: Parameter-Efficient Fine-Tuning with Singular Vectors [80.6043267994434]
We propose SVFT, a simple approach that fundamentally differs from existing methods.
SVFT updates W as a sparse combination of outer products of its singular vectors, training only the coefficients (scales) of these sparse combinations.
Experiments on language and vision benchmarks show that SVFT recovers up to 96% of full fine-tuning performance while training only 0.006% to 0.25% of the parameters.
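A minimal sketch of that idea, assuming only the diagonal pairs u_i v_i^T are kept (SVFT allows a more general sparse set of pairs); this is an illustration, not the paper's code:

```python
import torch
import torch.nn as nn

class SVFTLinear(nn.Module):
    """Delta-W built from outer products of W's own singular vectors; only the
    per-pair coefficients are trainable (here restricted to the diagonal pairs)."""
    def __init__(self, linear: nn.Linear):
        super().__init__()
        self.register_buffer("weight", linear.weight.detach())           # frozen W
        self.register_buffer("bias", linear.bias.detach() if linear.bias is not None else None)
        U, S, Vh = torch.linalg.svd(self.weight, full_matrices=False)
        self.register_buffer("U", U)                                     # frozen left singular vectors
        self.register_buffer("Vh", Vh)                                   # frozen right singular vectors
        self.coeffs = nn.Parameter(torch.zeros(S.shape[0]))              # trainable scales, zero init

    def forward(self, x):
        delta_w = self.U @ torch.diag(self.coeffs) @ self.Vh             # sum_i c_i * u_i v_i^T
        return nn.functional.linear(x, self.weight + delta_w, self.bias)
```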
arXiv Detail & Related papers (2024-05-30T01:27:43Z)
- Low-rank Attention Side-Tuning for Parameter-Efficient Fine-Tuning [19.17362588650503]
Low-rank Attention Side-Tuning (LAST) trains a side-network composed of only low-rank self-attention modules.
We show that LAST can be highly parallelized across multiple optimization objectives, making it very efficient for downstream task adaptation.
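A sketch of what one low-rank self-attention block in such a side-network might look like; the rank, shapes, and the residual combination with the frozen backbone are assumptions, not details from the paper:

```python
import torch
import torch.nn as nn

class LowRankSelfAttention(nn.Module):
    """Self-attention whose query/key/value projections live in a small rank-r space,
    keeping the side-network cheap while the backbone stays frozen."""
    def __init__(self, dim: int, rank: int = 8):
        super().__init__()
        self.q = nn.Linear(dim, rank, bias=False)
        self.k = nn.Linear(dim, rank, bias=False)
        self.v = nn.Linear(dim, rank, bias=False)
        self.out = nn.Linear(rank, dim, bias=False)
        self.scale = rank ** -0.5

    def forward(self, x):                          # x: (batch, tokens, dim) from a frozen backbone block
        attn = torch.softmax((self.q(x) @ self.k(x).transpose(-2, -1)) * self.scale, dim=-1)
        return x + self.out(attn @ self.v(x))      # cheap residual update on the frozen features
```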
arXiv Detail & Related papers (2024-02-06T14:03:15Z)
- Gradient-based Parameter Selection for Efficient Fine-Tuning [41.30092426231482]
Gradient-based Parameter Selection (GPS) is a new parameter-efficient fine-tuning method.
GPS does not introduce any additional parameters and computational costs during both the training and inference stages.
GPS achieves 3.33% (91.78% vs. 88.45%, FGVC) and 9.61% (73.1% vs. 65.57%, VTAB) accuracy improvements while tuning only 0.36% of the pre-trained model's parameters on average over 24 image classification tasks.
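A simplified sketch of gradient-based parameter selection, assuming a global top-k by accumulated gradient magnitude (GPS's actual selection criterion is more refined, e.g. per-neuron); the helper below is illustrative only:

```python
import torch
import torch.nn as nn

def gradient_selection_masks(model: nn.Module, loss_fn, data_loader, keep_ratio: float = 0.0036):
    """Score each weight by accumulated gradient magnitude on a few batches,
    then keep only the top `keep_ratio` fraction trainable via 0/1 masks."""
    scores = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in data_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                scores[n] += p.grad.abs()
    flat = torch.cat([s.flatten() for s in scores.values()])
    k = max(1, int(keep_ratio * flat.numel()))
    threshold = torch.topk(flat, k).values.min()
    return {n: (s >= threshold).float() for n, s in scores.items()}
```

During fine-tuning, each parameter's gradient is multiplied by its mask before the optimizer step, so only the selected entries (about 0.36% in the summary above) are updated and no new parameters are introduced.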
arXiv Detail & Related papers (2023-12-15T18:59:05Z)
- Parameter-Efficient Fine-Tuning without Introducing New Latency [7.631596468553607]
We introduce a novel adapter technique that directly applies the adapter to pre-trained parameters instead of the hidden representation.
Our proposed method attains a new state-of-the-art outcome in terms of both performance and storage efficiency, storing only 0.03% of the parameters required by full fine-tuning.
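The key property, an adapter that acts on the pre-trained weights rather than on hidden activations, can be sketched as below (a generic low-rank weight-space adapter under that assumption; the paper's exact parameterization may differ):

```python
import torch
import torch.nn as nn

class WeightSpaceAdapter(nn.Module):
    """The adapter modifies W directly (W + B @ A), so it can be folded back into W
    after training and adds no latency at inference time."""
    def __init__(self, linear: nn.Linear, rank: int = 8):
        super().__init__()
        self.linear = linear
        for p in self.linear.parameters():
            p.requires_grad = False                        # pre-trained weights stay frozen
        out_f, in_f = linear.weight.shape
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, rank))    # zero init: starts identical to the backbone

    def forward(self, x):
        return nn.functional.linear(x, self.linear.weight + self.B @ self.A, self.linear.bias)

    @torch.no_grad()
    def merge(self) -> nn.Linear:
        self.linear.weight += self.B @ self.A              # fold the adapter into the frozen weights
        return self.linear
```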
arXiv Detail & Related papers (2023-05-26T08:44:42Z)
- Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning [91.5113227694443]
We propose a novel Sensitivity-aware visual Parameter-efficient fine-Tuning (SPT) scheme.
SPT allocates trainable parameters to task-specific important positions.
Experiments on a wide range of downstream recognition tasks show that our SPT is complementary to the existing PEFT methods.
arXiv Detail & Related papers (2023-03-15T12:34:24Z)
- On the Effectiveness of Parameter-Efficient Fine-Tuning [79.6302606855302]
Currently, many research works propose to only fine-tune a small portion of the parameters while keeping most of the parameters shared across different tasks.
We show that all of the methods are actually sparse fine-tuned models and conduct a novel theoretical analysis of them.
Despite the effectiveness of sparsity grounded in our theory, how to choose the tunable parameters remains an open problem.
arXiv Detail & Related papers (2022-11-28T17:41:48Z)
- AlphaTuning: Quantization-Aware Parameter-Efficient Adaptation of Large-Scale Pre-Trained Language Models [19.640997611256168]
We propose AlphaTuning, consisting of post-training quantization of the pre-trained language model and fine-tuning only some parts of quantized parameters for a target task.
Specifically, AlphaTuning works by employing binary-coding quantization, which factorizes the full-precision parameters into binary parameters and a separate set of scaling factors.
We demonstrate that AlphaTuning, when applied to GPT-2 and OPT, performs competitively with full fine-tuning on a variety of downstream tasks while achieving >10x compression ratio under 4-bit quantization and >1,000x reduction in the number of trainable parameters.
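A stripped-down sketch of that factorization, assuming a single binary matrix with one scaling factor per output row (real binary-coding quantization uses several binary matrices per weight, and the class below is not the AlphaTuning code):

```python
import torch
import torch.nn as nn

class AlphaTunedLinear(nn.Module):
    """Weights stored as frozen 1-bit codes with per-row scaling factors;
    only the scaling factors (alpha) are fine-tuned for the target task."""
    def __init__(self, linear: nn.Linear):
        super().__init__()
        w = linear.weight.detach()
        self.register_buffer("codes", torch.sign(w))               # frozen binary codes (+1 / -1)
        self.alpha = nn.Parameter(w.abs().mean(dim=1))             # trainable per-row scales
        self.bias = nn.Parameter(linear.bias.detach()) if linear.bias is not None else None

    def forward(self, x):
        w = self.codes * self.alpha.unsqueeze(1)                   # reconstruct W ~= alpha * codes
        return nn.functional.linear(x, w, self.bias)
```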
arXiv Detail & Related papers (2022-10-08T00:36:00Z)
- Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning [81.3514358542452]
Few-shot in-context learning (ICL) incurs substantial computational, memory, and storage costs because it involves processing all of the training examples every time a prediction is made.
Parameter-efficient fine-tuning offers an alternative paradigm in which a small set of parameters is trained to enable a model to perform the new task.
In this paper, we rigorously compare few-shot ICL and parameter-efficient fine-tuning and demonstrate that the latter offers better accuracy as well as dramatically lower computational costs.
arXiv Detail & Related papers (2022-05-11T17:10:41Z)