Evaluating Parameter Efficient Learning for Generation
- URL: http://arxiv.org/abs/2210.13673v1
- Date: Tue, 25 Oct 2022 00:14:48 GMT
- Title: Evaluating Parameter Efficient Learning for Generation
- Authors: Peng Xu, Mostofa Patwary, Shrimai Prabhumoye, Virginia Adams, Ryan J.
Prenger, Wei Ping, Nayeon Lee, Mohammad Shoeybi and Bryan Catanzaro
- Abstract summary: We present comparisons between PERMs and finetuning from three new perspectives.
Our results show that for in-domain settings (a) there is a cross point of sample size below which PERMs perform better than finetuning, and (b) larger PLMs have larger cross points.
We also compare the faithfulness of generations and show that PERMs can achieve better faithfulness scores than finetuning, by as much as 6%, especially for small training sets.
- Score: 32.52577462253145
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Parameter efficient learning methods (PERMs) have recently gained significant
attention as they provide an efficient way for pre-trained language models
(PLMs) to adapt to a downstream task. However, these conclusions are mostly
drawn from in-domain evaluations over the full training set. In this paper, we
present comparisons between PERMs and finetuning from three new perspectives:
(1) the effect of sample and model size to in-domain evaluations, (2)
generalization to unseen domains and new datasets, and (3) the faithfulness of
generations. Our results show that for in-domain settings (a) there is a cross
point of sample size for which PERMs will perform better than finetuning when
training with fewer samples, and (b) larger PLMs have larger cross points. For
cross-domain and cross-dataset cases, we show that (a) Adapter (Houlsby et al.,
2019) performs the best amongst all the PERMs studied here, and (b) it
outperforms finetuning if the task dataset is below a certain size. We also
compare the faithfulness of generations and show that PERMs can achieve better
faithfulness scores than finetuning, especially for small training sets, by as
much as 6%. Finally, we apply Adapter to MT-NLG 530b (Smith et al., 2022) and
achieve new state-of-the-art results on Xsum (Narayan et al., 2018) for all
ROUGE scores (ROUGE-1 49.17, ROUGE-2 27.20, ROUGE-L 40.98).
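For context on the Adapter method referenced above (Houlsby et al., 2019), here is a minimal PyTorch sketch of a bottleneck adapter. The hidden size, bottleneck width, and activation are illustrative assumptions, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Minimal Houlsby-style adapter: down-project, non-linearity,
    up-project, residual connection. Only these weights are trained;
    the surrounding PLM parameters stay frozen."""

    def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.ReLU()
        # Near-zero up-projection so the adapted layer starts as an
        # identity perturbation of the frozen PLM.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return hidden_states + self.up(self.act(self.down(hidden_states)))

adapter = BottleneckAdapter()
x = torch.randn(2, 16, 768)   # (batch, sequence length, hidden size)
print(adapter(x).shape)       # torch.Size([2, 16, 768])
```

In the original Adapter design, such a module is inserted after the attention and feed-forward sublayers of each transformer block, and only the adapter (and layer-norm) parameters receive gradients, which is what makes the approach parameter efficient.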
Related papers
- Adaptive Inference-Time Compute: LLMs Can Predict if They Can Do Better, Even Mid-Generation [51.127054971591924]
We introduce a new generative self-evaluation scheme designed to adaptively reduce the number of generated samples.
We demonstrate that 74% of the improvement from using 16 samples can be achieved with only 1.2 samples on average.
arXiv Detail & Related papers (2024-10-03T17:47:29Z)
- Fisher Information-based Efficient Curriculum Federated Learning with Large Language Models [43.26028399395612]
We propose a Fisher Information-based Efficient Curriculum Federated Learning framework (FibecFed) with two novel methods.
First, we propose a Fisher information-based method to adaptively sample data within each device to improve the effectiveness of the FL fine-tuning process (a rough illustration follows below).
Second, we dynamically select the proper layers for global aggregation and sparse parameters for local update with LoRA.
arXiv Detail & Related papers (2024-09-30T18:12:18Z)
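As a rough, non-federated illustration of the Fisher-information-based sampling idea in FibecFed above, the sketch below scores each sample by the squared gradient norm of its loss (an empirical, diagonal Fisher proxy) and keeps the highest-scoring samples. The model, data, and selection ratio are toy assumptions, not the FibecFed implementation, and the LoRA layer-selection part is not shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fisher_scores(model: nn.Module, xs: torch.Tensor, ys: torch.Tensor) -> torch.Tensor:
    """Score each sample by the squared gradient norm of its loss,
    a diagonal empirical-Fisher proxy for how informative it is."""
    scores = []
    for x, y in zip(xs, ys):
        model.zero_grad()
        F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        scores.append(sum((p.grad ** 2).sum()
                          for p in model.parameters() if p.grad is not None))
    return torch.stack(scores)

# Toy local dataset on one device: keep the most informative half.
model = nn.Linear(10, 3)
xs, ys = torch.randn(8, 10), torch.randint(0, 3, (8,))
keep = torch.topk(fisher_scores(model, xs, ys), k=4).indices
print("selected sample indices:", keep.tolist())
```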
- Fine-tuning Large Language Models for Entity Matching [3.7277730514654555]
Generative large language models (LLMs) are a promising alternative to pre-trained language models for entity matching.
This paper explores the potential of fine-tuning LLMs for entity matching.
arXiv Detail & Related papers (2024-09-12T16:20:57Z)
- NUDGE: Lightweight Non-Parametric Fine-Tuning of Embeddings for Retrieval [0.7646713951724011]
Existing approaches either fine-tune the pre-trained model itself or, more efficiently, train adaptor models to transform the output of the pre-trained model.
We present NUDGE, a family of novel non-parametric embedding fine-tuning approaches.
NUDGE directly modifies the embeddings of data records to maximize the accuracy of $k$-NN retrieval (see the sketch below).
arXiv Detail & Related papers (2024-09-04T00:10:36Z)
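The sketch below illustrates the general idea behind NUDGE above, tuning the record embeddings themselves rather than any model weights, using a simplified gradient-based variant. The actual NUDGE methods compute the update differently, so the softmax loss, temperature, learning rate, and toy data here are assumptions.

```python
import torch
import torch.nn.functional as F

def tune_embeddings(doc_emb, query_emb, relevant_idx, lr=0.1, steps=100):
    """Directly optimize the data-record embeddings (no model weights)
    so that each training query ranks its relevant record highest."""
    emb = doc_emb.clone().requires_grad_(True)
    opt = torch.optim.SGD([emb], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        sims = F.normalize(query_emb, dim=-1) @ F.normalize(emb, dim=-1).T
        F.cross_entropy(sims / 0.05, relevant_idx).backward()  # temperature-scaled
        opt.step()
    return F.normalize(emb.detach(), dim=-1)

# Toy corpus of 100 records with 20 labeled training queries.
docs = F.normalize(torch.randn(100, 32), dim=-1)
queries = F.normalize(torch.randn(20, 32), dim=-1)
labels = torch.randint(0, 100, (20,))
tuned = tune_embeddings(docs, queries, labels)
top1 = (queries @ tuned.T).argmax(dim=1)
print("top-1 retrieval accuracy:", (top1 == labels).float().mean().item())
```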
- Just How Flexible are Neural Networks in Practice? [89.80474583606242]
It is widely believed that a neural network can fit a training set containing at least as many samples as it has parameters.
In practice, however, we only find the solutions reachable by our training procedure, including the gradient-based optimizer and regularizers, which limits flexibility.
arXiv Detail & Related papers (2024-06-17T12:24:45Z)
- Parameter-Efficient Fine-Tuning With Adapters [5.948206235442328]
This research introduces a novel adaptation method utilizing the UniPELT framework as a base.
Our method employs adapters, which enable efficient transfer of pretrained models to new tasks with minimal retraining of the base model parameters.
arXiv Detail & Related papers (2024-05-09T01:40:38Z)
- DoGE: Domain Reweighting with Generalization Estimation [42.32000165235568]
We propose DOmain reweighting with Generalization Estimation (DoGE).
In our experiments, we extensively show how DoGE improves the generalization of the base model to any target data mixture.
DoGE can effectively identify inter-domain dependencies and consistently achieves better test perplexity on the target domain (a toy illustration of domain reweighting follows below).
arXiv Detail & Related papers (2023-10-23T22:51:58Z)
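One common way to make "generalization estimation" concrete is to score each training domain by how well its gradient aligns with the gradient on a held-out target batch, then turn the scores into sampling weights. The toy sketch below does exactly that; it illustrates the general recipe under that assumption and is not the DoGE algorithm itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def domain_weights(model, domain_batches, target_batch, temperature=1.0):
    """Weight each training domain by the alignment (inner product) between
    its gradient and the gradient on a held-out target batch."""
    def flat_grad(x, y):
        model.zero_grad()
        F.cross_entropy(model(x), y).backward()
        return torch.cat([p.grad.flatten() for p in model.parameters()])

    g_target = flat_grad(*target_batch)
    scores = torch.stack([g_target @ flat_grad(x, y) for x, y in domain_batches])
    return F.softmax(scores / temperature, dim=0)  # domain sampling weights

model = nn.Linear(16, 4)
domains = [(torch.randn(8, 16), torch.randint(0, 4, (8,))) for _ in range(3)]
target = (torch.randn(8, 16), torch.randint(0, 4, (8,)))
print(domain_weights(model, domains, target))
```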
- General vs. Long-Tailed Age Estimation: An Approach to Kill Two Birds with One Stone [48.849311629912734]
We propose a simple, effective, and flexible training paradigm named GLAE, which is two-fold.
Our GLAE provides a surprising improvement on Morph II, reaching the lowest MAE and CMAE of 1.14 and 1.27 years, respectively.
arXiv Detail & Related papers (2023-07-19T16:51:59Z)
- Self-Supervised Pre-Training for Transformer-Based Person Re-Identification [54.55281692768765]
Transformer-based supervised pre-training achieves great performance in person re-identification (ReID).
Due to the domain gap between ImageNet and ReID datasets, it usually needs a larger pre-training dataset to boost the performance.
This work aims to mitigate the gap between the pre-training and ReID datasets from the perspective of data and model structure.
arXiv Detail & Related papers (2021-11-23T18:59:08Z)
- DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models [152.29364079385635]
As pre-trained models grow bigger, the fine-tuning process can be time-consuming and computationally expensive.
We propose a framework for resource- and parameter-efficient fine-tuning by leveraging the sparsity prior in both weight updates and the final model weights.
Our proposed framework, dubbed Dually Sparsity-Embedded Efficient Tuning (DSEE), aims to achieve two key objectives: (i) parameter-efficient fine-tuning and (ii) resource-efficient inference; a simplified sketch of the fine-tuning side follows below.
arXiv Detail & Related papers (2021-10-30T03:29:47Z)
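A minimal sketch of the fine-tuning side of this idea, a frozen weight plus a low-rank and a sparse trainable delta, is shown below. The rank, sparsity level, random mask, and initialization are assumptions for illustration; DSEE's inference-side weight sparsification is not shown.

```python
import torch
import torch.nn as nn

class SparseLowRankDelta(nn.Module):
    """Frozen weight plus a trainable low-rank update (u @ v) and a
    trainable sparse update (mask * s); only the deltas are learned."""

    def __init__(self, weight: torch.Tensor, rank: int = 4, sparsity: float = 0.01):
        super().__init__()
        self.register_buffer("w_frozen", weight.detach())
        out_dim, in_dim = weight.shape
        self.u = nn.Parameter(torch.zeros(out_dim, rank))  # zero init: delta starts at 0
        self.v = nn.Parameter(torch.randn(rank, in_dim) * 0.01)
        self.register_buffer("mask", (torch.rand_like(weight) < sparsity).float())
        self.s = nn.Parameter(torch.zeros_like(weight))    # only masked entries matter

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.w_frozen + self.u @ self.v + self.mask * self.s
        return x @ w.T

layer = SparseLowRankDelta(torch.randn(32, 16))
print(layer(torch.randn(4, 16)).shape)  # torch.Size([4, 32])
```

A real implementation would store the sparse component in coordinate form rather than as a dense masked matrix.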
- Parameter-Efficient Transfer from Sequential Behaviors for User Modeling and Recommendation [111.44445634272235]
In this paper, we develop a parameter-efficient transfer learning architecture, termed PeterRec.
PeterRec keeps the pre-trained parameters unaltered during fine-tuning by injecting a series of re-learned neural networks (the general recipe is sketched below).
We perform extensive experimental ablation to show the effectiveness of the learned user representation in five downstream tasks.
arXiv Detail & Related papers (2020-01-13T14:09:54Z)
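The general recipe behind this line of work, keeping the pre-trained parameters untouched and training only small injected modules, can be sketched as follows. The backbone, module sizes, and task head are hypothetical stand-ins, not PeterRec's architecture.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pre-trained backbone; its weights stay unaltered.
backbone = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 64))
for p in backbone.parameters():
    p.requires_grad_(False)

# Small injected module and task head re-learned for the downstream task.
patch = nn.Sequential(nn.Linear(64, 16), nn.ReLU(), nn.Linear(16, 64))
head = nn.Linear(64, 5)
model = nn.Sequential(backbone, patch, head)

trainable = [p for p in model.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable), "trainable of",
      sum(p.numel() for p in model.parameters()), "total parameters")

# Only the injected parameters receive gradients during fine-tuning.
opt = torch.optim.Adam(trainable, lr=1e-3)
loss = nn.functional.cross_entropy(model(torch.randn(8, 64)),
                                   torch.randint(0, 5, (8,)))
loss.backward()
opt.step()
```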
This list is automatically generated from the titles and abstracts of the papers on this site.