Intrinsic Dimensionality Explains the Effectiveness of Language Model
Fine-Tuning
- URL: http://arxiv.org/abs/2012.13255v1
- Date: Tue, 22 Dec 2020 07:42:30 GMT
- Title: Intrinsic Dimensionality Explains the Effectiveness of Language Model
Fine-Tuning
- Authors: Armen Aghajanyan, Luke Zettlemoyer, Sonal Gupta
- Abstract summary: We argue that analyzing fine-tuning through the lens of intrinsic dimension provides us with empirical and theoretical intuitions.
We empirically show that common pre-trained models have a very low intrinsic dimension.
- Score: 52.624194343095304
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although pretrained language models can be fine-tuned to produce
state-of-the-art results for a very wide range of language understanding tasks,
the dynamics of this process are not well understood, especially in the low
data regime. Why can we use relatively vanilla gradient descent algorithms
(e.g., without strong regularization) to tune a model with hundreds of millions
of parameters on datasets with only hundreds or thousands of labeled examples?
In this paper, we argue that analyzing fine-tuning through the lens of
intrinsic dimension provides us with empirical and theoretical intuitions to
explain this remarkable phenomenon. We empirically show that common pre-trained
models have a very low intrinsic dimension; in other words, there exists a low
dimension reparameterization that is as effective for fine-tuning as the full
parameter space. For example, by optimizing only 200 trainable parameters
randomly projected back into the full space, we can tune a RoBERTa model to
achieve 90% of the full parameter performance levels on MRPC. Furthermore, we
empirically show that pre-training implicitly minimizes intrinsic dimension
and, perhaps surprisingly, larger models tend to have lower intrinsic dimension
after a fixed number of pre-training updates, at least in part explaining their
extreme effectiveness. Lastly, we connect intrinsic dimensionality with low
dimensional task representations and compression based generalization bounds to
provide intrinsic-dimension-based generalization bounds that are independent of
the full parameter count.
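To make the reparameterization concrete, here is a minimal PyTorch sketch (not the authors' code) of fine-tuning one linear layer through a fixed random projection from a d-dimensional subspace, so that only d parameters receive gradients. The class name, layer sizes, and dense projection are illustrative assumptions; the paper shares the low-dimensional vector across the network and uses structured (Fastfood-style) projections to keep memory manageable.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IntrinsicLinear(nn.Module):
    """Linear layer whose weight is reparameterized as W = W0 + reshape(P @ theta_d).

    Only the d-dimensional vector theta_d is trained; the pre-trained weight W0
    and the random projection P stay frozen. The dense projection here is for
    clarity only and is memory-hungry for large layers.
    """

    def __init__(self, pretrained: nn.Linear, d: int = 200):
        super().__init__()
        self.register_buffer("w0", pretrained.weight.detach().clone())
        self.register_buffer("b0", pretrained.bias.detach().clone())
        # Fixed random projection from the d-dimensional subspace to the full space.
        self.register_buffer("proj", torch.randn(self.w0.numel(), d) / d ** 0.5)
        # The only trainable parameters; zero init means training starts at W0.
        self.theta_d = nn.Parameter(torch.zeros(d))

    def forward(self, x):
        w = self.w0 + (self.proj @ self.theta_d).view_as(self.w0)
        return F.linear(x, w, self.b0)

# Usage sketch: only the 200 entries of theta_d receive gradients.
layer = IntrinsicLinear(nn.Linear(768, 768), d=200)
optimizer = torch.optim.Adam([layer.theta_d], lr=1e-3)
loss = layer(torch.randn(4, 768)).pow(2).mean()
loss.backward()
optimizer.step()
```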
Related papers
- Scaling Exponents Across Parameterizations and Optimizers [94.54718325264218]
We propose a new perspective on parameterization by investigating a key assumption in prior work.
Our empirical investigation includes tens of thousands of models trained with all combinations of the optimizers and parameterizations under study.
We find that the best learning rate scaling prescription would often have been excluded by the assumptions in prior work.
arXiv Detail & Related papers (2024-07-08T12:32:51Z)
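For readers unfamiliar with the term, a "learning rate scaling prescription" assigns each layer a learning rate as a function of its width. The sketch below implements one commonly cited muP-style rule (learning rate inversely proportional to fan-in for Adam-type optimizers) purely as an assumed illustration; it is not the prescription studied or recommended by this paper.
```python
import torch
import torch.nn as nn

def width_scaled_param_groups(model: nn.Module, base_lr: float = 1e-3,
                              base_width: int = 256):
    """One illustrative prescription: scale each linear layer's learning rate
    by base_width / fan_in, so wider layers get smaller learning rates."""
    groups = []
    for module in model.modules():
        if isinstance(module, nn.Linear):
            groups.append({
                "params": module.parameters(),
                "lr": base_lr * base_width / module.in_features,
            })
    return groups

model = nn.Sequential(nn.Linear(256, 1024), nn.ReLU(), nn.Linear(1024, 10))
optimizer = torch.optim.AdamW(width_scaled_param_groups(model))
```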
- Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation [12.07880147193174]
We show that by leveraging the inherent low-dimensional structures of data and compressible dynamics within the model parameters, we can reap the benefits of overparameterization without the computational burdens.
We demonstrate the effectiveness of this approach for deep low-rank matrix completion as well as fine-tuning language models.
arXiv Detail & Related papers (2024-06-06T14:29:49Z)
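As a loose illustration of low-rank adaptation of an overparameterized model (not the authors' algorithm), the sketch below adapts a frozen linear layer through a trainable rank-r factorization U @ V, so the trainable parameter count scales with the rank rather than with the full matrix; module and variable names are assumptions.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LowRankAdapter(nn.Module):
    """Adapt a frozen linear layer with a rank-r update: W_eff = W0 + U @ V.

    Only the small factors U and V are trained, so the number of trainable
    parameters grows with the rank r, not with the full matrix size.
    """

    def __init__(self, pretrained: nn.Linear, rank: int = 8):
        super().__init__()
        out_f, in_f = pretrained.weight.shape
        self.register_buffer("w0", pretrained.weight.detach().clone())
        self.register_buffer("b0", pretrained.bias.detach().clone())
        self.u = nn.Parameter(torch.randn(out_f, rank) * 0.01)
        self.v = nn.Parameter(torch.zeros(rank, in_f))  # zero init => start at W0

    def forward(self, x):
        return F.linear(x, self.w0 + self.u @ self.v, self.b0)

adapter = LowRankAdapter(nn.Linear(768, 768), rank=8)
# Only 2 * 768 * 8 = 12288 parameters are trainable.
print(sum(p.numel() for p in adapter.parameters() if p.requires_grad))
```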
- Data-free Weight Compress and Denoise for Large Language Models [101.53420111286952]
We propose a novel approach termed Data-free Joint Rank-k Approximation for compressing the parameter matrices.
We prune 80% of the parameters while retaining 93.43% of the original performance, without any calibration data.
arXiv Detail & Related papers (2024-02-26T05:51:47Z)
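A minimal sketch of what a rank-k approximation of a single weight matrix looks like with a plain truncated SVD; the paper's data-free joint variant across matrices is more involved, and the sizes and k below are arbitrary.
```python
import torch

def rank_k_approx(w: torch.Tensor, k: int):
    """Return factors (A, B) such that A @ B is the best rank-k approximation of w."""
    u, s, vh = torch.linalg.svd(w, full_matrices=False)
    a = u[:, :k] * s[:k]   # (out, k), singular values folded into A
    b = vh[:k, :]          # (k, in)
    return a, b

w = torch.randn(1024, 1024)
a, b = rank_k_approx(w, k=64)
stored = (a.numel() + b.numel()) / w.numel()
error = torch.linalg.norm(w - a @ b) / torch.linalg.norm(w)
print(f"stored params: {stored:.2%}, relative error: {error:.3f}")
```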
- Scaling Pre-trained Language Models to Deeper via Parameter-efficient Architecture [68.13678918660872]
We design a more capable parameter-sharing architecture based on the matrix product operator (MPO).
MPO decomposition can reorganize and factorize the information of a parameter matrix into two parts.
Our architecture shares the central tensor across all layers to reduce the model size.
arXiv Detail & Related papers (2023-03-27T02:34:09Z)
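The sketch below conveys the flavor of an MPO-style factorization under simplifying assumptions: a weight matrix is reshaped into a higher-order tensor and split by a single SVD into a small auxiliary tensor and a larger central tensor (the part that could be shared across layers). The real architecture uses a longer tensor chain, so treat this as an illustration rather than the paper's exact decomposition.
```python
import torch

def two_site_mpo(w: torch.Tensor, m=(32, 24), n=(32, 24), bond: int = 64):
    """Split a (m1*m2) x (n1*n2) matrix into two local tensors via one SVD.

    Returns an auxiliary tensor of shape (m1, n1, bond) and a central tensor of
    shape (bond, m2, n2); contracting them over the bond index reconstructs an
    approximation of w.
    """
    m1, m2 = m
    n1, n2 = n
    t = w.reshape(m1, m2, n1, n2).permute(0, 2, 1, 3).reshape(m1 * n1, m2 * n2)
    u, s, vh = torch.linalg.svd(t, full_matrices=False)
    aux = (u[:, :bond] * s[:bond]).reshape(m1, n1, bond)
    central = vh[:bond, :].reshape(bond, m2, n2)
    return aux, central

w = torch.randn(32 * 24, 32 * 24)
aux, central = two_site_mpo(w)
# Reconstruct: contract over the bond dimension, then undo the reshapes.
recon = (torch.einsum("mnb,bpq->mnpq", aux, central)
         .permute(0, 2, 1, 3).reshape(32 * 24, 32 * 24))
print(torch.linalg.norm(w - recon) / torch.linalg.norm(w))
```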
- AlphaTuning: Quantization-Aware Parameter-Efficient Adaptation of Large-Scale Pre-Trained Language Models [19.640997611256168]
We propose AlphaTuning, consisting of post-training quantization of the pre-trained language model and fine-tuning only some parts of quantized parameters for a target task.
Specifically, AlphaTuning works by employing binary-coding quantization, which factorizes the full-precision parameters into binary parameters and a separate set of scaling factors.
We demonstrate that AlphaTuning, when applied to GPT-2 and OPT, performs competitively with full fine-tuning on a variety of downstream tasks while achieving >10x compression ratio under 4-bit quantization and >1,000x reduction in the number of trainable parameters.
arXiv Detail & Related papers (2022-10-08T00:36:00Z)
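A minimal sketch of greedy binary-coding quantization, the factorization AlphaTuning builds on: the full-precision matrix is approximated as a sum of binary matrices times per-row scaling factors, and fine-tuning would then update only the scaling factors. The row-wise grouping and bit width here are assumptions for illustration.
```python
import torch
import torch.nn as nn

def binary_coding_quantize(w: torch.Tensor, bits: int = 4):
    """Greedy binary-coding quantization: w ~ sum_i alpha_i * b_i,
    with b_i in {-1, +1} and one scaling factor per row and per bit."""
    residual = w.clone()
    alphas, codes = [], []
    for _ in range(bits):
        b = torch.sign(residual)
        b[b == 0] = 1.0                                   # avoid zero codes
        alpha = residual.abs().mean(dim=1, keepdim=True)  # optimal per-row scale
        alphas.append(alpha)
        codes.append(b)
        residual = residual - alpha * b
    return alphas, codes

def dequantize(alphas, codes):
    return sum(a * b for a, b in zip(alphas, codes))

w = torch.randn(512, 512)
alphas, codes = binary_coding_quantize(w, bits=4)
# AlphaTuning would freeze the binary codes and fine-tune only the alphas:
trainable = [nn.Parameter(a) for a in alphas]
rel_err = torch.linalg.norm(w - dequantize(trainable, codes)) / torch.linalg.norm(w)
print(rel_err.item())
```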
- MoEfication: Conditional Computation of Transformer Models for Efficient Inference [66.56994436947441]
Transformer-based pre-trained language models achieve superior performance on most NLP tasks thanks to their large parameter capacity, but that capacity also incurs a huge computation cost.
We explore accelerating large-model inference with conditional computation based on the sparse-activation phenomenon.
We propose to transform a large model into its mixture-of-experts (MoE) version with equal model size, namely MoEfication.
arXiv Detail & Related papers (2021-10-05T02:14:38Z)
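To make MoEfication concrete, the sketch below splits a feed-forward layer's hidden units into contiguous expert groups and, per input, keeps only the top-k groups chosen by a small router. The real method clusters co-activated neurons and trains a dedicated expert-selection network, and a practical implementation would compute only the selected slices instead of masking the full activation, so this is a simplifying illustration.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEfiedFFN(nn.Module):
    """Split an FFN's hidden units into `num_experts` groups and, for each input,
    keep only the `top_k` groups scored highest by a small router."""

    def __init__(self, ffn_in: nn.Linear, ffn_out: nn.Linear,
                 num_experts: int = 8, top_k: int = 2):
        super().__init__()
        hidden = ffn_in.out_features
        assert hidden % num_experts == 0
        self.chunk = hidden // num_experts
        self.top_k = top_k
        self.ffn_in, self.ffn_out = ffn_in, ffn_out
        # Simple learned router scoring each expert group (a stand-in for
        # MoEfication's trained expert-selection network).
        self.router = nn.Linear(ffn_in.in_features, num_experts)

    def forward(self, x):                        # x: (batch, d_model)
        scores = self.router(x)                  # (batch, num_experts)
        top = scores.topk(self.top_k, dim=-1).indices
        h = F.relu(self.ffn_in(x))               # (batch, hidden)
        # Zero out hidden units of experts that were not selected (masking here
        # only illustrates the routing; it does not save compute by itself).
        mask = torch.zeros_like(scores)
        mask.scatter_(1, top, 1.0)
        mask = mask.repeat_interleave(self.chunk, dim=1)  # (batch, hidden)
        return self.ffn_out(h * mask)

ffn = MoEfiedFFN(nn.Linear(768, 3072), nn.Linear(3072, 768))
y = ffn(torch.randn(4, 768))
```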
- Post-mortem on a deep learning contest: a Simpson's paradox and the complementary roles of scale metrics versus shape metrics [61.49826776409194]
We analyze a corpus of models made publicly available for a contest to predict the generalization accuracy of neural network (NN) models.
We identify what amounts to a Simpson's paradox, where "scale" metrics perform well overall but poorly on subpartitions of the data.
We present two novel shape metrics, one data-independent, and the other data-dependent, which can predict trends in the test accuracy of a series of NNs.
arXiv Detail & Related papers (2021-06-01T19:19:49Z)
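As a hedged illustration of the scale-versus-shape distinction: a scale metric summarizes the magnitude of a weight matrix (e.g., its log Frobenius norm), while a shape metric summarizes the form of its singular-value spectrum (here, a crude power-law tail exponent via a Hill-style estimator). This is generic spectral analysis, not the paper's specific metrics.
```python
import torch

def scale_metric(w: torch.Tensor) -> float:
    """Scale metric: log squared Frobenius norm of the weight matrix."""
    return torch.log(torch.linalg.norm(w) ** 2).item()

def shape_metric(w: torch.Tensor, tail: int = 50) -> float:
    """Shape metric: Hill-style estimate of the power-law exponent of the
    eigenvalue spectrum of w^T w (smaller alpha = heavier tail)."""
    eigs = torch.linalg.svdvals(w) ** 2            # eigenvalues of w^T w
    top = eigs.sort(descending=True).values[:tail]
    return 1.0 + tail / torch.log(top / top[-1]).sum().item()

w = torch.randn(512, 512)
print(scale_metric(w), shape_metric(w))
```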
- Misspecification-robust likelihood-free inference in high dimensions [13.934999364767918]
We introduce an extension of the popular Bayesian optimisation-based approach to approximate discrepancy functions in a probabilistic manner.
Our approach achieves computational scalability for higher dimensional parameter spaces by using separate acquisition functions and discrepancies for each parameter.
The method successfully performs computationally efficient inference in a 100-dimensional space on canonical examples and compares favourably to existing modularised ABC methods.
arXiv Detail & Related papers (2020-02-21T16:06:11Z)
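For readers unfamiliar with likelihood-free inference, the toy sketch below runs plain rejection ABC with a separate discrepancy (and acceptance threshold) per parameter dimension, which is the intuition behind scaling to high-dimensional parameter spaces. The paper's actual method builds Bayesian-optimisation surrogates with per-parameter acquisition functions, which this toy example does not implement; the simulator and names are made up.
```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta):
    """Toy simulator: each observed summary depends mainly on one parameter."""
    return theta + rng.normal(scale=0.5, size=theta.shape)

observed = simulate(np.array([1.0, -2.0, 0.5]))

def per_parameter_abc(observed, n_draws=20000, quantile=0.01):
    """Rejection ABC with one discrepancy (and one acceptance threshold) per
    parameter, instead of a single global distance over all dimensions."""
    thetas = rng.uniform(-5, 5, size=(n_draws, observed.size))
    sims = np.array([simulate(t) for t in thetas])
    discrepancies = np.abs(sims - observed)     # (n_draws, dim), one per parameter
    marginals = []
    for j in range(observed.size):
        eps = np.quantile(discrepancies[:, j], quantile)
        marginals.append(thetas[discrepancies[:, j] <= eps, j])
    return marginals  # accepted samples, one marginal posterior per parameter

for j, samples in enumerate(per_parameter_abc(observed)):
    print(f"theta_{j}: posterior mean = {samples.mean():.2f}")
```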