Pre-training Small Base LMs with Fewer Tokens
- URL: http://arxiv.org/abs/2404.08634v1
- Date: Fri, 12 Apr 2024 17:53:34 GMT
- Title: Pre-training Small Base LMs with Fewer Tokens
- Authors: Sunny Sanyal, Sujay Sanghavi, Alexandros G. Dimakis
- Abstract summary: We study the effectiveness of a simple approach to develop a small base language model (LM) starting from an existing large base LM.
We call our simple recipe Inheritune and first demonstrate it for building a small base LM with 1.5B parameters using 1B tokens.
We show that smaller LMs trained using some of the layers of GPT-2-medium (355M) and GPT-2-large (770M) can effectively match the validation loss of their bigger counterparts trained from scratch.
- Score: 63.81067268919042
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the effectiveness of a simple approach to develop a small base language model (LM) starting from an existing large base LM: first inherit a few transformer blocks from the larger LM, and then train this smaller model on a very small subset (0.1%) of the raw pretraining data of the larger model. We call our simple recipe Inheritune and first demonstrate it for building a small base LM with 1.5B parameters using 1B tokens (and the first few layers of a larger LM with 3B parameters); we do this using a single A6000 GPU for less than half a day. Across 9 diverse evaluation datasets as well as the MMLU benchmark, the resulting model compares favorably to publicly available base models of 1B-2B size, some of which have been trained using 50-1000 times more tokens. We investigate Inheritune in a slightly different setting where we train small LMs using larger LMs and their full pre-training dataset. Here we show that smaller LMs trained using some of the layers of GPT-2-medium (355M) and GPT-2-large (770M) can effectively match the validation loss of their bigger counterparts when trained from scratch for the same number of training steps on the OpenWebText dataset with 9B tokens. We analyze our recipe with extensive experiments and demonstrate its efficacy in diverse settings. Our code is available at https://github.com/sanyalsunny111/LLM-Inheritune.
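For intuition, here is a minimal sketch of the layer-inheritance step, assuming a GPT-2-style parent model from Hugging Face transformers; the parent model, the number of inherited blocks k, and the subsequent training details are illustrative assumptions rather than the authors' released implementation (see the repository above for that).

```python
# Hedged sketch of Inheritune-style layer inheritance: build a shallower child
# LM that reuses the embeddings and the first k transformer blocks of a larger
# pre-trained parent, then continue standard causal-LM pre-training of the
# child on a ~0.1% subset of the parent's pre-training data.
from transformers import GPT2Config, GPT2LMHeadModel

def inherit_small_lm(parent_name: str = "gpt2-large", k: int = 12) -> GPT2LMHeadModel:
    parent = GPT2LMHeadModel.from_pretrained(parent_name)

    # Child config: same width, heads, and vocabulary, but only k layers deep.
    cfg = GPT2Config(
        vocab_size=parent.config.vocab_size,
        n_positions=parent.config.n_positions,
        n_embd=parent.config.n_embd,
        n_head=parent.config.n_head,
        n_layer=k,
    )
    child = GPT2LMHeadModel(cfg)

    # Copy token/position embeddings and the final layer norm from the parent.
    child.transformer.wte.load_state_dict(parent.transformer.wte.state_dict())
    child.transformer.wpe.load_state_dict(parent.transformer.wpe.state_dict())
    child.transformer.ln_f.load_state_dict(parent.transformer.ln_f.state_dict())

    # Inherit the first k transformer blocks verbatim.
    for i in range(k):
        child.transformer.h[i].load_state_dict(parent.transformer.h[i].state_dict())

    return child  # the child is then further pre-trained on the small data subset

small_lm = inherit_small_lm()
```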
Related papers
- FBI-LLM: Scaling Up Fully Binarized LLMs from Scratch via Autoregressive Distillation [32.01836613286288]
This work presents a Fully BInarized Large Language Model (FBI-LLM).
It demonstrates for the first time how to train a large-scale binary language model from scratch.
arXiv Detail & Related papers (2024-07-09T17:59:48Z) - BEAR: A Unified Framework for Evaluating Relational Knowledge in Causal and Masked Language Models [2.2863439039616127]
Probing assesses the degree to which a language model (LM) has successfully learned relational knowledge during pre-training.
Previous approaches rely on the objective function used in pre-training LMs.
We propose an approach that uses an LM's inherent ability to estimate the log-likelihood of any given textual statement.
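A minimal sketch of this kind of log-likelihood scoring with an off-the-shelf causal LM is shown below; the model choice and the plain sum of token log-probabilities are illustrative assumptions, not the paper's exact scoring function.

```python
# Score a textual statement by its total log-likelihood under a causal LM and
# use it to rank candidate completions of a relational fact.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def statement_log_likelihood(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    # With labels=ids the model returns the mean cross-entropy over predicted
    # tokens; multiply back by their count to get a total log-probability.
    loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)

candidates = ["Paris is the capital of France.", "Berlin is the capital of France."]
print(max(candidates, key=statement_log_likelihood))
```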
arXiv Detail & Related papers (2024-04-05T14:13:55Z) - Can Small Language Models Help Large Language Models Reason Better?: LM-Guided Chain-of-Thought [51.240387516059535]
We introduce a novel framework, LM-Guided CoT, that leverages a lightweight (i.e., 1B) language model (LM) for guiding a black-box large (i.e., >10B) LM in reasoning tasks.
We optimize the model through 1) knowledge distillation and 2) reinforcement learning from rationale-oriented and task-oriented reward signals.
arXiv Detail & Related papers (2024-04-04T12:46:37Z) - Simple and Scalable Strategies to Continually Pre-train Large Language Models [20.643648785602462]
Large language models (LLMs) are routinely pre-trained on billions of tokens, only to start the process over again once new data becomes available.
We show that a simple and scalable combination of learning rate re-warming, LR re-decaying, and replay of previous data is sufficient to match the performance of fully re-training from scratch.
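The sketch below illustrates one way to implement this combination; the warmup length, peak learning rate, and replay fraction are arbitrary illustrative values, not the paper's settings.

```python
# Re-warm the learning rate linearly, re-decay it with a cosine schedule, and
# replay a small fraction of the previous data while training on the new data.
import math
import random

def rewarmed_cosine_lr(step, total_steps, peak_lr=3e-4, min_lr=3e-5, warmup_steps=1000):
    if step < warmup_steps:  # learning rate re-warming
        return peak_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))  # re-decaying

def sample_batch(new_batches, old_batches, replay_fraction=0.05):
    # Replay: occasionally draw a batch from the previously seen dataset.
    source = old_batches if random.random() < replay_fraction else new_batches
    return random.choice(source)
```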
arXiv Detail & Related papers (2024-03-13T17:58:57Z) - Ensemble-Instruct: Generating Instruction-Tuning Data with a Heterogeneous Mixture of LMs [23.38507910115345]
In-context learning (ICL) techniques can train strong conversational agents with only a small amount of human supervision.
Here we explore the application of such techniques to language models that are much smaller (around 10B--40B parameters) and have permissive licenses.
We find the Self-Instruct approach to be less effective at these sizes and propose new ICL methods that draw on two main ideas.
arXiv Detail & Related papers (2023-10-21T10:21:17Z) - CombLM: Adapting Black-Box Language Models through Small Fine-Tuned Models [43.28607973774104]
Methods for adapting language models (LMs) to new tasks and domains have traditionally assumed white-box access to the model.
We present a lightweight method for adapting large LMs to new domains and tasks, assuming no access to their weights or intermediate activations.
Our approach fine-tunes a small white-box LM and combines it with the large black-box LM at the probability level through a small network.
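A rough sketch of this probability-level combination is given below; the gating network and its inputs are illustrative assumptions (the sketch also assumes the black-box LM exposes next-token probabilities), not the paper's exact architecture.

```python
# Mix the next-token distributions of a small fine-tuned white-box LM and a
# large black-box LM with a learned, input-dependent weight.
import torch
import torch.nn as nn

class ProbCombiner(nn.Module):
    def __init__(self, vocab_size: int, hidden: int = 64):
        super().__init__()
        # Tiny network mapping both distributions to a mixing weight in (0, 1).
        self.gate = nn.Sequential(
            nn.Linear(2 * vocab_size, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),
        )

    def forward(self, p_small: torch.Tensor, p_large: torch.Tensor) -> torch.Tensor:
        # p_small, p_large: (batch, vocab_size) next-token probabilities.
        alpha = self.gate(torch.cat([p_small, p_large], dim=-1))  # (batch, 1)
        return alpha * p_small + (1 - alpha) * p_large
```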
arXiv Detail & Related papers (2023-05-23T06:32:55Z) - Branch-Train-Merge: Embarrassingly Parallel Training of Expert Language Models [106.65127123304842]
Branch-Train-Merge (BTM) is an efficient algorithm for parallel training of large language models (LLMs).
BTM learns a set of independent expert LMs (ELMs), each specialized to a different textual domain.
Experiments show that BTM improves in- and out-of-domain perplexities as compared to GPT-style Transformer LMs.
arXiv Detail & Related papers (2022-08-05T17:46:38Z) - CPM-2: Large-scale Cost-effective Pre-trained Language Models [71.59893315671997]
We present a suite of cost-effective techniques for using pre-trained language models (PLMs) that address the efficiency issues of pre-training, fine-tuning, and inference.
We introduce knowledge inheritance to accelerate the pre-training process by exploiting existing PLMs instead of training models from scratch.
We implement a new inference toolkit, namely InfMoE, for using large-scale PLMs with limited computational resources.
arXiv Detail & Related papers (2021-06-20T15:43:54Z) - Neural Semi-supervised Learning for Text Classification Under Large-Scale Pretraining [51.19885385587916]
We conduct studies on semi-supervised learning in the task of text classification under the context of large-scale LM pretraining.
Our work marks an initial step in understanding the behavior of semi-supervised learning models under the context of large-scale pretraining.
arXiv Detail & Related papers (2020-11-17T13:39:05Z) - ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators [108.3381301768299]
Masked language modeling (MLM) pre-training methods such as BERT corrupt the input by replacing some tokens with [MASK] and then train a model to reconstruct the original tokens.
We propose a more sample-efficient pre-training task called replaced token detection.
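The snippet below sketches the replaced-token-detection objective; the sampling scheme and loss are simplified for illustration and are not the official ELECTRA training code.

```python
# A small generator proposes tokens at masked positions; the discriminator is
# trained to classify every position as original (0) or replaced (1).
import torch
import torch.nn.functional as F

def corrupt_inputs(input_ids, generator_logits, masked_positions):
    """Sample generator predictions at masked positions and build the
    discriminator's inputs and per-token labels."""
    sampled = torch.distributions.Categorical(logits=generator_logits).sample()
    corrupted = input_ids.clone()
    corrupted[masked_positions] = sampled[masked_positions]
    labels = (corrupted != input_ids).float()  # 1 = replaced, 0 = original
    return corrupted, labels

def discriminator_loss(disc_logits, labels):
    # Binary classification over all token positions.
    return F.binary_cross_entropy_with_logits(disc_logits, labels)
```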
arXiv Detail & Related papers (2020-03-23T21:17:42Z)