Rethinking Learning Rate Tuning in the Era of Large Language Models
- URL: http://arxiv.org/abs/2309.08859v1
- Date: Sat, 16 Sep 2023 03:37:00 GMT
- Authors: Hongpeng Jin, Wenqi Wei, Xuyu Wang, Wenbin Zhang, Yanzhao Wu
- Abstract summary: Large Language Models (LLMs) represent the recent success of deep learning in achieving remarkable human-like predictive performance.
It has become a mainstream strategy to leverage fine-tuning to adapt LLMs for various real-world applications.
Existing learning rate policies are primarily designed for training traditional deep neural networks (DNNs) and may not work well for LLM fine-tuning.
- Score: 11.87985768634266
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Models (LLMs) represent the recent success of deep learning in
achieving remarkable human-like predictive performance. It has become a
mainstream strategy to leverage fine-tuning to adapt LLMs for various
real-world applications due to the prohibitive expenses associated with LLM
training. The learning rate is one of the most important hyperparameters in LLM
fine-tuning with direct impacts on both fine-tuning efficiency and fine-tuned
LLM quality. Existing learning rate policies are primarily designed for
training traditional deep neural networks (DNNs), which may not work well for
LLM fine-tuning. We reassess the research challenges and opportunities of
learning rate tuning in the coming era of Large Language Models. This paper
makes three original contributions. First, we revisit existing learning rate
policies to analyze the critical challenges of learning rate tuning in the era
of LLMs. Second, we present LRBench++ to benchmark learning rate policies and
facilitate learning rate tuning for both traditional DNNs and LLMs. Third, our
experimental analysis with LRBench++ demonstrates the key differences between
LLM fine-tuning and traditional DNN training and validates our analysis.
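To make the schedule difference concrete, below is a minimal sketch contrasting a step-decay policy typical of traditional DNN training with the warmup-plus-cosine schedule widely used for LLM fine-tuning. The function names and constants are illustrative assumptions, not LRBench++ APIs.

```python
import math

def step_decay(step, base_lr=0.1, decay=0.1, every=30):
    """Classic DNN-style policy: multiply the LR by a fixed
    factor every `every` steps (or epochs)."""
    return base_lr * (decay ** (step // every))

def warmup_cosine(step, total_steps, peak_lr=2e-5, warmup_steps=100):
    """Schedule common in LLM fine-tuning: linear warmup to a
    small peak LR, then cosine decay toward zero."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * peak_lr * (1.0 + math.cos(math.pi * progress))

# Note the scale gap: LLM fine-tuning typically peaks around 1e-5 to 5e-5,
# orders of magnitude below classic CNN training rates such as 0.1.
for s in (0, 100, 500, 1000):
    print(s, step_decay(s), warmup_cosine(s, total_steps=1000))
```

The scale gap in the comment is one intuition for why policies designed for DNN training may transfer poorly to LLM fine-tuning.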
Related papers
- Dynamic Uncertainty Ranking: Enhancing In-Context Learning for Long-Tail Knowledge in LLMs [50.29035873837]
Large language models (LLMs) can learn vast amounts of knowledge from diverse domains during pre-training.
Long-tail knowledge from specialized domains is often scarce and underrepresented, and is rarely memorized by the models.
We propose a reinforcement learning-based dynamic uncertainty ranking method for ICL that accounts for the varying impact of each retrieved sample on LLM predictions.
arXiv Detail & Related papers (2024-10-31T03:42:17Z)
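The paper's RL machinery is beyond a short sketch, but the underlying idea of weighting each retrieved demonstration by its measured effect on the LLM's uncertainty can be illustrated with a toy, non-RL version. Everything here, including the `llm_nll` scorer, is an assumed placeholder rather than the authors' method.

```python
import random

def llm_nll(prompt: str, answer: str) -> float:
    """Stand-in for a real scorer returning the LLM's negative
    log-likelihood of `answer` given `prompt` (random here, for
    illustration only)."""
    rng = random.Random(hash((prompt, answer)))
    return rng.uniform(0.5, 5.0)

def rank_by_impact(query: str, answer: str, retrieved: list[str]) -> list[str]:
    """Order retrieved demonstrations by how much each one lowers
    the model's uncertainty (NLL) about a gold answer."""
    base = llm_nll(query, answer)
    impact = {d: base - llm_nll(d + "\n" + query, answer) for d in retrieved}
    return sorted(retrieved, key=impact.get, reverse=True)

demos = ["Q: rare term A? A: ...", "Q: common term B? A: ..."]
print(rank_by_impact("Q: rare term A?", "...", demos))
```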
- Achieving Peak Performance for Large Language Models: A Systematic Review [0.0]
Large language models (LLMs) have achieved remarkable success in natural language processing (NLP).
As models grow into the trillion-parameter range, computational and memory costs increase significantly.
This makes it difficult for many researchers to access the resources needed to train or apply these models.
arXiv Detail & Related papers (2024-09-07T13:57:41Z)
- Leveraging Large Language Models for Wireless Symbol Detection via In-Context Learning [29.28683810366379]
We propose to leverage the in-context learning ability (a.k.a. prompting) of large language models (LLMs) to solve wireless tasks in the low-data regime without any training or fine-tuning.
Our results reveal that using LLMs via ICL methods generally outperforms traditional DNNs on the symbol demodulation task.
arXiv Detail & Related papers (2024-08-28T17:19:20Z)
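A hedged sketch of what prompting-based symbol demodulation might look like for a BPSK-style task; the prompt template and example values are invented for illustration, not taken from the paper.

```python
# Toy sketch: pair a few noisy received I/Q values with their transmitted
# bits, then ask the LLM to label a new observation -- no training or
# fine-tuning involved.
def demodulation_prompt(examples, received):
    lines = ["Map each received value to the transmitted bit."]
    for (i_part, q_part), bit in examples:
        lines.append(f"received: ({i_part:+.2f}, {q_part:+.2f}) -> bit: {bit}")
    lines.append(f"received: ({received[0]:+.2f}, {received[1]:+.2f}) -> bit:")
    return "\n".join(lines)

shots = [((0.97, 0.04), 1), ((-1.02, -0.11), 0), ((0.88, -0.07), 1)]
print(demodulation_prompt(shots, (-0.93, 0.02)))  # expected completion: 0
```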
- CoMMIT: Coordinated Instruction Tuning for Multimodal Large Language Models [68.64605538559312]
In this paper, we analyze multimodal LLM (MLLM) instruction tuning from both theoretical and empirical perspectives.
Inspired by our findings, we propose a measurement to quantitatively evaluate the learning balance.
In addition, we introduce an auxiliary loss regularization method to promote updating of the generation distribution of MLLMs.
arXiv Detail & Related papers (2024-07-29T23:18:55Z)
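The exact CoMMIT loss is not reproduced in this listing, so the sketch below shows one generic way an auxiliary regularizer can promote updates to the generation distribution: penalizing the model for staying too close to a frozen earlier snapshot of its own output distribution. The formulation and the `aux_weight` default are assumptions.

```python
import torch
import torch.nn.functional as F

def regularized_loss(logits, labels, prev_logits, aux_weight=0.1):
    """Hypothetical auxiliary regularizer (not the exact CoMMIT loss):
    the usual next-token cross-entropy, minus a weighted KL term that
    rewards the generation distribution for moving away from a frozen
    earlier snapshot, i.e. it discourages a stalled language head."""
    ce = F.cross_entropy(logits, labels)
    # KL(current || previous); subtracting it encourages distribution updates.
    kl = F.kl_div(F.log_softmax(prev_logits, dim=-1),
                  F.log_softmax(logits, dim=-1),
                  reduction="batchmean", log_target=True)
    return ce - aux_weight * kl

# Toy usage with random tensors standing in for model outputs.
logits = torch.randn(4, 32, requires_grad=True)
prev_logits = torch.randn(4, 32)
labels = torch.randint(0, 32, (4,))
regularized_loss(logits, labels, prev_logits).backward()
```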
- Towards Optimal Learning of Language Models [124.65669486710992]
We present a theory for the optimal learning of language models (LMs).
We derive a theorem, named Learning Law, to reveal the properties of the dynamics in the optimal learning process under our objective.
We empirically verify that the optimal learning of LMs essentially stems from the improvement of the coefficients in the scaling law of LMs.
arXiv Detail & Related papers (2024-02-27T18:52:19Z)
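The Learning Law theorem itself is not stated in this summary; for orientation, a commonly used data scaling-law form for LM loss, whose coefficients the optimal-learning view would improve, looks like the following (an assumed generic form, not the paper's result).

```latex
% Generic LM data scaling law (illustrative form, not the paper's theorem):
% loss falls as a power law in training data D toward an irreducible floor.
\mathcal{L}(D) = \left(\frac{D_c}{D}\right)^{\alpha} + L_0
% "Optimal learning" in this reading improves the coefficients
% (e.g., a larger exponent \alpha), steepening the curve itself.
```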
- Continual Learning for Large Language Models: A Survey [95.79977915131145]
Large language models (LLMs) are not amenable to frequent re-training due to the high training costs arising from their massive scale.
This paper surveys recent works on continual learning for LLMs.
arXiv Detail & Related papers (2024-02-02T12:34:09Z)
- Supervised Knowledge Makes Large Language Models Better In-context Learners [94.89301696512776]
Large Language Models (LLMs) exhibit emerging in-context learning abilities through prompt engineering.
The challenge of improving the generalizability and factuality of LLMs in natural language understanding and question answering remains under-explored.
We propose a framework that enhances the reliability of LLMs as it: 1) generalizes to out-of-distribution data, 2) elucidates how LLMs benefit from discriminative models, and 3) minimizes hallucinations in generative tasks.
arXiv Detail & Related papers (2023-12-26T07:24:46Z)
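One plausible reading of "how LLMs benefit from discriminative models" is to inject a small classifier's prediction and confidence into the prompt as supervised knowledge; the sketch below is a guess at that recipe, with all wording and names invented.

```python
# Illustrative sketch (names and phrasing are assumptions): feed a small
# discriminative model's prediction into the prompt so the LLM can ground
# its in-context answer in a supervised signal.
def prompt_with_supervision(question: str, clf_label: str, clf_conf: float) -> str:
    return (
        f"Question: {question}\n"
        f"A task-specific classifier predicts '{clf_label}' with "
        f"confidence {clf_conf:.2f}.\n"
        "Considering this signal, answer the question; if the evidence "
        "conflicts, say 'unsure' instead of guessing."
    )

print(prompt_with_supervision("Is this review positive?", "positive", 0.91))
```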
- On Learning to Summarize with Large Language Models as References [101.79795027550959]
Summaries generated by large language models (LLMs) are favored by human annotators over the original reference summaries in commonly used summarization datasets.
We study an LLM-as-reference learning setting for smaller text summarization models to investigate whether their performance can be substantially improved.
arXiv Detail & Related papers (2023-05-23T16:56:04Z)
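The LLM-as-reference setup can be sketched as a data pipeline: a large LLM writes the target summaries, and the smaller model is fine-tuned on those targets instead of the dataset's originals. The `toy_llm` stand-in below is illustrative only, not the paper's pipeline.

```python
def toy_llm(prompt: str) -> str:
    """Stand-in for a large LLM; a real setup would call an actual model."""
    article = prompt.split("\n", 1)[1]
    return article.split(".")[0].strip() + "."  # crude one-sentence "summary"

def build_llm_reference_dataset(articles: list[str]) -> list[tuple[str, str]]:
    """Swap the dataset's human references for LLM-written summaries; the
    smaller summarizer is then fine-tuned on these (article, target) pairs."""
    return [(a, toy_llm(f"Summarize the article below.\n{a}")) for a in articles]

pairs = build_llm_reference_dataset(
    ["LLM summaries are preferred by annotators. Details and results follow."]
)
print(pairs)
```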