Hyperparameter Optimization for Large Language Model Instruction-Tuning
- URL: http://arxiv.org/abs/2312.00949v2
- Date: Tue, 30 Jan 2024 21:32:31 GMT
- Title: Hyperparameter Optimization for Large Language Model Instruction-Tuning
- Authors: Christophe Tribes, Sacha Benarroch-Lelong, Peng Lu, Ivan Kobyzev
- Abstract summary: We study the whole pipeline of performing fine-tuning and validation on a pre-trained LLM as a blackbox.
We efficiently explore the space of hyperparameters with the NOMAD algorithm, achieving a boost in performance and human alignment of the tuned model.
- Score: 6.743825167463901
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The fine-tuning of Large Language Models (LLMs) has enabled them to recently achieve milestones in natural language processing applications. The emergence of ever larger LLMs has paved the way for more efficient fine-tuning methods. Among these, the Low-Rank Adaptation (LoRA) method keeps most of the weights of the pre-trained LLM frozen while introducing a low-rank decomposition of the weight matrix, enabling the tuning of only a very small proportion of the network. The performance on downstream tasks of models fine-tuned with LoRA heavily relies on a set of hyperparameters, including the rank of the decomposition. In this work, we investigate the choice of these hyperparameters through two main blackbox optimization (BBO) techniques. We examine the whole pipeline of performing fine-tuning and validation on a pre-trained LLM as a blackbox and efficiently explore the space of hyperparameters with the NOMAD algorithm, achieving a boost in performance and human alignment of the tuned model.
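To make the setup concrete, here is a minimal sketch, assuming a hypothetical `finetune_and_validate` pipeline: the whole fine-tune-then-evaluate loop is treated as a blackbox function of the LoRA hyperparameters, and a simple random search stands in for the mesh adaptive direct search performed by NOMAD.

```python
import random

# Hypothetical stand-in for one fine-tuning + validation run; in the paper
# this whole pipeline (LoRA fine-tuning, then evaluation) is the blackbox.
def finetune_and_validate(rank, alpha, dropout, lr):
    # ... run LoRA fine-tuning with these hyperparameters and return a
    # validation score to maximize (placeholder objective below) ...
    return -((rank - 16) ** 2) * 1e-3 - (lr - 3e-4) ** 2

search_space = {
    "rank": [4, 8, 16, 32, 64],      # rank of the low-rank decomposition
    "alpha": [8, 16, 32, 64],        # LoRA scaling factor
    "dropout": [0.0, 0.05, 0.1],
    "lr": [1e-5, 1e-4, 3e-4, 1e-3],
}

# Simple random search as a stand-in for NOMAD's mesh adaptive direct
# search, which explores the space far more efficiently in practice.
best = None
for _ in range(20):
    cfg = {k: random.choice(v) for k, v in search_space.items()}
    score = finetune_and_validate(**cfg)
    if best is None or score > best[0]:
        best = (score, cfg)
print("best config:", best[1], "score:", best[0])
```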
Related papers
- Less is More: Extreme Gradient Boost Rank-1 Adaption for Efficient Finetuning of LLMs [75.11449420928139]
Fine-tuning Large Language Models (LLMs) has become a crucial technique for adapting pre-trained models to downstream tasks.
Low-Rank Adaptation (LoRA) has emerged as a promising solution, but a gap remains between the practical performance of low-rank adaptation and its theoretical optimum.
We propose eXtreme Gradient Boosting LoRA, a novel framework that bridges this gap by leveraging the power of ensemble learning.
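As a hedged illustration of the boosting intuition (not the authors' exact algorithm), the sketch below greedily fits rank-1 updates to the residual of a target weight update, showing how an ensemble of cheap rank-1 adapters can approach a richer update.

```python
import numpy as np

# Each boosting round fits a rank-1 update to the residual left by previous
# rounds; the best rank-1 fit is the leading singular pair of the residual.
rng = np.random.default_rng(0)
delta = rng.normal(size=(64, 64))   # stand-in for an ideal weight update

approx = np.zeros_like(delta)
for t in range(8):                  # 8 boosting rounds
    residual = delta - approx
    u, s, vt = np.linalg.svd(residual, full_matrices=False)
    approx += s[0] * np.outer(u[:, 0], vt[0])   # best rank-1 fit to residual
    err = np.linalg.norm(delta - approx) / np.linalg.norm(delta)
    print(f"round {t}: relative error {err:.3f}")
```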
arXiv Detail & Related papers (2024-10-25T17:07:13Z)
- Zeroth-Order Fine-Tuning of LLMs in Random Subspaces [66.27334633749734]
As language models grow in size, memory demands for backpropagation increase.
Zeroth-order (ZO) optimization methods offer a memory-efficient alternative, estimating gradients from forward passes alone.
We show that SubZero enhances fine-tuning and achieves faster convergence than standard ZO approaches.
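A minimal sketch of the zeroth-order idea, with the random-subspace twist loosely following the abstract (toy loss and illustrative dimensions; not the SubZero algorithm itself):

```python
import numpy as np

# Zeroth-order (ZO) step: perturb the parameters, compare losses from two
# forward passes, and update without any backpropagation. Drawing the
# perturbation from a random low-dimensional subspace is a hedged nod to
# the random-subspace idea; ZO needs small steps and many iterations.
rng = np.random.default_rng(0)

def loss(w):                 # toy loss standing in for an LLM forward pass
    return float(np.sum((w - 1.0) ** 2))

d, k, mu, lr = 100, 16, 1e-3, 5e-3
w = np.zeros(d)
for step in range(500):
    P = rng.normal(size=(d, k)) / np.sqrt(k)  # random subspace basis
    z = rng.normal(size=k)
    u = P @ z                                 # perturbation in the subspace
    g = (loss(w + mu * u) - loss(w - mu * u)) / (2 * mu)  # directional slope
    w -= lr * g * u                           # ZO-SGD update
print("final loss:", loss(w))
```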
arXiv Detail & Related papers (2024-10-11T17:01:43Z)
- LoRTA: Low Rank Tensor Adaptation of Large Language Models [70.32218116940393]
Low Rank Adaptation (LoRA) is a popular Parameter-Efficient Fine-Tuning (PEFT) method that effectively adapts large pre-trained models for downstream tasks.
We propose a novel approach that employs a low rank tensor parametrization for model updates.
Our method is both efficient and effective for fine-tuning large language models, achieving a substantial reduction in the number of parameters while maintaining comparable performance.
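A hedged sketch of the difference, assuming a CP-style factorization shared across layers (the exact tensor structure in LoRTA may differ): instead of independent per-layer matrix factors as in LoRA, all layers' updates are one factored 3-way tensor.

```python
import numpy as np

# Low-rank *tensor* parametrization: stack all layers' weight updates into
# one 3-way tensor and share a CP factorization across layers, cutting the
# parameter count further. Shapes and names are illustrative.
L, m, n, r = 24, 512, 512, 4        # layers, weight dims, CP rank
rng = np.random.default_rng(0)
A = rng.normal(size=(m, r)) * 0.02  # shared row factors
B = rng.normal(size=(n, r)) * 0.02  # shared column factors
C = rng.normal(size=(L, r)) * 0.02  # per-layer mixing coefficients

# Update tensor: delta[l, i, j] = sum_k C[l, k] * A[i, k] * B[j, k]
delta = np.einsum("lk,ik,jk->lij", C, A, B)

lora_params = L * r * (m + n)       # independent LoRA factors per layer
lorta_params = r * (m + n + L)      # shared CP factors
print(delta.shape, lora_params, lorta_params)
```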
arXiv Detail & Related papers (2024-10-05T06:59:50Z)
- Search for Efficient Large Language Models [52.98684997131108]
Large Language Models (LLMs) have long held sway in the realms of artificial intelligence research.
Weight pruning, quantization, and distillation have been embraced to compress LLMs, targeting memory reduction and inference acceleration.
Most model compression techniques concentrate on weight optimization, overlooking the exploration of optimal architectures.
arXiv Detail & Related papers (2024-09-25T21:32:12Z)
- Bypass Back-propagation: Optimization-based Structural Pruning for Large Language Models via Policy Gradient [57.9629676017527]
We propose an optimization-based structural pruning method for Large Language Models.
We learn the pruning masks in a probabilistic space directly by optimizing the loss of the pruned model.
Our method runs in 2.7 hours and uses around 35GB of memory to prune the 13B models on a single A100 GPU.
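A toy sketch of the mechanism, assuming a REINFORCE-style estimator and an illustrative loss (not the authors' objective): masks are sampled from Bernoulli distributions, the pruned model is scored with forward passes only, and the policy gradient updates the mask probabilities, so no backpropagation through the model is needed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
importance = np.linspace(1.0, 0.0, n)  # weight 0 matters most, weight 49 least

def pruned_loss(mask, sparsity_weight=0.3):
    # Cost of dropping important weights, plus a penalty for keeping weights.
    return float(np.sum(importance * (1 - mask)) + sparsity_weight * mask.sum())

logits = np.zeros(n)                   # Bernoulli keep-probability logits
lr, baseline = 0.2, None
for step in range(1000):
    p = 1 / (1 + np.exp(-logits))
    mask = (rng.random(n) < p).astype(float)
    loss = pruned_loss(mask)           # forward evaluation only
    baseline = loss if baseline is None else 0.9 * baseline + 0.1 * loss
    # REINFORCE: grad of log p(mask) for a Bernoulli is (mask - p)
    logits -= lr * (loss - baseline) * (mask - p)
print("keep probs:", np.round(p[:5], 2), "...", np.round(p[-5:], 2))
```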
arXiv Detail & Related papers (2024-06-15T09:31:03Z)
- LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models [20.5908375260123]
Various parameter-efficient fine-tuning (PEFT) techniques have been proposed to enable computationally efficient fine-tuning while maintaining model performance.
We present LoRETTA, a framework that significantly reduces trainable parameters through tensor-train decomposition.
LoRETTA achieves comparable or better performance than most widely used PEFT methods with up to $100\times$ fewer parameters on the LLaMA-2-7B models.
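A minimal sketch of a tensor-train (TT) parametrization of a weight update, with illustrative shapes and TT ranks (not the ones used by LoRETTA): the matrix is reshaped into a higher-order tensor and represented by small cores, collapsing the trainable parameter count.

```python
import numpy as np

# Two TT cores jointly represent a 4096x4096 weight update.
m1, m2, n1, n2, tt_rank = 64, 64, 64, 64, 4
rng = np.random.default_rng(0)
core1 = rng.normal(size=(1, m1, n1, tt_rank)) * 0.02  # TT core, mode 1
core2 = rng.normal(size=(tt_rank, m2, n2, 1)) * 0.02  # TT core, mode 2

# Contract cores: delta[(i1,i2),(j1,j2)] = sum_r core1[0,i1,j1,r]*core2[r,i2,j2,0]
delta = np.einsum("aijr,rklb->ikjl", core1, core2).reshape(m1 * m2, n1 * n2)

full_params = (m1 * m2) * (n1 * n2)   # ~16.8M for the dense update
tt_params = core1.size + core2.size   # a few tens of thousands
print(delta.shape, full_params, tt_params)
```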
arXiv Detail & Related papers (2024-02-18T01:20:00Z)
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning [143.23123791557245]
Fine-tuning large pre-trained language models on downstream tasks has become an important paradigm in NLP.
We propose AdaLoRA, which adaptively allocates the parameter budget among weight matrices according to their importance score.
We conduct extensive experiments with several pre-trained models on natural language processing, question answering, and natural language generation to validate the effectiveness of AdaLoRA.
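A hedged sketch of the allocation idea, with made-up importance scores and a simple proportional rule (AdaLoRA's actual scoring and pruning schedule are more involved and derived from training itself):

```python
import numpy as np

# Distribute a fixed total rank budget so important matrices get higher ranks.
importance = {"q_proj": 0.9, "k_proj": 0.3, "v_proj": 0.8, "o_proj": 0.4}
total_rank_budget = 32

weights = np.array(list(importance.values()))
shares = weights / weights.sum()
ranks = np.maximum(1, np.round(shares * total_rank_budget).astype(int))
allocation = dict(zip(importance.keys(), ranks))
print(allocation)  # e.g. {'q_proj': 12, 'k_proj': 4, 'v_proj': 11, 'o_proj': 5}
```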
arXiv Detail & Related papers (2023-03-18T22:36:25Z)
- Multi-armed bandits for resource efficient, online optimization of language model pre-training: the use case of dynamic masking [7.3618738570222915]
We evaluate a framework for resource-efficient pre-training of Transformer-based language models (TLMs).
We propose a multi-armed bandit framework for the sequential selection of TLM pre-training hyperparameters.
GP-TS (Gaussian process Thompson sampling) provides an interactive framework for efficient and optimized TLM pre-training.
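A minimal sketch of the Thompson-sampling loop with discrete arms; GP-TS itself places a Gaussian process over continuous hyperparameters, so this is only the skeleton of the select-train-observe cycle. The masking rates and scores are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
masking_rates = [0.10, 0.15, 0.20, 0.25]
true_reward = {0.10: 0.62, 0.15: 0.70, 0.20: 0.66, 0.25: 0.60}  # pretend scores

counts = np.zeros(4)
means = np.zeros(4)
for step in range(100):
    # Sample one plausible reward per arm from its posterior; pick the best.
    sigma = 1.0 / np.sqrt(counts + 1.0)
    arm = int(np.argmax(rng.normal(means, sigma)))
    # "Train for an interval" at this masking rate and observe a noisy score.
    reward = true_reward[masking_rates[arm]] + rng.normal(0, 0.05)
    counts[arm] += 1
    means[arm] += (reward - means[arm]) / counts[arm]
print("pulls per masking rate:", dict(zip(masking_rates, counts.astype(int))))
```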
arXiv Detail & Related papers (2022-03-24T16:12:21Z)