Hyperparameter Optimization for Large Language Model Instruction-Tuning
- URL: http://arxiv.org/abs/2312.00949v2
- Date: Tue, 30 Jan 2024 21:32:31 GMT
- Title: Hyperparameter Optimization for Large Language Model Instruction-Tuning
- Authors: Christophe Tribes, Sacha Benarroch-Lelong, Peng Lu, Ivan Kobyzev
- Abstract summary: We treat the whole pipeline of fine-tuning and validating a pre-trained LLM as a blackbox.
We efficiently explore the space of hyperparameters with the NOMAD algorithm, achieving a boost in performance and human alignment of the tuned model.
- Score: 6.743825167463901
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The fine-tuning of Large Language Models (LLMs) has recently enabled them to achieve milestones in natural language processing applications. The emergence of ever larger LLMs has driven the development of more efficient fine-tuning methods. Among these, the Low-Rank Adaptation (LoRA) method keeps most of the weights of the pre-trained LLM frozen while introducing a low-rank decomposition of the weight update, enabling the tuning of only a very small proportion of the network. The performance on downstream tasks of models fine-tuned with LoRA heavily relies on a set of hyperparameters, including the rank of the decomposition. In this work, we investigate the choice of these hyperparameters through two main blackbox optimization (BBO) techniques. We treat the whole pipeline of performing fine-tuning and validation on a pre-trained LLM as a blackbox and efficiently explore the space of hyperparameters with the NOMAD algorithm, achieving a boost in performance and human alignment of the tuned model.
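To make the setup concrete, here is a minimal sketch, in Python, of the two ingredients the abstract describes: a LoRA-style low-rank update governed by hyperparameters such as the rank, and a blackbox objective wrapping fine-tuning plus validation that a derivative-free optimizer can search. The paper uses the NOMAD algorithm; a naive random search stands in for it below, and all names, search ranges, and the surrogate objective are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def lora_update(W0, r, alpha):
    """LoRA-style adaptation: W = W0 + (alpha / r) * B @ A, where only
    the small factors A (r x n) and B (m x r) would be trained."""
    m, n = W0.shape
    A = rng.normal(0.0, 0.01, size=(r, n))  # trainable, small random init
    B = np.zeros((m, r))                    # trainable, zero init at start
    return W0 + (alpha / r) * B @ A

def objective(hparams):
    """Blackbox: fine-tune with the given hyperparameters and return a
    validation score. A quadratic surrogate stands in for the real run."""
    r, alpha, log_lr = hparams
    return (r - 8) ** 2 + (alpha - 16) ** 2 + (log_lr + 4) ** 2

print("adapted shape:", lora_update(rng.normal(size=(16, 16)), 8, 16).shape)

# Naive random search over the hyperparameter box; the paper instead
# drives this kind of objective with NOMAD (mesh adaptive direct search).
best, best_val = None, np.inf
for _ in range(200):
    cand = (int(rng.integers(1, 65)),     # LoRA rank r
            float(rng.uniform(1, 64)),    # scaling alpha
            float(rng.uniform(-6, -2)))   # log10 learning rate
    val = objective(cand)
    if val < best_val:
        best, best_val = cand, val
print("best hyperparameters:", best, "surrogate score:", round(best_val, 3))
```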
Related papers
- RoSTE: An Efficient Quantization-Aware Supervised Fine-Tuning Approach for Large Language Models [95.32315448601241]
We propose an algorithm named Rotated Straight-Through-Estimator (RoSTE).
RoSTE combines quantization-aware supervised fine-tuning (QA-SFT) with an adaptive rotation strategy to reduce activation outliers.
Our findings reveal that the prediction error is directly proportional to the quantization error of the converged weights, which can be effectively managed through an optimized rotation configuration.
arXiv Detail & Related papers (2025-02-13T06:44:33Z)
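The core mechanism here, quantization-aware training with a straight-through estimator, is easy to see in isolation. Below is a minimal PyTorch sketch of a uniform quantizer trained through an STE: quantized values are used in the forward pass while gradients flow as if the quantizer were the identity. It omits RoSTE's supervised fine-tuning loop and adaptive rotation entirely; the quantizer and bit-width are illustrative.

```python
import torch

def quantize(x, num_bits=4):
    """Uniform symmetric quantizer: snap x onto a num_bits grid."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = x.abs().max() / qmax
    return torch.round(x / scale).clamp(-qmax, qmax) * scale

def ste_quantize(x, num_bits=4):
    """Straight-through estimator: quantized forward pass, identity
    gradient in the backward pass (the detach() hides the rounding)."""
    return x + (quantize(x, num_bits) - x).detach()

w = torch.randn(8, 8, requires_grad=True)
loss = (ste_quantize(w) ** 2).sum()
loss.backward()
# Gradient is 2 * quantize(w): the rounding step was treated as identity.
print(torch.allclose(w.grad, 2 * quantize(w).detach()))
```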
- ULPT: Prompt Tuning with Ultra-Low-Dimensional Optimization [26.16200284965289]
Large language models achieve state-of-the-art performance but are costly to fine-tune due to their size.
We propose Ultra-Low-dimensional Prompt Tuning (ULPT), which optimizes prompts in a low-dimensional space (e.g., 2D) and uses a random but frozen matrix for the up-projection.
Our theoretical analysis shows that random projections can capture high-rank structures effectively, and experimental results demonstrate ULPT's competitive performance over existing parameter-efficient methods.
arXiv Detail & Related papers (2025-02-06T21:00:29Z)
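As a shape-level illustration of the idea, the sketch below optimizes only a tiny vector z and expands it into full prompt embeddings through a frozen random projection. Dimensions and the projection scaling are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

d_low, n_tokens, d_model = 2, 8, 768   # ultra-low dim, prompt length, embed dim

# Frozen random up-projection: fixed at initialization, never trained.
P = rng.normal(size=(d_low, n_tokens * d_model)) / np.sqrt(d_low)

z = rng.normal(size=d_low)                   # the only trainable parameters
prompt = (z @ P).reshape(n_tokens, d_model)  # soft prompt for the frozen LLM

print("prompt shape:", prompt.shape)
print("trainable params:", z.size, "vs naive prompt tuning:", n_tokens * d_model)
```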
- Less is More: Extreme Gradient Boost Rank-1 Adaption for Efficient Finetuning of LLMs [75.11449420928139]
Fine-tuning Large Language Models (LLMs) has become a crucial technique for adapting pre-trained models to downstream tasks.
Low-Rank Adaptation (LoRA) has emerged as a promising solution, but there remains a gap between the practical performance of low-rank adaptations and their theoretical optimum.
We propose eXtreme Gradient Boosting LoRA, a novel framework that bridges this gap by leveraging the power of ensemble learning.
arXiv Detail & Related papers (2024-10-25T17:07:13Z)
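The gradient-boosting intuition, a sum of rank-1 "weak learners" each fitted to the current residual, can be seen in a pure linear-algebra toy. In the sketch below each round takes the leading singular component of the residual; in the actual method each weak learner would be a rank-1 LoRA trained by gradient descent, so everything here is an illustrative analogy rather than the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.normal(size=(16, 16))   # stand-in for the ideal weight update

# Boosting with rank-1 components: each round adds the best rank-1
# correction (leading singular triplet) to fit the remaining residual.
approx = np.zeros_like(target)
for step in range(8):
    residual = target - approx
    U, S, Vt = np.linalg.svd(residual)
    approx += S[0] * np.outer(U[:, 0], Vt[0])   # rank-1 weak learner
    print(f"round {step}: residual norm {np.linalg.norm(target - approx):.3f}")
```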
- LoRTA: Low Rank Tensor Adaptation of Large Language Models [70.32218116940393]
Low-Rank Adaptation (LoRA) is a popular Parameter-Efficient Fine-Tuning (PEFT) method.
We propose a higher-order Candecomp/Parafac (CP) decomposition, enabling a more compact and flexible representation.
Our method can achieve a reduction in the number of parameters while maintaining comparable performance.
arXiv Detail & Related papers (2024-10-05T06:59:50Z)
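To see why a higher-order CP decomposition is more compact, the sketch below parameterizes a 3-way update tensor (layers × output dim × input dim) as a sum of r outer products and compares parameter counts against per-layer LoRA at the same rank. Shapes and the choice of tensor modes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_layers, d_out, d_in, r = 4, 32, 32, 2

# CP factors: the full update tensor is a sum of r outer products
# of one column from each factor matrix.
A = rng.normal(size=(n_layers, r))
B = rng.normal(size=(d_out, r))
C = rng.normal(size=(d_in, r))

delta_W = np.einsum('lr,or,ir->loi', A, B, C)   # reconstruct all updates

cp_params = A.size + B.size + C.size
lora_params = n_layers * r * (d_out + d_in)     # per-layer LoRA, same rank
print("update tensor:", delta_W.shape)
print("CP params:", cp_params, "vs per-layer LoRA params:", lora_params)
```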
- Search for Efficient Large Language Models [52.98684997131108]
Large Language Models (LLMs) have long held sway in the realms of artificial intelligence research.
Weight pruning, quantization, and distillation have been embraced to compress LLMs, targeting memory reduction and inference acceleration.
Most model compression techniques concentrate on weight optimization, overlooking the exploration of optimal architectures.
arXiv Detail & Related papers (2024-09-25T21:32:12Z)
- Bypass Back-propagation: Optimization-based Structural Pruning for Large Language Models via Policy Gradient [57.9629676017527]
We propose an optimization-based structural pruning method for Large Language Models.
We learn the pruning masks in a probabilistic space directly by optimizing the loss of the pruned model.
Our method runs in about 2.7 hours with around 35 GB of memory for 13B models on a single A100 GPU.
arXiv Detail & Related papers (2024-06-15T09:31:03Z)
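Learning discrete pruning masks without back-propagating through the model comes down to a policy-gradient (REINFORCE) estimator over Bernoulli masks. The toy sketch below optimizes keep-probabilities against a stand-in loss that can only be evaluated, never differentiated; the loss, sizes, and learning rate are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def pruned_loss(mask):
    """Stand-in for evaluating the pruned model on calibration data."""
    useful = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=float)
    return np.sum((mask - useful) ** 2)

theta = np.zeros(8)        # logits of per-unit keep-probabilities
lr, baseline = 0.5, 0.0
for step in range(500):
    p = 1.0 / (1.0 + np.exp(-theta))
    mask = (rng.random(8) < p).astype(float)   # sample a binary mask
    loss = pruned_loss(mask)
    # REINFORCE: grad of the log-prob of the sampled mask w.r.t. theta
    # is (mask - p); a running baseline reduces estimator variance.
    theta -= lr * (loss - baseline) * (mask - p)
    baseline += 0.1 * (loss - baseline)
print("keep-probabilities:", np.round(1.0 / (1.0 + np.exp(-theta)), 2))
```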
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning [143.23123791557245]
Fine-tuning large pre-trained language models on downstream tasks has become an important paradigm in NLP.
We propose AdaLoRA, which adaptively allocates the parameter budget among weight matrices according to their importance score.
We conduct extensive experiments with several pre-trained models on natural language processing, question answering, and natural language generation to validate the effectiveness of AdaLoRA.
arXiv Detail & Related papers (2023-03-18T22:36:25Z)
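The budget-allocation idea can be sketched with AdaLoRA's SVD-style parameterization: each update is P · diag(λ) · Q, and the least important singular triplets are zeroed out until a global rank budget is met. In this toy version importance is just |λ|, whereas the paper uses a sensitivity-based score, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_matrices, r_init, d = 3, 8, 16

# SVD-style adapters: delta_W_i = P[i] @ diag(lam[i]) @ Q[i].
P = [rng.normal(size=(d, r_init)) for _ in range(n_matrices)]
Q = [rng.normal(size=(r_init, d)) for _ in range(n_matrices)]
lam = [rng.normal(size=r_init) for _ in range(n_matrices)]

budget = 12                                   # total triplets to keep
scores = np.sort(np.concatenate([np.abs(l) for l in lam]))[::-1]
cutoff = scores[budget - 1]                   # global importance threshold
for i in range(n_matrices):
    lam[i] = np.where(np.abs(lam[i]) >= cutoff, lam[i], 0.0)
    print(f"matrix {i}: effective rank {np.count_nonzero(lam[i])}")
```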
- Multi-armed bandits for resource efficient, online optimization of language model pre-training: the use case of dynamic masking [7.3618738570222915]
We evaluate a framework for resource-efficient pre-training of Transformer-based language models (TLMs).
We propose a multi-armed bandit framework for the sequential selection of TLM pre-training hyperparameters.
Gaussian Process Thompson sampling (GP-TS) provides an interactive framework for efficient and optimized TLM pre-training.
arXiv Detail & Related papers (2022-03-24T16:12:21Z)
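For intuition on GP-TS, the sketch below runs Thompson sampling with a Gaussian-process surrogate over one hyperparameter (a dynamic-masking rate): each round fits the GP to past (arm, score) pairs, draws one posterior sample over the candidate arms, and plays its argmax. The proxy objective, candidate grid, and horizon are illustrative assumptions, and scikit-learn's GaussianProcessRegressor stands in for the paper's exact surrogate.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def pretrain_proxy(masking_rate):
    """Stand-in for one pre-training run: a noisy validation score
    as a function of the dynamic-masking rate."""
    return -(masking_rate - 0.15) ** 2 + 0.01 * rng.normal()

candidates = np.linspace(0.05, 0.5, 40).reshape(-1, 1)
X, y = [], []
for step in range(20):
    if len(X) < 3:                       # random warm-up pulls
        x_next = candidates[rng.integers(len(candidates))]
    else:
        gp = GaussianProcessRegressor().fit(np.array(X), np.array(y))
        draw = gp.sample_y(candidates, random_state=int(rng.integers(10**6)))
        x_next = candidates[int(np.argmax(draw))]   # Thompson sampling step
    X.append(x_next)
    y.append(pretrain_proxy(float(x_next[0])))
print("best masking rate found:", float(X[int(np.argmax(y))][0]))
```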