Efficiency at Scale: Investigating the Performance of Diminutive
Language Models in Clinical Tasks
- URL: http://arxiv.org/abs/2402.10597v1
- Date: Fri, 16 Feb 2024 11:30:11 GMT
- Title: Efficiency at Scale: Investigating the Performance of Diminutive
Language Models in Clinical Tasks
- Authors: Niall Taylor, Upamanyu Ghose, Omid Rohanian, Mohammadmahdi Nouriborji,
Andrey Kormilitzin, David Clifton, Alejo Nevado-Holgado
- Abstract summary: We present an investigation into the suitability of different PEFT methods to clinical decision-making tasks.
Our analysis shows that the performance of most PEFT approaches varies significantly from one task to another.
The effectiveness of PEFT methods in the clinical domain is evident, particularly for specialised models which can operate on low-cost, in-house computing infrastructure.
- Score: 2.834743715323873
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The entry of large language models (LLMs) into research and commercial spaces
has led to a trend of ever-larger models, with initial promises of
generalisability, followed by a widespread desire to downsize and create
specialised models without the need for complete fine-tuning, using Parameter
Efficient Fine-tuning (PEFT) methods. We present an investigation into the
suitability of different PEFT methods to clinical decision-making tasks, across
a range of model sizes, including extremely small models with as few as 25
million parameters.
Our analysis shows that the performance of most PEFT approaches varies
significantly from one task to another, with the exception of LoRA, which
maintains relatively high performance across all model sizes and tasks,
typically approaching or matching full fine-tuned performance. The
effectiveness of PEFT methods in the clinical domain is evident, particularly
for specialised models which can operate on low-cost, in-house computing
infrastructure. The advantages of these models, in terms of speed and reduced
training costs, dramatically outweigh any performance gain from large
foundation LLMs. Furthermore, we highlight how domain-specific pre-training
interacts with PEFT methods and model size, and discuss how these factors
interplay to provide the best efficiency-performance trade-off. Full code
available at: tbd.
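Since the released code is still listed as "tbd", the snippet below is only a minimal sketch of the LoRA set-up the abstract describes, assuming the Hugging Face `transformers` and `peft` libraries: a small clinical encoder whose pretrained weights stay frozen while low-rank updates to the attention projections are trained for a classification task. The model checkpoint name and LoRA hyperparameters are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch only -- not the authors' released code (listed above as "tbd").
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

# Assumed stand-in for a small (tens of millions of parameters) clinical encoder.
model_name = "nlpie/tiny-clinicalbert"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# LoRA freezes the pretrained weights and trains low-rank update matrices,
# so only a small fraction of parameters receives gradients.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,                                # rank of the low-rank update (assumed)
    lora_alpha=16,                      # scaling factor (assumed)
    lora_dropout=0.1,
    target_modules=["query", "value"],  # attention projections; names depend on the architecture
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()      # reports trainable vs. total parameters
# The wrapped model can then be trained with a standard Trainer / training loop.
```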
Related papers
- Layer-wise Importance Matters: Less Memory for Better Performance in Parameter-efficient Fine-tuning of Large Language Models [19.163639128631534]
Importance-aware Sparse Tuning (IST) is a plug-and-play technique compatible with various PEFT methods that operate on a per-layer basis.
IST dynamically updates selected layers in PEFT modules, leading to reduced memory demands.
arXiv Detail & Related papers (2024-10-15T16:53:26Z)
- Revisiting SMoE Language Models by Evaluating Inefficiencies with Task Specific Expert Pruning [78.72226641279863]
Sparse Mixture of Experts (SMoE) models have emerged as a scalable alternative to dense models in language modeling.
Our research explores task-specific model pruning to inform decisions about designing SMoE architectures.
We introduce an adaptive task-aware pruning technique UNCURL to reduce the number of experts per MoE layer in an offline manner post-training.
arXiv Detail & Related papers (2024-09-02T22:35:03Z)
- See Further for Parameter Efficient Fine-tuning by Standing on the Shoulders of Decomposition [56.87609859444084]
Parameter-efficient fine-tuning (PEFT) focuses on optimizing a select subset of parameters while keeping the rest fixed, significantly lowering computational and storage overheads.
We take the first step to unify all approaches by dissecting them from a decomposition perspective.
We introduce two novel PEFT methods alongside a simple yet effective framework designed to enhance the performance of PEFT techniques across various applications.
arXiv Detail & Related papers (2024-07-07T15:44:42Z)
- The Role of Model Architecture and Scale in Predicting Molecular Properties: Insights from Fine-Tuning RoBERTa, BART, and LLaMA [0.0]
This study introduces a systematic framework to compare the efficacy of Large Language Models (LLMs) for fine-tuning across various cheminformatics tasks.
We assessed three well-known models-RoBERTa, BART, and LLaMA-on their ability to predict molecular properties.
We found that LLaMA-based models generally offered the lowest validation loss, suggesting their superior adaptability across tasks and scales.
arXiv Detail & Related papers (2024-05-02T02:20:12Z)
- Astraios: Parameter-Efficient Instruction Tuning Code Large Language Models [21.17021844323919]
We introduce Astraios, a suite of 28 instruction-tuned OctoCoder models using 7 tuning methods and 4 model sizes up to 16 billion parameters.
We find that FFT leads to the best downstream performance across all scales, and PEFT methods differ significantly in their efficacy based on the model scale.
arXiv Detail & Related papers (2024-01-01T15:30:19Z)
- Parameter-Efficient Fine-Tuning Methods for Pretrained Language Models: A Critical Review and Assessment [12.674032145667763]
We present a comprehensive and systematic review of Parameter-Efficient Fine-Tuning (PEFT) methods for pretrained language models (PLMs).
PEFT offers an effective solution by reducing the number of fine-tuning parameters and memory usage while achieving comparable performance to full fine-tuning.
We conduct experiments using several representative PEFT methods to better understand their effectiveness in parameter efficiency and memory efficiency (a generic sketch of this parameter-count measurement appears after this list).
arXiv Detail & Related papers (2023-12-19T13:31:24Z)
- When Parameter-efficient Tuning Meets General-purpose Vision-language Models [65.19127815275307]
PETAL revolutionizes the training process by requiring only 0.5% of the total parameters, achieved through a unique mode approximation technique.
Our experiments reveal that PETAL not only outperforms current state-of-the-art methods in most scenarios but also surpasses full fine-tuning models in effectiveness.
arXiv Detail & Related papers (2023-12-16T17:13:08Z)
- Retrieval-based Knowledge Transfer: An Effective Approach for Extreme Large Language Model Compression [64.07696663255155]
Large-scale pre-trained language models (LLMs) have demonstrated exceptional performance in various natural language processing (NLP) tasks.
However, the massive size of these models poses huge challenges for their deployment in real-world applications.
We introduce a novel compression paradigm called Retrieval-based Knowledge Transfer (RetriKT) which effectively transfers the knowledge of LLMs to extremely small-scale models.
arXiv Detail & Related papers (2023-10-24T07:58:20Z)
- Empirical Analysis of the Strengths and Weaknesses of PEFT Techniques for LLMs [1.867982979635437]
We provide a benchmark of various PEFT techniques and evaluate model performance across different data scales.
Contrary to popular belief, we empirically prove that PEFT techniques converge slower than full tuning in low data scenarios.
We further optimize these PEFT techniques by selectively choosing which parts of the model to train, and find that these techniques can be applied with significantly fewer parameters.
arXiv Detail & Related papers (2023-04-28T17:39:49Z)
- UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning [64.638804236566]
We propose a unified framework, UniPELT, which incorporates different PELT methods as submodules and learns to activate the ones that best suit the current data or task setup.
Remarkably, on the GLUE benchmark, UniPELT consistently achieves 1-3pt gains compared to the best individual PELT method that it incorporates and even outperforms fine-tuning under different setups.
arXiv Detail & Related papers (2021-10-14T17:40:08Z)
- MoEfication: Conditional Computation of Transformer Models for Efficient Inference [66.56994436947441]
Transformer-based pre-trained language models can achieve superior performance on most NLP tasks due to large parameter capacity, but also lead to huge computation cost.
We explore to accelerate large-model inference by conditional computation based on the sparse activation phenomenon.
We propose to transform a large model into its mixture-of-experts (MoE) version with equal model size, namely MoEfication.
arXiv Detail & Related papers (2021-10-05T02:14:38Z)
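Several entries above (the critical review, Astraios, and the empirical analysis of PEFT strengths and weaknesses) report parameter efficiency as the fraction of weights that actually receive gradient updates. The snippet below is a library-agnostic sketch of that measurement, assuming only PyTorch; the frozen backbone and bottleneck adapter are toy stand-ins, not the implementation from any listed paper.

```python
# Generic sketch of the parameter-efficiency measurement; toy modules only.
import torch.nn as nn


def trainable_fraction(model: nn.Module) -> float:
    """Fraction of parameters that would receive gradient updates."""
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    return trainable / total


# Toy frozen "backbone" standing in for a pretrained encoder block.
backbone = nn.Sequential(nn.Linear(768, 3072), nn.GELU(), nn.Linear(3072, 768))
for p in backbone.parameters():
    p.requires_grad = False  # PEFT methods keep the pretrained weights fixed

# Toy bottleneck adapter: the only part that is trained.
adapter = nn.Sequential(nn.Linear(768, 16), nn.ReLU(), nn.Linear(16, 768))

model = nn.ModuleDict({"backbone": backbone, "adapter": adapter})
print(f"trainable fraction: {trainable_fraction(model):.4%}")  # roughly 0.5% here
```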