Towards Optimizing the Costs of LLM Usage
- URL: http://arxiv.org/abs/2402.01742v1
- Date: Mon, 29 Jan 2024 16:36:31 GMT
- Title: Towards Optimizing the Costs of LLM Usage
- Authors: Shivanshu Shekhar, Tanishq Dubey, Koyel Mukherjee, Apoorv Saxena,
Atharv Tyagi, Nishanth Kotla
- Abstract summary: We study optimization problems trading off the quality and costs, both theoretically and empirically.
We propose several deterministic heuristics for reducing tokens in a quality-aware manner.
Our methods reduce costs by 40%-90% while improving quality by 4%-7%.
- Score: 4.032848774697859
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative AI and LLMs in particular are heavily used nowadays for various
document processing tasks such as question answering and summarization.
However, different LLMs come with different capabilities for different tasks as
well as with different costs, tokenization, and latency. In fact, enterprises
are already incurring huge costs of operating or using LLMs for their
respective use cases.
In this work, we propose optimizing the usage costs of LLMs by estimating
their output quality (without actually invoking the LLMs), and then solving an
optimization routine for the LLM selection to either keep costs under a budget,
or minimize the costs, in a quality- and latency-aware manner. We propose a
model to predict the output quality of LLMs on document processing tasks like
summarization, followed by an LP rounding algorithm to optimize the selection
of LLMs. We study optimization problems trading off the quality and costs, both
theoretically and empirically. We further propose a sentence simplification
model for reducing the number of tokens in a controlled manner. Additionally,
we propose several deterministic heuristics for reducing tokens in a quality-aware
manner, and study the related optimization problem of applying these heuristics to
optimize the quality-cost trade-off. We perform extensive
empirical validation of our methods on not only enterprise datasets but also on
open-source datasets, annotated by us, and show that we perform much better
than the closest baselines. Our methods reduce costs by 40%-90% while
improving quality by 4%-7%. We will release the annotated open-source datasets
to the community for further research and exploration.
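To make the selection step concrete, below is a minimal sketch of budget-constrained LLM selection via an LP relaxation followed by rounding. The inputs (a `quality` matrix of predicted output qualities, a `cost` matrix of per-call costs, and a `budget`) are hypothetical stand-ins for the paper's components, and the naive argmax rounding is only illustrative; the paper's rounding algorithm comes with guarantees this sketch does not attempt to reproduce.

```python
# Sketch: assign each document to one LLM to maximize total predicted quality
# under a total cost budget, via LP relaxation + naive rounding.
import numpy as np
from scipy.optimize import linprog

def select_llms(quality, cost, budget):
    """quality[i][j], cost[i][j]: predicted quality / cost of LLM j on doc i."""
    n_docs, n_llms = quality.shape
    c = -quality.ravel()  # linprog minimizes, so negate to maximize quality
    # Each document must be (fractionally) assigned exactly once.
    A_eq = np.zeros((n_docs, n_docs * n_llms))
    for i in range(n_docs):
        A_eq[i, i * n_llms:(i + 1) * n_llms] = 1.0
    # Total expected cost must stay under the budget.
    res = linprog(c, A_ub=cost.ravel()[None, :], b_ub=[budget],
                  A_eq=A_eq, b_eq=np.ones(n_docs),
                  bounds=(0.0, 1.0), method="highs")
    x = res.x.reshape(n_docs, n_llms)
    # Naive rounding: each document goes to its highest-weight LLM
    # (may slightly exceed the budget; a careful rounding scheme would not).
    return x.argmax(axis=1)

quality = np.array([[0.9, 0.7], [0.8, 0.6], [0.95, 0.5]])
cost = np.array([[10.0, 2.0], [12.0, 3.0], [9.0, 2.5]])
print(select_llms(quality, cost, budget=15.0))  # e.g. [1 1 0]
```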
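In the same spirit, a deterministic, quality-aware token-reduction heuristic can be sketched as follows: score each sentence with a crude informativeness proxy and drop the lowest-scoring sentences until the text fits a token target. The scoring rule below (vocabulary overlap with the rest of the document, with whitespace words as a token proxy) is an assumption for illustration, not one of the paper's heuristics.

```python
# Sketch: drop the least informative sentences until the text fits max_tokens.
import re

def reduce_tokens(text, max_tokens):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [set(s.lower().split()) for s in sentences]

    def score(i):  # informativeness proxy: overlap with the rest of the text
        rest = set().union(*(w for j, w in enumerate(words) if j != i))
        return len(words[i] & rest) / max(len(words[i]), 1)

    kept = set(range(len(sentences)))
    for i in sorted(kept, key=score):  # least informative first
        if sum(len(sentences[k].split()) for k in kept) <= max_tokens or len(kept) == 1:
            break
        kept.discard(i)
    return " ".join(sentences[i] for i in sorted(kept))  # original order
```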
Related papers
- Star-Agents: Automatic Data Optimization with LLM Agents for Instruction Tuning [71.2981957820888]
We propose a novel Star-Agents framework, which automates the enhancement of data quality across datasets.
The framework initially generates diverse instruction data with multiple LLM agents through a bespoke sampling method.
The generated data undergo a rigorous evaluation using a dual-model method that assesses both difficulty and quality.
arXiv Detail & Related papers (2024-11-21T02:30:53Z)
- A Little Help Goes a Long Way: Efficient LLM Training by Leveraging Small LMs [74.35290684163718]
A primary challenge in large language model (LLM) development is their onerous pre-training cost.
This paper explores a promising paradigm to improve LLM pre-training efficiency and quality by leveraging a small language model (SLM).
arXiv Detail & Related papers (2024-10-24T14:31:52Z)
- EVOLvE: Evaluating and Optimizing LLMs For Exploration [76.66831821738927]
Large language models (LLMs) remain under-studied in scenarios requiring optimal decision-making under uncertainty.
We measure LLMs' (in)ability to make optimal decisions in bandits, a state-less reinforcement learning setting relevant to many applications.
Motivated by the existence of optimal exploration algorithms, we propose efficient ways to integrate this algorithmic knowledge into LLMs.
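For context, the sketch below is textbook UCB1, the kind of optimal exploration algorithm whose knowledge the authors propose integrating into LLMs; it is standard bandit material, not the paper's method.

```python
# Textbook UCB1 for a stateless multi-armed bandit.
import math, random

def ucb1(pull, n_arms, horizon):
    counts, means = [0] * n_arms, [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # play every arm once first
        else:            # mean reward + exploration bonus
            arm = max(range(n_arms),
                      key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]))
        reward = pull(arm)
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]
    return means

# Example: three Bernoulli arms with hidden success rates.
probs = [0.2, 0.5, 0.8]
print(ucb1(lambda a: float(random.random() < probs[a]), n_arms=3, horizon=1000))
```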
arXiv Detail & Related papers (2024-10-08T17:54:03Z)
- Q*: Improving Multi-step Reasoning for LLMs with Deliberative Planning [53.6472920229013]
Large Language Models (LLMs) have demonstrated impressive capability in many natural language tasks.
LLMs are prone to producing errors, hallucinations, and inconsistent statements when performing multi-step reasoning.
We introduce Q*, a framework for guiding LLMs' decoding process with deliberative planning.
arXiv Detail & Related papers (2024-06-20T13:08:09Z)
- OptLLM: Optimal Assignment of Queries to Large Language Models [12.07164196530872]
We propose a framework for addressing the cost-effective query allocation problem for large language models (LLMs).
Our framework, named OptLLM, provides users with a range of optimal solutions to choose from, aligning with their budget constraints and performance preferences.
To evaluate the effectiveness of OptLLM, we conduct extensive experiments on various types of tasks, including text classification, question answering, sentiment analysis, reasoning, and log parsing.
arXiv Detail & Related papers (2024-05-24T01:05:37Z)
- Optimising Calls to Large Language Models with Uncertainty-Based Two-Tier Selection [80.63946798650653]
The decision centers on whether to use a large LLM with better performance or a smaller one with reduced costs.
We propose a simpler solution: use only the uncertainty of the small LLM's generations as the decision criterion.
Our experiments reveal this simple solution optimally balances cost and performance, outperforming existing methods on 25 out of 27 experimental setups.
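As a rough sketch of this two-tier idea, assuming the small model exposes per-token log-probabilities: answer with the small model and escalate only when its mean token log-probability falls below a threshold. `small_generate`, `large_generate`, and the threshold value are hypothetical placeholders, and mean log-probability is just one common uncertainty proxy.

```python
# Sketch: uncertainty-gated cascade from a small LLM to a large one.
def cascade(prompt, small_generate, large_generate, threshold=-1.0):
    # small_generate returns (answer_text, per_token_logprobs); hypothetical API.
    text, logprobs = small_generate(prompt)
    confidence = sum(logprobs) / max(len(logprobs), 1)  # mean token log-prob
    if confidence >= threshold:   # confident enough: keep the cheap answer
        return text, "small"
    return large_generate(prompt), "large"  # escalate on low confidence
```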
arXiv Detail & Related papers (2024-05-03T14:38:59Z)
- Assessing Economic Viability: A Comparative Analysis of Total Cost of Ownership for Domain-Adapted Large Language Models versus State-of-the-art Counterparts in Chip Design Coding Assistance [10.364901568556435]
This paper presents a comparative analysis of total cost of ownership (TCO) and performance between domain-adapted large language models (LLM) and state-of-the-art (SoTA) LLMs.
arXiv Detail & Related papers (2024-04-12T23:37:56Z)
- SMART: Automatically Scaling Down Language Models with Accuracy Guarantees for Reduced Processing Fees [21.801053526411415]
Large Language Models (LLMs) have significantly boosted performance in natural language processing (NLP) tasks.
The deployment of high-performance LLMs incurs substantial costs, primarily due to the increased number of parameters aimed at enhancing model performance.
We introduce SMART, a novel framework designed to minimize the inference costs of NLP tasks while ensuring sufficient result quality.
arXiv Detail & Related papers (2024-03-11T17:45:47Z)
- Small Models, Big Insights: Leveraging Slim Proxy Models To Decide When and What to Retrieve for LLMs [60.40396361115776]
This paper introduces a novel collaborative approach, namely SlimPLM, that detects missing knowledge in large language models (LLMs) with a slim proxy model.
We employ a proxy model with far fewer parameters, and take its answers as heuristic answers.
Heuristic answers are then utilized to predict the knowledge required to answer the user question, as well as the known and unknown knowledge within the LLM.
arXiv Detail & Related papers (2024-02-19T11:11:08Z)
- On Leveraging Large Language Models for Enhancing Entity Resolution: A Cost-efficient Approach [7.996010840316654]
We propose an uncertainty reduction framework using Large Language Models (LLMs) to improve entity resolution results.
LLMs capitalize on their advanced linguistic capabilities and a "pay-as-you-go" model that provides significant advantages to those without extensive data science expertise.
We show that our method is efficient and effective, offering promising applications in real-world tasks.
arXiv Detail & Related papers (2024-01-07T09:06:58Z)
- Impact of Large Language Models on Generating Software Specifications [14.88090169737112]
Large Language Models (LLMs) have been successfully applied to numerous software engineering tasks.
We evaluate the capabilities of LLMs for generating software specifications from software comments or documentation.
arXiv Detail & Related papers (2023-06-06T00:28:39Z)