On Optimal Caching and Model Multiplexing for Large Model Inference
- URL: http://arxiv.org/abs/2306.02003v2
- Date: Tue, 29 Aug 2023 00:59:44 GMT
- Title: On Optimal Caching and Model Multiplexing for Large Model Inference
- Authors: Banghua Zhu, Ying Sheng, Lianmin Zheng, Clark Barrett, Michael I.
Jordan, Jiantao Jiao
- Abstract summary: Large Language Models (LLMs) and other large foundation models have achieved noteworthy success, but their size exacerbates existing resource consumption and latency challenges.
We study two approaches for mitigating these challenges: employing a cache to store previous queries and learning a model multiplexer to choose from an ensemble of models for query processing.
- Score: 66.50550915522551
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Models (LLMs) and other large foundation models have achieved
noteworthy success, but their size exacerbates existing resource consumption
and latency challenges. In particular, the large-scale deployment of these
models is hindered by the significant resource requirements during inference.
In this paper, we study two approaches for mitigating these challenges:
employing a cache to store previous queries and learning a model multiplexer to
choose from an ensemble of models for query processing.
Theoretically, we provide an optimal algorithm for jointly optimizing both
approaches to reduce the inference cost in both offline and online tabular
settings. By combining a caching algorithm, namely Greedy Dual Size with
Frequency (GDSF) or Least Expected Cost (LEC), with a model multiplexer, we
achieve optimal rates in both offline and online settings. Empirically,
simulations show that the combination of our caching and model multiplexing
algorithms greatly improves over the baselines, with up to $50\times$
improvement over the baseline when the ratio between the maximum cost and
minimum cost is $100$. Experiments on real datasets show a $4.3\times$
improvement in FLOPs over the baseline when the ratio for FLOPs is $10$, and a
$1.8\times$ improvement in latency when the ratio for average latency is
$1.85$.
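As a rough illustration of how the two pieces interact, the sketch below pairs an LEC-style cache (evict the entry with the smallest frequency-times-recomputation-cost estimate) with a multiplexer that routes each cache miss to whichever model is predicted to be cheaper. All names (`small_model`, `large_model`, `multiplexer`) and the cost estimates are hypothetical placeholders, not the paper's implementation.

```python
class LECCache:
    """Least Expected Cost cache: evict the entry whose estimated
    frequency times recomputation cost is smallest (illustrative sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}  # query -> [response, recompute_cost, hit_count]

    def get(self, query):
        entry = self.store.get(query)
        if entry is None:
            return None
        entry[2] += 1  # update the frequency estimate
        return entry[0]

    def put(self, query, response, cost):
        if len(self.store) >= self.capacity:
            # evict the entry with the least expected cost of a future miss
            victim = min(self.store, key=lambda q: self.store[q][1] * self.store[q][2])
            del self.store[victim]
        self.store[query] = [response, cost, 1]


def serve(query, cache, small_model, large_model, multiplexer):
    """Answer a query: hit the cache first, otherwise route to the model the
    multiplexer predicts to be cheaper, and cache the result."""
    cached = cache.get(query)
    if cached is not None:
        return cached
    cost_small, cost_large = multiplexer(query)  # predicted per-query costs
    if cost_small <= cost_large:
        model, cost = small_model, cost_small
    else:
        model, cost = large_model, cost_large
    response = model(query)
    cache.put(query, response, cost)
    return response
```

Swapping in a GDSF-style cache would change only the eviction priority; the serving loop stays the same.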
Related papers
- Drama: Mamba-Enabled Model-Based Reinforcement Learning Is Sample and Parameter Efficient [9.519619751861333]
We propose a state space model (SSM) world model built on Mamba.
It achieves $O(n)$ memory and computational complexity while effectively capturing long-term dependencies.
This model is accessible and can be trained on an off-the-shelf laptop.
arXiv Detail & Related papers (2024-10-11T15:10:40Z)
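The $O(n)$ memory and compute claim above follows from the linear state recurrence that SSM-based models use in place of attention. The toy scan below is only meant to make that scaling visible; it is not the Drama or Mamba architecture, and the matrices `A`, `B`, `C` are illustrative placeholders.

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Toy linear state-space scan: one state update per time step, so both
    time and memory grow linearly in the sequence length n (illustrative only).

    Shapes assumed: x is (n, d_in), A is (d, d), B is (d, d_in), C is (d_out, d).
    """
    h = np.zeros(A.shape[0])
    outputs = []
    for x_t in x:               # n sequential steps -> O(n) compute
        h = A @ h + B @ x_t     # constant-size state update
        outputs.append(C @ h)   # readout at each step
    return np.stack(outputs)
```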
- Revisiting Cascaded Ensembles for Efficient Inference [32.914852531806]
A common approach to make machine learning inference more efficient is to use example-specific adaptive schemes.
In this work we study a simple scheme for adaptive inference.
We build a cascade of ensembles (CoE), beginning with resource-efficient models and growing to larger, more expressive models.
arXiv Detail & Related papers (2024-07-02T15:14:12Z)
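The cascade-of-ensembles idea above can be sketched in a few lines: run stages from cheapest to most expressive and stop as soon as the current stage is confident. The max-probability confidence rule and the `stages`/`thresholds` arguments are illustrative assumptions, not the paper's configuration.

```python
def cascade_predict(x, stages, thresholds):
    """Run stages from cheapest to most expressive and stop as soon as the
    current stage is confident enough (example-specific adaptive inference).

    Each stage is assumed to return a list of class probabilities; the
    max-probability confidence rule is an illustrative choice.
    """
    probs = None
    for stage, tau in zip(stages, thresholds):
        probs = stage(x)
        if max(probs) >= tau:   # easy examples exit early on cheap stages
            break
    return probs
```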
- TrimCaching: Parameter-sharing AI Model Caching in Wireless Edge Networks [36.39118138582416]
Next-generation mobile networks are expected to facilitate fast AI model downloading to end users.
By caching models on edge servers, mobile networks can deliver models to end users with low latency.
We develop a novel model placement scheme called parameter-sharing model caching (TrimCaching).
arXiv Detail & Related papers (2024-05-07T04:08:49Z)
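The TrimCaching entry above hinges on the observation that many downstream models share parameter blocks (for example, a common backbone), so an edge cache only needs to store each shared block once. The bookkeeping below is a hypothetical sketch of that accounting, not the paper's placement algorithm; `SharedBlockCache` and its methods are invented names.

```python
class SharedBlockCache:
    """Toy edge cache where a model is a list of parameter-block ids and
    blocks shared between models are stored (and charged against capacity)
    only once. Illustrative bookkeeping only."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.block_refs = {}   # block_id -> number of cached models using it
        self.models = {}       # model_id -> list of block_ids

    def can_place(self, block_ids):
        new_blocks = {b for b in block_ids if b not in self.block_refs}
        return len(self.block_refs) + len(new_blocks) <= self.capacity

    def place(self, model_id, block_ids):
        if model_id in self.models or not self.can_place(block_ids):
            return False
        for b in block_ids:
            self.block_refs[b] = self.block_refs.get(b, 0) + 1
        self.models[model_id] = list(block_ids)
        return True

    def evict(self, model_id):
        for b in self.models.pop(model_id, []):
            self.block_refs[b] -= 1
            if self.block_refs[b] == 0:
                del self.block_refs[b]
```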
- HiRE: High Recall Approximate Top-$k$ Estimation for Efficient LLM Inference [68.59839755875252]
HiRE comprises two novel components: (i) a compression scheme to cheaply predict top-$k$ rows/columns with high recall, followed by full computation restricted to the predicted subset, and (ii) DA-TOP-$k$: an efficient multi-device approximate top-$k$ operator.
We demonstrate that on a one-billion-parameter model, HiRE applied to both the softmax and the feedforward layers achieves nearly matching pretraining and downstream accuracy, and speeds up inference latency by $1.47\times$ on a single TPUv5e device.
arXiv Detail & Related papers (2024-02-14T18:04:36Z)
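Component (i) of HiRE above is a two-stage top-$k$: shortlist likely top rows with a cheap approximate pass, then compute exact scores only on the shortlist. The sketch below is a hypothetical illustration of that pattern with a low-precision weight copy standing in for the compression scheme; `approx_topk_matvec` and its parameters are not from the paper.

```python
import numpy as np

def approx_topk_matvec(W, W_low, x, k, overshoot=4):
    """Two-stage top-k: a cheap low-precision pass shortlists likely top-k
    rows with high recall, and the exact scores are computed only for that
    shortlist (illustrative sketch)."""
    m = min(overshoot * k, W.shape[0])           # shortlist size
    approx = W_low @ x                           # cheap approximate scores
    shortlist = np.argpartition(approx, -m)[-m:]
    exact = W[shortlist] @ x                     # full precision only on the subset
    top_local = np.argpartition(exact, -k)[-k:]
    return shortlist[top_local], exact[top_local]


# Example usage with a float16 copy standing in for the compressed weights:
# W = np.random.randn(4096, 512).astype(np.float32)
# rows, scores = approx_topk_matvec(W, W.astype(np.float16), np.random.randn(512), k=16)
```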
- TFMQ-DM: Temporal Feature Maintenance Quantization for Diffusion Models [52.454274602380124]
Diffusion models heavily depend on the time-step $t$ to achieve satisfactory multi-round denoising.
We propose a Temporal Feature Maintenance Quantization (TFMQ) framework building upon a Temporal Information Block.
Powered by the pioneering block design, we devise temporal information aware reconstruction (TIAR) and finite set calibration (FSC) to align the full-precision temporal features.
arXiv Detail & Related papers (2023-11-27T12:59:52Z)
- SqueezeLLM: Dense-and-Sparse Quantization [80.32162537942138]
For single-batch generative inference with LLMs, the main bottleneck is memory bandwidth rather than compute.
We introduce SqueezeLLM, a post-training quantization framework that enables lossless compression to ultra-low precisions of up to 3-bit.
Our framework incorporates two novel ideas: (i) sensitivity-based non-uniform quantization, which searches for the optimal bit precision assignment based on second-order information; and (ii) the Dense-and-Sparse decomposition that stores outliers and sensitive weight values in an efficient sparse format.
arXiv Detail & Related papers (2023-06-13T08:57:54Z)
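The Dense-and-Sparse decomposition above can be illustrated with a small sketch: pull a tiny fraction of outlier weights into a full-precision sparse part and quantize the dense remainder to a few bits. The magnitude-based outlier rule and uniform quantizer below are simplifications standing in for the paper's sensitivity-based non-uniform scheme; all names and defaults are illustrative.

```python
import numpy as np

def dense_and_sparse_decompose(W, outlier_frac=0.005, n_bits=3):
    """Split W into a low-bit dense part plus a tiny full-precision sparse part
    holding the outlier weights (simplified: magnitude-based outliers and
    uniform quantization rather than the paper's sensitivity-based scheme)."""
    thresh = np.quantile(np.abs(W), 1.0 - outlier_frac)
    outlier_mask = np.abs(W) > thresh
    sparse_idx = np.nonzero(outlier_mask)            # outlier positions, sparse format
    sparse_val = W[outlier_mask].astype(np.float32)  # kept at full precision
    dense = np.where(outlier_mask, 0.0, W)
    scale = np.abs(dense).max() / (2 ** (n_bits - 1) - 1)
    q = np.round(dense / scale).astype(np.int8)      # low-bit dense remainder
    return q, scale, sparse_idx, sparse_val          # W ~ q * scale + scatter(sparse)
```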
- Exploring Sparse Expert Models and Beyond [51.90860155810848]
Mixture-of-Experts (MoE) models can achieve promising results with an outrageously large number of parameters while keeping computation cost constant.
We propose a simple method called expert prototyping that splits experts into different prototypes and applies $k$ top-$1$ routing.
This strategy improves model quality while maintaining constant computational cost, and our further exploration of extremely large-scale models shows that it is more effective for training larger models.
arXiv Detail & Related papers (2021-05-31T16:12:44Z)
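The $k$ top-$1$ routing described above splits the experts into $k$ prototypes and picks one expert inside each, so per-token compute stays at $k$ expert calls no matter how many experts exist in total. The sketch below is a rough illustration with combination weights omitted; the gating and expert modules are placeholders, not the paper's code.

```python
import numpy as np

def k_top1_route(x, gate_logits, experts, k):
    """Expert prototyping, roughly: partition the experts into k prototypes,
    apply top-1 routing inside each prototype, and sum the chosen outputs
    (combination weights omitted). Only k experts run per token."""
    group = len(experts) // k
    out = 0.0
    for i in range(k):
        logits = gate_logits[i * group:(i + 1) * group]
        j = int(np.argmax(logits))                # top-1 within this prototype
        out = out + experts[i * group + j](x)     # one expert call per prototype
    return out
```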
- Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal Sample Complexity [67.02490430380415]
We show that model-based MARL achieves a sample complexity of $\tilde{O}(|S||B|(1-\gamma)^{-3}\epsilon^{-2})$ for finding the Nash equilibrium (NE) value up to some $\epsilon$ error.
We also show that such a sample bound is minimax-optimal (up to logarithmic factors) if the algorithm is reward-agnostic, where the algorithm queries state transition samples without reward knowledge.
arXiv Detail & Related papers (2020-07-15T03:25:24Z)
- Multi-Purchase Behavior: Modeling, Estimation and Optimization [0.9337154228221861]
We present a parsimonious multi-purchase family of choice models called the Bundle-MVL-K family.
We develop a binary search based iterative strategy that efficiently computes optimized recommendations for this model.
arXiv Detail & Related papers (2020-06-14T23:47:14Z)