QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models
- URL: http://arxiv.org/abs/2310.16795v1
- Date: Wed, 25 Oct 2023 17:24:53 GMT
- Title: QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models
- Authors: Elias Frantar and Dan Alistarh
- Abstract summary: Mixture-of-Experts (MoE) architectures offer a general solution to the high inference costs of large language models (LLMs) via sparse routing.
We present a solution to this memory problem, in the form of a new compression and execution framework called QMoE.
- Score: 64.34635279436054
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Mixture-of-Experts (MoE) architectures offer a general solution to the high
inference costs of large language models (LLMs) via sparse routing, bringing
faster and more accurate models, at the cost of massive parameter counts. For
example, the SwitchTransformer-c2048 model has 1.6 trillion parameters,
requiring 3.2TB of accelerator memory to run efficiently, which makes practical
deployment challenging and expensive. In this paper, we present a solution to
this memory problem, in the form of a new compression and execution framework
called QMoE. Specifically, QMoE consists of a scalable algorithm which
accurately compresses trillion-parameter MoEs to less than 1 bit per parameter,
in a custom format co-designed with bespoke GPU decoding kernels to facilitate
efficient end-to-end compressed inference, with minor runtime overheads
relative to uncompressed execution. Concretely, QMoE can compress the 1.6
trillion parameter SwitchTransformer-c2048 model to less than 160GB (20x
compression, 0.8 bits per parameter) at only minor accuracy loss, in less than
a day on a single GPU. This enables, for the first time, the execution of a
trillion-parameter model on affordable commodity hardware, like a single server
with 4x NVIDIA A6000 or 8x NVIDIA 3090 GPUs, at less than 5% runtime overhead
relative to ideal uncompressed inference. The source code and compressed models
are available at github.com/IST-DASLab/qmoe.
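To make the headline numbers concrete, here is a back-of-the-envelope sketch in Python (not code from the QMoE repository): it reproduces the memory arithmetic above and shows why an average rate below 1 bit per parameter is achievable in principle once weights are quantized to a small, highly skewed set of levels, so that an entropy-style encoding beats 1 bit per weight on average. The 90% zero fraction used below is an illustrative assumption, not a figure from the paper.

```python
import math

N_PARAMS = 1.6e12  # SwitchTransformer-c2048

# Memory arithmetic from the abstract.
bf16_bytes = N_PARAMS * 2.0             # 16 bits = 2 bytes per parameter
qmoe_bytes = N_PARAMS * 0.8 / 8.0       # 0.8 bits per parameter
print(f"bf16 checkpoint: {bf16_bytes / 1e12:.1f} TB")      # ~3.2 TB
print(f"0.8 bits/param:  {qmoe_bytes / 1e9:.0f} GB")       # ~160 GB
print(f"compression:     {bf16_bytes / qmoe_bytes:.0f}x")  # 20x

# Why sub-1-bit rates are possible: entropy of a skewed ternary distribution.
def ternary_entropy(p_zero: float) -> float:
    """Bits per weight for levels {-1, 0, +1}, nonzero levels equally likely."""
    p_nz = (1.0 - p_zero) / 2.0
    return -sum(p * math.log2(p) for p in (p_zero, p_nz, p_nz) if p > 0)

print(f"entropy at 90% zeros: {ternary_entropy(0.90):.2f} bits/weight")  # ~0.57
```

The part this sketch does not capture is making such an encoding decodable fast enough on GPUs, which is why the abstract emphasizes co-designing the format with bespoke decoding kernels.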
Related papers
- BitStack: Fine-Grained Size Control for Compressed Large Language Models in Variable Memory Environments [53.71158537264695]
Large language models (LLMs) have revolutionized numerous applications, yet their deployment remains challenged by memory constraints on local devices.
We introduce BitStack, a novel, training-free weight compression approach that enables megabyte-level trade-offs between memory usage and model performance.
arXiv Detail & Related papers (2024-10-31T13:26:11Z)
- MARLIN: Mixed-Precision Auto-Regressive Parallel Inference on Large Language Models [58.3342517278868]
This paper describes the design of Mixed-precision AutoRegressive LINear kernels.
It shows that batch sizes of up to 16-32 can be supported with close to the maximum (4x) quantization speedup.
MARLIN accomplishes this via a combination of techniques, such as asynchronous memory access, complex task scheduling, and pipelining.
arXiv Detail & Related papers (2024-08-21T16:10:41Z)
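To illustrate what a mixed-precision linear kernel of this kind computes (higher-precision activations against 4-bit weights), the NumPy sketch below simulates weight-only 4-bit quantization with per-column scales and runs the matrix product on dequantized weights. This is a hedged illustration of the operation only; MARLIN's contribution is fusing the dequantization into a fast GPU GEMM with the memory-access, scheduling, and pipelining techniques mentioned above, which a NumPy sketch cannot show.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, batch = 1024, 1024, 16

# float32 here for a simple reference; real kernels use fp16 activations.
W = rng.standard_normal((d_in, d_out)).astype(np.float32)
x = rng.standard_normal((batch, d_in)).astype(np.float32)

# Weight-only 4-bit quantization: one scale per output column, symmetric levels -7..7.
scale = np.abs(W).max(axis=0, keepdims=True) / 7.0
W_q = np.clip(np.round(W / scale), -7, 7).astype(np.int8)  # packed to 4 bits in practice

# A mixed-precision kernel computes this product, dequantizing weights on the fly
# inside the GEMM instead of materializing them in full precision first.
y_ref = x @ W
y_q = x @ (W_q.astype(np.float32) * scale)

rel_err = np.abs(y_q - y_ref).mean() / np.abs(y_ref).mean()
print(f"mean relative error with 4-bit weights: {rel_err:.3f}")
```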
- MoDeGPT: Modular Decomposition for Large Language Model Compression [59.361006801465344]
This paper introduces Modular Decomposition (MoDeGPT), a novel structured compression framework.
MoDeGPT partitions the Transformer block into modules comprised of matrix pairs and reduces the hidden dimensions.
Our experiments show MoDeGPT, without backward propagation, matches or surpasses previous structured compression methods.
arXiv Detail & Related papers (2024-08-19T01:30:14Z)
- Practical offloading for fine-tuning LLM on commodity GPU via learned subspace projectors [11.938205508966808]
Fine-tuning large language models (LLMs) requires significant memory, often exceeding the capacity of a single GPU.
We present an offloading framework, LSP_Offload, that enables near-native speed LLM fine-tuning on commodity hardware.
arXiv Detail & Related papers (2024-06-14T16:59:11Z)
- SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight Compression [76.73007709690306]
We introduce the Sparse-Quantized Representation (SpQR), a new compressed format and quantization technique.
SpQR achieves relative accuracy losses of less than 1% in perplexity for highly-accurate LLaMA and Falcon LLMs.
This makes it possible to run a 33B-parameter LLM on a single 24 GB consumer GPU without any performance degradation, at a 15% speedup.
arXiv Detail & Related papers (2023-06-05T17:53:28Z)
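The sparse-plus-quantized recipe described above can be illustrated in a few lines: keep a small fraction of large-magnitude outlier weights in full precision as a sparse matrix and quantize the dense remainder to a few bits. This is a hedged sketch of the general idea only, not SpQR's actual format, which adds groupwise scales and further refinements.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512)).astype(np.float32)

# 1. Isolate the ~1% largest-magnitude weights as "outliers" kept in full precision.
threshold = np.quantile(np.abs(W), 0.99)
outlier_mask = np.abs(W) >= threshold
W_outliers = np.where(outlier_mask, W, 0.0)          # stored as a sparse matrix in practice

# 2. Quantize the dense remainder to 3 bits (levels -3..3) with per-row scales.
W_rest = np.where(outlier_mask, 0.0, W)
scale = np.abs(W_rest).max(axis=1, keepdims=True) / 3.0
W_q = np.clip(np.round(W_rest / scale), -3, 3).astype(np.int8)

# 3. Reconstruction: dequantized dense part plus sparse outliers.
W_hat = W_q * scale + W_outliers
print(f"relative reconstruction error: {np.abs(W_hat - W).mean() / np.abs(W).mean():.3f}")
```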
- The case for 4-bit precision: k-bit Inference Scaling Laws [75.4335600212427]
Quantization methods reduce the number of bits required to represent each parameter in a model.
The final model size depends on both the number of parameters of the original model and the rate of compression.
We run more than 35,000 zero-shot experiments with 16-bit inputs and k-bit parameters to examine which quantization methods improve scaling for 3 to 8-bit precision.
arXiv Detail & Related papers (2022-12-19T18:48:33Z)
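The size relationship stated above is simply bytes = parameters x bits-per-parameter / 8. A quick worked example (the 175B parameter count is chosen for illustration):

```python
def model_size_gb(n_params: float, bits_per_param: float) -> float:
    """Checkpoint size implied by a uniform k-bit encoding (ignores scales and metadata)."""
    return n_params * bits_per_param / 8 / 1e9

for k in (16, 8, 4, 3):
    print(f"175B parameters at {k:2d} bits: {model_size_gb(175e9, k):6.1f} GB")
```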
- GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers [34.91478831993398]
GPTQ is a new one-shot weight quantization method based on approximate second-order information.
It can quantize GPT models with 175 billion parameters in approximately four GPU hours.
Our method more than doubles the compression gains relative to previously-proposed one-shot quantization methods.
arXiv Detail & Related papers (2022-10-31T13:42:40Z)
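To sketch the core idea of one-shot quantization with second-order error compensation, the snippet below quantizes a weight matrix column by column and spreads each column's rounding error over the not-yet-quantized columns using the Cholesky factor of the inverse Hessian built from calibration inputs. This follows the spirit of the GPTQ update but omits the blocking, grouping, and other engineering that lets the real method handle 175-billion-parameter models.

```python
import numpy as np

def gptq_like(W, X, bits=4, damp=0.01):
    """Simplified one-shot quantization with second-order error compensation.

    A sketch in the spirit of GPTQ: quantize W (d_out x d_in) column by column
    and spread each column's quantization error over the remaining columns via
    the Cholesky factor of H^-1, where H = X^T X comes from calibration inputs
    X (n x d_in). Omits blocking, grouping, and other details of real GPTQ.
    """
    d_out, d_in = W.shape
    W = W.astype(np.float64).copy()
    H = X.T @ X
    H += damp * np.mean(np.diag(H)) * np.eye(d_in)        # damping for stability
    U = np.linalg.cholesky(np.linalg.inv(H)).T             # upper-triangular factor
    maxq = 2 ** (bits - 1) - 1
    scale = np.abs(W).max(axis=1, keepdims=True) / maxq    # per-row symmetric scale
    Q = np.zeros_like(W)
    for j in range(d_in):
        Q[:, j] = np.clip(np.round(W[:, j] / scale[:, 0]), -maxq, maxq) * scale[:, 0]
        err = (W[:, j] - Q[:, j]) / U[j, j]
        W[:, j + 1:] -= np.outer(err, U[j, j + 1:])        # compensate on later columns
    return Q

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128))
X = rng.standard_normal((512, 128))
Q = gptq_like(W, X)
rel = np.linalg.norm(X @ (Q - W).T) / np.linalg.norm(X @ W.T)
print(f"relative output error after 4-bit GPTQ-style quantization: {rel:.3f}")
```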
This list is automatically generated from the titles and abstracts of the papers in this site.