BlackMamba: Mixture of Experts for State-Space Models
- URL: http://arxiv.org/abs/2402.01771v1
- Date: Thu, 1 Feb 2024 07:15:58 GMT
- Title: BlackMamba: Mixture of Experts for State-Space Models
- Authors: Quentin Anthony, Yury Tokpanov, Paolo Glorioso, Beren Millidge
- Abstract summary: State-space models (SSMs) have recently demonstrated competitive performance to transformers at large-scale language modeling benchmarks.
MoE models have shown remarkable performance while significantly reducing the compute and latency costs of inference.
We present BlackMamba, a novel architecture that combines the Mamba SSM with MoE to obtain the benefits of both.
- Score: 10.209192169793772
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: State-space models (SSMs) have recently demonstrated competitive performance
to transformers at large-scale language modeling benchmarks while achieving
linear time and memory complexity as a function of sequence length. Mamba, a
recently released SSM model, shows impressive performance in both language
modeling and long sequence processing tasks. Simultaneously, mixture-of-experts
(MoE) models have shown remarkable performance while significantly reducing the
compute and latency costs of inference at the expense of a larger memory
footprint. In this paper, we present BlackMamba, a novel architecture that
combines the Mamba SSM with MoE to obtain the benefits of both. We demonstrate
that BlackMamba performs competitively against both Mamba and transformer
baselines, and outperforms both in inference and training FLOPs. We fully train and
open-source 340M/1.5B and 630M/2.8B BlackMamba models on 300B tokens of a
custom dataset. We show that BlackMamba inherits both of the benefits of SSM and
MoE architectures, pairing linear-complexity generation from the SSM with the
cheap and fast inference of MoE. We release all weights,
checkpoints, and inference code open-source. Inference code at:
https://github.com/Zyphra/BlackMamba
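The core idea described above, interleaving a Mamba-style SSM sequence mixer with a routed mixture-of-experts MLP, can be sketched in a few lines of PyTorch. The sketch below is a hypothetical simplification, not the released BlackMamba code: the class names, the top-1 routing, and the pluggable `mixer` module are illustrative assumptions, and a real model would substitute an actual Mamba SSM block for the mixer.

```python
# Hypothetical sketch of a BlackMamba-style layer: an SSM sequence mixer
# paired with a routed mixture-of-experts MLP. Illustrative only; not the
# released implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKRouter(nn.Module):
    """Token-level router selecting the top-k experts per token."""
    def __init__(self, d_model: int, num_experts: int, k: int = 1):
        super().__init__()
        self.gate = nn.Linear(d_model, num_experts, bias=False)
        self.k = k

    def forward(self, x):                          # x: (batch, seq, d_model)
        logits = self.gate(x)                      # (batch, seq, num_experts)
        weights, indices = logits.topk(self.k, dim=-1)
        return F.softmax(weights, dim=-1), indices

class MoEMLP(nn.Module):
    """Mixture-of-experts feed-forward block with top-1 routing."""
    def __init__(self, d_model: int, d_ff: int, num_experts: int):
        super().__init__()
        self.router = TopKRouter(d_model, num_experts, k=1)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x):
        weights, indices = self.router(x)          # top-1 expert per token
        chosen = indices[..., 0]                   # (batch, seq)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = chosen == e                     # tokens routed to expert e
            if mask.any():
                out[mask] = weights[..., 0][mask].unsqueeze(-1) * expert(x[mask])
        return out

class BlackMambaStyleBlock(nn.Module):
    """One layer: a sequence mixer (stand-in for a Mamba SSM) followed by an
    MoE MLP, each pre-normalized with a residual connection."""
    def __init__(self, d_model: int, d_ff: int, num_experts: int, mixer: nn.Module):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.mixer = mixer                         # e.g. a Mamba block in practice
        self.norm2 = nn.LayerNorm(d_model)
        self.moe = MoEMLP(d_model, d_ff, num_experts)

    def forward(self, x):
        x = x + self.mixer(self.norm1(x))
        x = x + self.moe(self.norm2(x))
        return x

if __name__ == "__main__":
    # nn.Identity() stands in for the SSM mixer so the sketch runs anywhere.
    block = BlackMambaStyleBlock(d_model=64, d_ff=256, num_experts=8,
                                 mixer=nn.Identity())
    print(block(torch.randn(2, 16, 64)).shape)     # torch.Size([2, 16, 64])
```

With top-1 routing, each token passes through exactly one expert, so per-token compute stays close to that of a single dense MLP of width `d_ff` while the parameter count grows with `num_experts`. The released models may arrange the SSM and MoE components differently (for example as alternating layers); the sketch merges them into one block purely for brevity.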
Related papers
- MobileMamba: Lightweight Multi-Receptive Visual Mamba Network [51.33486891724516]
Previous research on lightweight models has primarily focused on CNNs and Transformer-based designs.
We propose the MobileMamba framework, which balances efficiency and performance.
MobileMamba achieves up to 83.6% on Top-1, surpassing existing state-of-the-art methods.
arXiv Detail & Related papers (2024-11-24T18:01:05Z)
- Mamba-CL: Optimizing Selective State Space Model in Null Space for Continual Learning [54.19222454702032]
Continual Learning aims to equip AI models with the ability to learn a sequence of tasks over time, without forgetting previously learned knowledge.
State Space Models (SSMs) have achieved notable success in computer vision.
We introduce Mamba-CL, a framework that continuously fine-tunes the core SSMs of the large-scale Mamba foundation model.
arXiv Detail & Related papers (2024-11-23T06:36:16Z)
- Bi-Mamba: Towards Accurate 1-Bit State Space Models [28.478762133816726]
Bi-Mamba is a scalable and powerful 1-bit Mamba architecture designed for more efficient large language models.
Bi-Mamba achieves performance comparable to its full-precision counterparts (e.g., FP16 or BF16) and much better accuracy than post-training-binarization (PTB) Mamba baselines.
arXiv Detail & Related papers (2024-11-18T18:59:15Z)
- LaMamba-Diff: Linear-Time High-Fidelity Diffusion Models Based on Local Attention and Mamba [54.85262314960038]
Local Attentional Mamba blocks capture both global contexts and local details with linear complexity.
Our model exhibits exceptional scalability and surpasses the performance of DiT across various model scales on ImageNet at 256x256 resolution.
Compared to state-of-the-art diffusion models on ImageNet 256x256 and 512x512, our largest model presents notable advantages, such as a reduction of up to 62% GFLOPs.
arXiv Detail & Related papers (2024-08-05T16:39:39Z)
- SiMBA: Simplified Mamba-Based Architecture for Vision and Multivariate Time series [2.4379295576598436]
We propose SiMBA, a new architecture that introduces Einstein FFT (EinFFT) for channel modeling by specific eigenvalue computations and uses the Mamba block for sequence modeling.
We show that SiMBA outperforms existing SSMs, bridging the performance gap with state-of-the-art transformers.
arXiv Detail & Related papers (2024-03-22T17:22:56Z)
- ZigMa: A DiT-style Zigzag Mamba Diffusion Model [22.68317748373856]
We aim to leverage the long sequence modeling capability of a State-Space Model called Mamba to extend its applicability to visual data generation.
We introduce a simple, plug-and-play, zero-parameter method named Zigzag Mamba, which outperforms Mamba-based baselines.
We integrate Zigzag Mamba with the Interpolant framework to investigate the scalability of the model on large-resolution visual datasets.
arXiv Detail & Related papers (2024-03-08T12:07:42Z)
- LightM-UNet: Mamba Assists in Lightweight UNet for Medical Image Segmentation [10.563051220050035]
We introduce the Lightweight Mamba UNet (LightM-UNet) that integrates Mamba and UNet in a lightweight framework.
Specifically, LightM-UNet leverages the Residual Vision Mamba Layer in a pure Mamba fashion to extract deep semantic features and model long-range spatial dependencies.
Experiments conducted on two real-world 2D/3D datasets demonstrate that LightM-UNet surpasses existing state-of-the-art literature.
arXiv Detail & Related papers (2024-03-03T18:58:21Z)
- The Hidden Attention of Mamba Models [54.50526986788175]
The Mamba layer offers an efficient selective state space model (SSM) that is highly effective in modeling multiple domains.
We show that such models can be viewed as attention-driven models.
This new perspective enables us to empirically and theoretically compare the underlying mechanisms to that of the self-attention layers in transformers.
arXiv Detail & Related papers (2024-02-16T14:56:13Z)
- PointMamba: A Simple State Space Model for Point Cloud Analysis [65.59944745840866]
We propose PointMamba, transferring the success of Mamba, a recent representative state space model (SSM), from NLP to point cloud analysis tasks.
Unlike traditional Transformers, PointMamba employs a linear complexity algorithm, presenting global modeling capacity while significantly reducing computational costs.
arXiv Detail & Related papers (2024-01-24T18:53:53Z)
- MambaByte: Token-free Selective State Space Model [71.90159903595514]
MambaByte is a token-free adaptation of the Mamba SSM trained autoregressively on byte sequences.
We show MambaByte to be competitive with, and even to outperform, state-of-the-art subword Transformers on language modeling tasks.
arXiv Detail & Related papers (2024-01-08T18:35:07Z)
- MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts [4.293771840782942]
State Space Models (SSMs) have become serious contenders in the field of sequential modeling.
MoE has significantly improved Transformer-based Large Language Models, including recent state-of-the-art open models.
We propose that to unlock the potential of SSMs for scaling, they should be combined with MoE.
arXiv Detail & Related papers (2024-01-08T18:35:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.