BlackMamba: Mixture of Experts for State-Space Models
- URL: http://arxiv.org/abs/2402.01771v1
- Date: Thu, 1 Feb 2024 07:15:58 GMT
- Title: BlackMamba: Mixture of Experts for State-Space Models
- Authors: Quentin Anthony, Yury Tokpanov, Paolo Glorioso, Beren Millidge
- Abstract summary: State-space models (SSMs) have recently demonstrated performance competitive with transformers on large-scale language modeling benchmarks.
MoE models have shown remarkable performance while significantly reducing the compute and latency costs of inference.
We present BlackMamba, a novel architecture that combines the Mamba SSM with MoE to obtain the benefits of both.
- Score: 10.209192169793772
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: State-space models (SSMs) have recently demonstrated performance
competitive with transformers on large-scale language modeling benchmarks while achieving
linear time and memory complexity as a function of sequence length. Mamba, a
recently released SSM model, shows impressive performance in both language
modeling and long-sequence processing tasks. Simultaneously, mixture-of-experts
(MoE) models have shown remarkable performance while significantly reducing the
compute and latency costs of inference at the expense of a larger memory
footprint. In this paper, we present BlackMamba, a novel architecture that
combines the Mamba SSM with MoE to obtain the benefits of both. We demonstrate
that BlackMamba performs competitively against both Mamba and transformer
baselines, and outperforms both in inference and training FLOPs. We fully train and
open-source 340M/1.5B and 630M/2.8B BlackMamba models on 300B tokens of a
custom dataset. We show that BlackMamba inherits the benefits of both SSM and
MoE architectures, combining linear-complexity generation from the SSM with
cheap and fast inference from the MoE. We release all weights,
checkpoints, and inference code open-source. Inference code at:
https://github.com/Zyphra/BlackMamba
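A minimal, illustrative sketch of the alternating pattern the abstract describes: interleave a linear-time sequence-mixing (SSM-style) block with a token-routed mixture-of-experts MLP block. The toy diagonal recurrence below merely stands in for Mamba's selective-scan kernel, and the model width, expert count, and top-1 router are illustrative assumptions, not the released BlackMamba configurations or the authors' implementation.

```python
# Illustrative BlackMamba-style stack: alternate a linear-time SSM block with a
# top-1 mixture-of-experts MLP block. The SSM here is a toy diagonal recurrence,
# NOT the real Mamba selective scan; sizes and routing are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToySSMBlock(nn.Module):
    """Sequence mixing via a per-channel diagonal recurrence (O(T) in time)."""

    def __init__(self, d_model: int, d_state: int = 16):
        super().__init__()
        self.in_proj = nn.Linear(d_model, d_state)
        self.out_proj = nn.Linear(d_state, d_model)
        self.decay_logit = nn.Parameter(torch.zeros(d_state))  # decay in (0, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, T, D)
        u = self.in_proj(x)                               # (B, T, d_state)
        a = torch.sigmoid(self.decay_logit)               # (d_state,)
        h = torch.zeros_like(u[:, 0])                     # (B, d_state)
        states = []
        for t in range(u.shape[1]):                       # one step per token
            h = a * h + (1.0 - a) * u[:, t]
            states.append(h)
        y = torch.stack(states, dim=1)                    # (B, T, d_state)
        return x + self.out_proj(y)                       # residual connection


class Top1MoEBlock(nn.Module):
    """Channel mixing: route each token to a single expert MLP."""

    def __init__(self, d_model: int, n_experts: int = 4, d_ff: int = 256):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, T, D)
        flat = x.reshape(-1, x.shape[-1])                  # (B*T, D)
        probs = F.softmax(self.router(flat), dim=-1)       # routing probabilities
        gate, expert_idx = probs.max(dim=-1)                # top-1 expert per token
        out = torch.zeros_like(flat)
        for i, expert in enumerate(self.experts):
            hit = expert_idx == i
            if hit.any():                                   # run only routed tokens
                out[hit] = gate[hit].unsqueeze(-1) * expert(flat[hit])
        return x + out.reshape_as(x)                        # residual connection


class BlackMambaStyleStack(nn.Module):
    """Alternate SSM (sequence-mixing) and MoE MLP (channel-mixing) blocks."""

    def __init__(self, d_model: int = 64, n_layers: int = 4):
        super().__init__()
        blocks = []
        for _ in range(n_layers):
            blocks += [ToySSMBlock(d_model), Top1MoEBlock(d_model)]
        self.blocks = nn.Sequential(*blocks)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.blocks(x)


if __name__ == "__main__":
    model = BlackMambaStyleStack()
    tokens = torch.randn(2, 32, 64)       # (batch, seq_len, d_model)
    print(model(tokens).shape)            # torch.Size([2, 32, 64])
```

The point of the interleaving is that the SSM block keeps per-token generation cost independent of sequence length, while the MoE block activates only one expert MLP per token, so active parameters and FLOPs per forward pass stay well below the total parameter count.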
Related papers
- Sparse Mamba: Reinforcing Controllability In Structural State Space Models [2.6353853440763118]
We introduce the concept of controllability and observability to the Mamba SSM's architecture in our Sparse-Mamba (S-Mamba) for natural language processing (NLP) applications.
arXiv Detail & Related papers (2024-08-31T23:25:12Z)
- LaMamba-Diff: Linear-Time High-Fidelity Diffusion Models Based on Local Attention and Mamba [54.85262314960038]
Local Attentional Mamba blocks capture both global contexts and local details with linear complexity.
Our model exhibits exceptional scalability and surpasses the performance of DiT across various model scales on ImageNet at 256x256 resolution.
Compared to state-of-the-art diffusion models on ImageNet 256x256 and 512x512, our largest model presents notable advantages, such as a reduction of up to 62% GFLOPs.
arXiv Detail & Related papers (2024-08-05T16:39:39Z)
- MambaVision: A Hybrid Mamba-Transformer Vision Backbone [54.965143338206644]
We propose a novel hybrid Mamba-Transformer backbone, denoted as MambaVision, which is specifically tailored for vision applications.
Our core contribution includes redesigning the Mamba formulation to enhance its capability for efficient modeling of visual features.
We conduct a comprehensive ablation study on the feasibility of integrating Vision Transformers (ViT) with Mamba.
arXiv Detail & Related papers (2024-07-10T23:02:45Z)
- ReMamber: Referring Image Segmentation with Mamba Twister [51.291487576255435]
ReMamber is a novel RIS architecture that integrates the power of Mamba with a multi-modal Mamba Twister block.
The Mamba Twister explicitly models image-text interaction, and fuses textual and visual features through its unique channel and spatial twisting mechanism.
arXiv Detail & Related papers (2024-03-26T16:27:37Z)
- SiMBA: Simplified Mamba-Based Architecture for Vision and Multivariate Time series [2.4379295576598436]
We propose SiMBA, a new architecture that introduces Einstein FFT (EinFFT) for channel modeling by specific eigenvalue computations and uses the Mamba block for sequence modeling.
We show that SiMBA outperforms existing SSMs, bridging the performance gap with state-of-the-art transformers.
arXiv Detail & Related papers (2024-03-22T17:22:56Z)
- ZigMa: A DiT-style Zigzag Mamba Diffusion Model [23.581004543220622]
We aim to leverage the long sequence modeling capability of a State-Space Model called Mamba to extend its applicability to visual data generation.
We introduce a simple, plug-and-play, zero-parameter method named Zigzag Mamba, which outperforms Mamba-based baselines.
We integrate Zigzag Mamba with the Interpolant framework to investigate the scalability of the model on large-resolution visual datasets.
arXiv Detail & Related papers (2024-03-20T17:59:14Z)
- LightM-UNet: Mamba Assists in Lightweight UNet for Medical Image Segmentation [10.563051220050035]
We introduce the Lightweight Mamba UNet (LightM-UNet) that integrates Mamba and UNet in a lightweight framework.
Specifically, LightM-UNet leverages the Residual Vision Mamba Layer in a pure Mamba fashion to extract deep semantic features and model long-range spatial dependencies.
Experiments conducted on two real-world 2D/3D datasets demonstrate that LightM-UNet surpasses existing state-of-the-art methods.
arXiv Detail & Related papers (2024-03-08T12:07:42Z)
- The Hidden Attention of Mamba Models [54.50526986788175]
The Mamba layer offers an efficient selective state space model (SSM) that is highly effective in modeling multiple domains.
We show that such models can be viewed as attention-driven models.
This new perspective enables us to empirically and theoretically compare the underlying mechanisms to those of the self-attention layers in transformers (an illustrative sketch of this unrolled view appears after this list).
arXiv Detail & Related papers (2024-03-03T18:58:21Z)
- PointMamba: A Simple State Space Model for Point Cloud Analysis [65.59944745840866]
We propose PointMamba, transferring the success of Mamba, a recent representative state space model (SSM), from NLP to point cloud analysis tasks.
Unlike traditional Transformers, PointMamba employs a linear complexity algorithm, presenting global modeling capacity while significantly reducing computational costs.
arXiv Detail & Related papers (2024-02-16T14:56:13Z)
- MambaByte: Token-free Selective State Space Model [71.90159903595514]
MambaByte is a token-free adaptation of the Mamba SSM trained autoregressively on byte sequences.
We show MambaByte to be competitive with, and even to outperform, state-of-the-art subword Transformers on language modeling tasks.
arXiv Detail & Related papers (2024-01-24T18:53:53Z)
- MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts [4.293771840782942]
State Space Models (SSMs) have become serious contenders in the field of sequential modeling.
MoE has significantly improved Transformer-based Large Language Models, including recent state-of-the-art open models.
We propose that to unlock the potential of SSMs for scaling, they should be combined with MoE.
arXiv Detail & Related papers (2024-01-08T18:35:07Z)
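As a rough illustration of the "hidden attention" view referenced in The Hidden Attention of Mamba Models entry above (my own simplified single-channel notation, not that paper's exact formulation), unrolling a selective SSM recurrence exposes a data-dependent mixing matrix that plays the role of attention scores:

```latex
% Toy single-channel selective SSM recurrence with h_0 = 0:
%   h_t = A_t h_{t-1} + B_t x_t,   y_t = C_t h_t
% Unrolling writes each output as a causal, input-dependent weighted sum:
\[
  y_t \;=\; \sum_{s=1}^{t}
    \underbrace{C_t \Big(\prod_{k=s+1}^{t} A_k\Big) B_s}_{\tilde{\alpha}_{t,s}}\, x_s ,
\]
% so \(\tilde{\alpha}\in\mathbb{R}^{T\times T}\) can be read as an implicit,
% unnormalized attention matrix relating output position t to input position s.
```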