Hyena Hierarchy: Towards Larger Convolutional Language Models
- URL: http://arxiv.org/abs/2302.10866v3
- Date: Wed, 19 Apr 2023 20:08:39 GMT
- Title: Hyena Hierarchy: Towards Larger Convolutional Language Models
- Authors: Michael Poli, Stefano Massaroli, Eric Nguyen, Daniel Y. Fu, Tri Dao,
Stephen Baccus, Yoshua Bengio, Stefano Ermon, Christopher Ré
- Abstract summary: Hyena is a subquadratic drop-in replacement for attention constructed by interleaving implicitly parametrized long convolutions and data-controlled gating.
In recall and reasoning tasks on sequences of thousands to hundreds of thousands of tokens, Hyena improves accuracy by more than 50 points over operators relying on state-spaces and other implicit and explicit methods.
- Score: 115.82857881546089
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in deep learning have relied heavily on the use of large
Transformers due to their ability to learn at scale. However, the core building
block of Transformers, the attention operator, exhibits quadratic cost in
sequence length, limiting the amount of context accessible. Existing
subquadratic methods based on low-rank and sparse approximations need to be
combined with dense attention layers to match Transformers, indicating a gap in
capability. In this work, we propose Hyena, a subquadratic drop-in replacement
for attention constructed by interleaving implicitly parametrized long
convolutions and data-controlled gating. In recall and reasoning tasks on
sequences of thousands to hundreds of thousands of tokens, Hyena improves
accuracy by more than 50 points over operators relying on state-spaces and
other implicit and explicit methods, matching attention-based models. We set a
new state-of-the-art for dense-attention-free architectures on language
modeling in standard datasets (WikiText103 and The Pile), reaching Transformer
quality with a 20% reduction in training compute required at sequence length
2K. Hyena operators are twice as fast as highly optimized attention at sequence
length 8K, and 100x faster at sequence length 64K.
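To make the abstract's recipe concrete, here is a minimal, order-2 Hyena-style operator in PyTorch: project the input into branches, apply implicitly parametrized long convolutions via FFT, and gate elementwise. This is an illustrative sketch, not the authors' reference implementation; the class names, the small MLP used to parametrize the filters, and the omission of the paper's short depthwise convolutions and filter modulation are all simplifying assumptions.

```python
import torch
import torch.nn as nn


class ImplicitFilter(nn.Module):
    """Implicit parametrization: an MLP maps positions t -> filter taps h(t),
    so the filter length can grow without growing the parameter count."""

    def __init__(self, d_model: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(1, hidden), nn.GELU(), nn.Linear(hidden, d_model)
        )

    def forward(self, seq_len: int) -> torch.Tensor:
        t = torch.linspace(0, 1, seq_len).unsqueeze(-1)  # (L, 1) positions
        return self.mlp(t)                               # (L, D) filter taps


def fft_conv(u: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
    """Causal long convolution in O(L log L): u is (B, L, D), h is (L, D)."""
    L = u.shape[1]
    u_f = torch.fft.rfft(u, n=2 * L, dim=1)
    h_f = torch.fft.rfft(h, n=2 * L, dim=0)
    return torch.fft.irfft(u_f * h_f, n=2 * L, dim=1)[:, :L]


class HyenaOperator(nn.Module):
    """Interleaves implicit long convolutions with data-controlled gating."""

    def __init__(self, d_model: int):
        super().__init__()
        self.in_proj = nn.Linear(d_model, 3 * d_model)  # v, x1, x2 branches
        self.filter1 = ImplicitFilter(d_model)
        self.filter2 = ImplicitFilter(d_model)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, u: torch.Tensor) -> torch.Tensor:  # u: (B, L, D)
        L = u.shape[1]
        v, x1, x2 = self.in_proj(u).chunk(3, dim=-1)
        z = x1 * fft_conv(v, self.filter1(L))  # long conv, then elementwise gate
        z = x2 * fft_conv(z, self.filter2(L))
        return self.out_proj(z)
```

For example, `HyenaOperator(512)(torch.randn(2, 1024, 512))` returns a (2, 1024, 512) tensor; the cost is dominated by the O(L log L) FFTs rather than the O(L^2) score matrix of attention, which is where the long-sequence speedups come from.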
Related papers
- Long Sequence Modeling with Attention Tensorization: From Sequence to Tensor Learning [20.51822826798248]
We propose to scale up the attention field by tensorizing long input sequences into compact tensor representations followed by attention on each transformed dimension.
We show that the proposed attention tensorization encodes token dependencies as a multi-hop attention process, and is equivalent to Kronecker decomposition of full attention.
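A hedged sketch of the tensorization idea, assuming the simplest case where the sequence length factors as L = L1 x L2: tokens are reshaped into an (L1, L2) grid and attention runs along each axis, so two small attentions stand in for (Kronecker-factor) one full L x L attention. Function names are illustrative, not taken from the paper's code.

```python
import torch
import torch.nn.functional as F


def axis_attention(x: torch.Tensor) -> torch.Tensor:
    """Self-attention over the second-to-last axis of x: (..., n, d)."""
    return F.scaled_dot_product_attention(x, x, x)


def tensorized_attention(x: torch.Tensor, l1: int, l2: int) -> torch.Tensor:
    b, seq, d = x.shape
    assert seq == l1 * l2, "sequence length must factor as l1 * l2"
    x = x.view(b, l1, l2, d)
    x = axis_attention(x)                  # attend within each row (length l2)
    x = axis_attention(x.transpose(1, 2))  # attend within each column (length l1)
    return x.transpose(1, 2).reshape(b, seq, d)
```

The per-head cost drops from O((l1*l2)^2) to O(l1*l2^2 + l2*l1^2), which is the source of the efficiency claim.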
arXiv Detail & Related papers (2024-10-28T11:08:57Z)
- Taipan: Efficient and Expressive State Space Language Models with Selective Attention [100.16383527459429]
Long-context language modeling is a significant challenge in Natural Language Processing (NLP).
Recent State Space Models (SSMs) such as Mamba offer alternatives with constant memory usage, but they underperform in tasks requiring extensive in-context retrieval.
We introduce Taipan, a novel hybrid architecture that combines Mamba-2 with Selective Attention Layers (SALs).
Our experiments demonstrate Taipan's superior performance across various scales and tasks, offering a promising solution for efficient long-context language modeling.
arXiv Detail & Related papers (2024-10-24T09:25:37Z)
- SE(3)-Hyena Operator for Scalable Equivariant Learning [5.354533854744212]
We introduce SE(3)-Hyena, an equivariant long-convolutional model based on the Hyena operator.
Our model processes a geometric context of 20k tokens 3.5x faster than an equivariant transformer.
arXiv Detail & Related papers (2024-07-01T07:56:48Z)
- Lean Attention: Hardware-Aware Scalable Attention Mechanism for the Decode-Phase of Transformers [4.674454841332859]
Transformer-based models have emerged as one of the most widely used architectures for natural language processing.
These huge models are memory-hungry and incur significant inference latency even on cutting-edge AI accelerators.
We propose LeanAttention, a scalable technique for computing self-attention in the token-generation phase.
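For context, the decode-phase workload such techniques optimize is a single new query token attending over the cached keys and values of all previous tokens. The plain PyTorch baseline below shows that workload only; it is not the LeanAttention algorithm, which reorganizes how this computation is partitioned across hardware.

```python
import torch


def decode_step(q: torch.Tensor,        # (B, H, 1, d) query for the new token
                k_cache: torch.Tensor,  # (B, H, T, d) cached keys
                v_cache: torch.Tensor   # (B, H, T, d) cached values
                ) -> torch.Tensor:
    # one row of attention: scores over the whole KV cache, then a weighted sum
    scores = q @ k_cache.transpose(-2, -1) / k_cache.shape[-1] ** 0.5  # (B, H, 1, T)
    return torch.softmax(scores, dim=-1) @ v_cache                     # (B, H, 1, d)
```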
arXiv Detail & Related papers (2024-05-17T00:52:39Z)
- Gated Linear Attention Transformers with Hardware-Efficient Training [60.670102007737476]
This work describes a hardware-efficient algorithm for linear attention that trades off memory movement against parallelizability.
We then generalize this algorithm to a more expressive variant of linear attention with data-dependent gates.
When used as a replacement for the standard attention layer in Transformers, the resulting gated linear attention Transformer is found to perform competitively.
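The recurrent semantics of such a layer can be written down directly. The sketch below is a reference form under assumed shapes and per-dimension gating, showing the O(L) sequential recurrence; the paper's contribution, a hardware-efficient chunk-parallel training algorithm, is deliberately omitted.

```python
import torch


def gated_linear_attention(q, k, v, g):
    """q, k, v, g: (B, L, d); g holds data-dependent decay gates in (0, 1).
    The state S is a (d, d) outer-product memory updated once per token."""
    B, L, d = q.shape
    S = q.new_zeros(B, d, d)
    outs = []
    for t in range(L):
        # decay the memory with the data-dependent gate, then write k_t^T v_t
        S = g[:, t].unsqueeze(-1) * S \
            + k[:, t].unsqueeze(-1) * v[:, t].unsqueeze(-2)
        outs.append((q[:, t].unsqueeze(-2) @ S).squeeze(-2))  # read with q_t
    return torch.stack(outs, dim=1)  # (B, L, d)
```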
arXiv Detail & Related papers (2023-12-11T18:51:59Z)
- LongNet: Scaling Transformers to 1,000,000,000 Tokens [146.4077038371075]
LongNet is a Transformer variant that can scale sequence length to more than 1 billion tokens.
Our work opens up new possibilities for modeling very long sequences, e.g., treating a whole corpus or even the entire Internet as a sequence.
arXiv Detail & Related papers (2023-07-05T17:59:38Z)
- HyenaDNA: Long-Range Genomic Sequence Modeling at Single Nucleotide Resolution [76.97231739317259]
We present HyenaDNA, a genomic foundation model pretrained on the human reference genome with context lengths of up to 1 million tokens at the single nucleotide-level.
On fine-tuned benchmarks from the Nucleotide Transformer, HyenaDNA reaches state-of-the-art (SotA) on 12 of 18 datasets using a model with orders of magnitude fewer parameters and less pretraining data.
arXiv Detail & Related papers (2023-06-27T20:46:34Z)
- Combiner: Full Attention Transformer with Sparse Computation Cost [142.10203598824964]
We propose Combiner, which provides full attention capability in each attention head while maintaining low computation complexity.
We show that most sparse attention patterns used in existing sparse transformers can inform the design of such a factorization of full attention.
An experimental evaluation on both autoregressive and bidirectional sequence tasks demonstrates the effectiveness of this approach.
arXiv Detail & Related papers (2021-07-12T22:43:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.