DeltaProduct: Increasing the Expressivity of DeltaNet Through Products of Householders
- URL: http://arxiv.org/abs/2502.10297v1
- Date: Fri, 14 Feb 2025 16:59:05 GMT
- Title: DeltaProduct: Increasing the Expressivity of DeltaNet Through Products of Householders
- Authors: Julien Siems, Timur Carstensen, Arber Zela, Frank Hutter, Massimiliano Pontil, Riccardo Grazzi
- Abstract summary: Linear Recurrent Neural Networks (linear RNNs) have emerged as competitive alternatives to Transformers for sequence modeling.
Existing architectures face a fundamental trade-off between expressivity and efficiency, dictated by the structure of their state-transition matrices.
- Score: 63.66021758150632
- Abstract: Linear Recurrent Neural Networks (linear RNNs) have emerged as competitive alternatives to Transformers for sequence modeling, offering efficient training and linear-time inference. However, existing architectures face a fundamental trade-off between expressivity and efficiency, dictated by the structure of their state-transition matrices. While diagonal matrices used in architectures like Mamba, GLA, or mLSTM yield fast runtime, they suffer from severely limited expressivity. To address this, recent architectures such as (Gated) DeltaNet and RWKV-7 adopted a diagonal plus rank-1 structure, allowing simultaneous token-channel mixing, which overcomes some expressivity limitations with only a slight decrease in training efficiency. Building on the interpretation of DeltaNet's recurrence as performing one step of online gradient descent per token on an associative recall loss, we introduce DeltaProduct, which instead takes multiple ($n_h$) steps per token. This naturally leads to diagonal plus rank-$n_h$ state-transition matrices, formed as products of $n_h$ generalized Householder transformations, providing a tunable mechanism to balance expressivity and efficiency while maintaining a stable recurrence. Through extensive experiments, we demonstrate that DeltaProduct achieves superior state-tracking and language modeling capabilities while exhibiting significantly improved length extrapolation compared to DeltaNet. We also strengthen the theoretical foundation of DeltaNet's expressivity by proving that it can solve dihedral group word problems in just two layers.
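To make the recurrence concrete, below is a minimal NumPy sketch of the per-token update implied by the abstract: DeltaNet takes one delta-rule step per token, while DeltaProduct takes $n_h$ steps, so the effective state-transition matrix becomes a product of $n_h$ generalized Householder factors $(I - \beta_i k_i k_i^\top)$. This is a reconstruction from the abstract's description, not the authors' implementation; the symbols `k`, `v`, `beta` and the constraint $\beta \in [0, 2]$ follow the standard DeltaNet formulation and should be read as assumptions.

```python
import numpy as np

def deltaproduct_step(S, ks, vs, betas):
    """Apply n_h delta-rule (generalized Householder) updates for one token.

    S:     (d_k, d_v) state matrix carried across tokens (maps keys to values)
    ks:    (n_h, d_k) unit-norm keys for this token
    vs:    (n_h, d_v) values for this token
    betas: (n_h,) step sizes, typically constrained to [0, 2]
    """
    for k, v, beta in zip(ks, vs, betas):
        # One step of online gradient descent on the associative-recall loss
        # 0.5 * ||S^T k - v||^2, i.e. S <- (I - beta k k^T) S + beta k v^T.
        S = S - beta * np.outer(k, k @ S - v)
    return S

# Toy usage with n_h = 2 Householder steps per token (shapes are illustrative).
rng = np.random.default_rng(0)
d_k, d_v, n_h = 4, 3, 2
S = np.zeros((d_k, d_v))
ks = rng.normal(size=(n_h, d_k))
ks /= np.linalg.norm(ks, axis=1, keepdims=True)
vs = rng.normal(size=(n_h, d_v))
betas = np.array([1.0, 1.5])  # beta > 1 gives that factor a negative eigenvalue 1 - beta
S = deltaproduct_step(S, ks, vs, betas)
```

Note that each factor $I - \beta k k^\top$ (with unit-norm $k$) has eigenvalue $1 - \beta$ along $k$ and $1$ elsewhere, so allowing $\beta \in (1, 2]$ introduces negative eigenvalues, connecting to the state-tracking line of work listed among the related papers below.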
Related papers
- Gated Delta Networks: Improving Mamba2 with Delta Rule [64.58149707073915]
Gated DeltaNet consistently surpasses existing models like Mamba2 and DeltaNet across multiple benchmarks.
We develop hybrid architectures that combine Gated DeltaNet layers with sliding window attention or Mamba2 layers, achieving both improved training efficiency and superior task performance.
arXiv Detail & Related papers (2024-12-09T13:09:04Z)
- Unlocking State-Tracking in Linear RNNs Through Negative Eigenvalues [65.41946981594567]
Linear Recurrent Neural Networks (LRNNs) have emerged as efficient alternatives to Transformers in large language modeling.
LRNNs struggle to perform state-tracking, which may impair performance in tasks such as code evaluation or tracking a chess game.
Our work enhances the expressivity of modern LRNNs, broadening their applicability without changing the cost of training or inference.
arXiv Detail & Related papers (2024-11-19T14:35:38Z)
- COrAL: Order-Agnostic Language Modeling for Efficient Iterative Refinement [80.18490952057125]
Iterative refinement has emerged as an effective paradigm for enhancing the capabilities of large language models (LLMs) on complex tasks.
We propose Context-Wise Order-Agnostic Language Modeling (COrAL) to overcome these challenges.
Our approach models multiple token dependencies within manageable context windows, enabling the model to perform iterative refinement internally.
arXiv Detail & Related papers (2024-10-12T23:56:19Z)
- Autoregressive + Chain of Thought = Recurrent: Recurrence's Role in Language Models' Computability and a Revisit of Recurrent Transformer [29.970200877158764]
We investigate the influence of recurrent structures in neural models on their reasoning abilities and computability.
We shed light on how the CoT approach can mimic recurrent computation and act as a bridge between autoregression and recurrence.
arXiv Detail & Related papers (2024-09-14T00:30:57Z)
- Parallelizing Linear Transformers with the Delta Rule over Sequence Length [49.88826673324244]
This work describes a hardware-efficient algorithm for training linear transformers with the delta rule.
We train a 1.3B model for 100B tokens and find that it outperforms recent linear-time baselines.
arXiv Detail & Related papers (2024-06-10T17:24:42Z)
- Efficient generative adversarial networks using linear additive-attention Transformers [0.8287206589886879]
We present a novel GAN architecture based on a linear attention Transformer block named Ladaformer.
LadaGAN consistently outperforms existing convolutional and Transformer GANs on benchmark datasets at different resolutions.
LadaGAN shows competitive performance compared to state-of-the-art multi-step generative models.
arXiv Detail & Related papers (2024-01-17T21:08:41Z)
- Monotone operator equilibrium networks [97.86610752856987]
We develop a new class of implicit-depth models based on the theory of monotone operators, the Monotone Operator Equilibrium Network (monDEQ).
We show the close connection between finding the equilibrium point of an implicit network and solving a form of monotone operator splitting problem.
We then develop a parameterization of the network that ensures all operators remain monotone, guaranteeing the existence of a unique equilibrium point.
arXiv Detail & Related papers (2020-06-15T17:57:31Z)