GateLoop: Fully Data-Controlled Linear Recurrence for Sequence Modeling
- URL: http://arxiv.org/abs/2311.01927v2
- Date: Sat, 27 Jan 2024 14:52:52 GMT
- Title: GateLoop: Fully Data-Controlled Linear Recurrence for Sequence Modeling
- Authors: Tobias Katsch
- Abstract summary: We develop GateLoop, a sequence model that generalizes linear recurrent models such as S4, S5, LRU and RetNet.
GateLoop empirically outperforms existing models for auto-regressive language modeling.
We prove that our approach can be interpreted as providing data-controlled relative-positional information to Attention.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Linear Recurrence has proven to be a powerful tool for modeling long
sequences efficiently. In this work, we show that existing models fail to take
full advantage of its potential. Motivated by this finding, we develop
GateLoop, a foundational sequence model that generalizes linear recurrent
models such as S4, S5, LRU and RetNet, by employing data-controlled state
transitions. Utilizing this theoretical advance, GateLoop empirically
outperforms existing models for auto-regressive language modeling. Our method
comes with a low-cost $O(l)$ recurrent mode and an efficient $O(l \log_{2} l)$
parallel mode making use of highly optimized associative scan implementations.
Furthermore, we derive an $O(l^2)$ surrogate attention mode, revealing
remarkable implications for Transformer and recently proposed architectures.
Specifically, we prove that our approach can be interpreted as providing
data-controlled relative-positional information to Attention. While many
existing models solely rely on data-controlled cumulative sums for context
aggregation, our findings suggest that incorporating data-controlled complex
cumulative products may be a crucial step towards more powerful sequence
models.
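To make the mechanism concrete, here is a minimal sketch (not the authors' implementation) of the parallel mode, assuming the first-order linear recurrence $h_t = a_t \odot h_{t-1} + x_t$ with data-controlled transitions $a_t$. The names `gateloop_scan`, `a`, and `x` are illustrative; the full model additionally forms $x_t$ from key/value projections, reads the state out with a query, and permits complex-valued transitions:

```python
import jax
import jax.numpy as jnp

def gateloop_scan(a, x):
    """Parallel evaluation of h_t = a_t * h_{t-1} + x_t with h_0 = 0.

    a, x: (l, d) arrays of data-controlled transitions and inputs.
    """
    def combine(elem1, elem2):
        # Compose the affine maps h -> a1*h + b1 and h -> a2*h + b2:
        # a2*(a1*h + b1) + b2 = (a1*a2)*h + (a2*b1 + b2)
        a1, b1 = elem1
        a2, b2 = elem2
        return a1 * a2, a2 * b1 + b2

    # O(l log2 l) work via a parallel (associative) prefix scan.
    _, h = jax.lax.associative_scan(combine, (a, x))
    return h

# Toy usage: sigmoid gates keep a_t in (0, 1), so old context decays.
l, d = 8, 4
key_a, key_x = jax.random.split(jax.random.PRNGKey(0))
a = jax.nn.sigmoid(jax.random.normal(key_a, (l, d)))
x = jax.random.normal(key_x, (l, d))
h_parallel = gateloop_scan(a, x)

# The low-cost O(l) recurrent mode computes the same states sequentially.
def step(h, ax):
    a_t, x_t = ax
    h = a_t * h + x_t
    return h, h

_, h_recurrent = jax.lax.scan(step, jnp.zeros(d), (a, x))
assert jnp.allclose(h_parallel, h_recurrent, atol=1e-5)
```

Unrolling the same recurrence also hints at the $O(l^2)$ surrogate attention mode: each state is a sum of past inputs weighted by cumulative products of the gates, which is where the data-controlled relative-positional interpretation comes from.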
Related papers
- COrAL: Order-Agnostic Language Modeling for Efficient Iterative Refinement [80.18490952057125]
Iterative refinement has emerged as an effective paradigm for enhancing the capabilities of large language models (LLMs) on complex tasks.
We propose Context-Wise Order-Agnostic Language Modeling (COrAL) to overcome these challenges.
Our approach models multiple token dependencies within manageable context windows, enabling the model to perform iterative refinement internally.
arXiv Detail & Related papers (2024-10-12T23:56:19Z)
- Predictive Modeling in the Reservoir Kernel Motif Space [0.9217021281095907]
This work proposes a time series prediction method based on the kernel view of linear reservoirs.
We provide a geometric interpretation of our approach, shedding light on how it relates to the core reservoir models.
Empirical experiments then compare the predictive performance of our suggested model with that of recent state-of-the-art transformer-based models.
arXiv Detail & Related papers (2024-05-11T16:12:25Z)
- PanGu-$\pi$: Enhancing Language Model Architectures via Nonlinearity Compensation [97.78045712375047]
We present a new efficient model architecture for large language models (LLMs)
We show that PanGu-$\pi$-7B can achieve performance comparable to that of benchmark models with about a 10% inference speed-up.
In addition, we have deployed PanGu-$pi$-7B in the high-value domains of finance and law, developing an LLM named YunShan for practical application.
arXiv Detail & Related papers (2023-12-27T11:49:24Z)
- DORE: Document Ordered Relation Extraction based on Generative Framework [56.537386636819626]
This paper investigates the root cause of the underwhelming performance of the existing generative DocRE models.
We propose to generate a symbolic and ordered sequence from the relation matrix, which is deterministic and easier for the model to learn.
Experimental results on four datasets show that our proposed method can improve the performance of the generative DocRE models.
arXiv Detail & Related papers (2022-10-28T11:18:10Z)
- uGLAD: Sparse graph recovery by optimizing deep unrolled networks [11.48281545083889]
We present a novel technique to perform sparse graph recovery by optimizing deep unrolled networks.
Our model, uGLAD, builds upon and extends the state-of-the-art model GLAD to the unsupervised setting.
We evaluate model results on synthetic Gaussian data, non-Gaussian data generated from Gene Regulatory Networks, and present a case study in anaerobic digestion.
arXiv Detail & Related papers (2022-05-23T20:20:27Z)
- Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z)
- Reinforcement Learning as One Big Sequence Modeling Problem [84.84564880157149]
Reinforcement learning (RL) is typically concerned with estimating single-step policies or single-step models.
We view RL as a sequence modeling problem, with the goal being to predict a sequence of actions that leads to a sequence of high rewards.
arXiv Detail & Related papers (2021-06-03T17:58:51Z)
- Goal-directed Generation of Discrete Structures with Conditional Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions which evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z)
- Variational Model-based Policy Optimization [34.80171122943031]
Model-based reinforcement learning (RL) algorithms allow us to combine model-generated data with those collected from interaction with the real system in order to alleviate the data efficiency problem in RL.
We propose an objective function as a variational lower bound of the log-likelihood to jointly learn and improve the model and policy.
Our experiments on a number of continuous control tasks show that, despite being more complex, our model-based (E-step) algorithm, called variational model-based policy optimization (VMBPO), is more sample-efficient.
arXiv Detail & Related papers (2020-06-09T18:30:15Z)
- Tensor Networks for Probabilistic Sequence Modeling [7.846449972735859]
We use a uniform matrix product state (u-MPS) model for probabilistic modeling of sequence data.
We then introduce a novel generative algorithm giving trained u-MPS the ability to efficiently sample from a wide variety of conditional distributions.
Experiments on sequence modeling with synthetic and real text data show u-MPS outperforming a variety of baselines.
arXiv Detail & Related papers (2020-03-02T17:16:05Z)