Kalman Linear Attention: Parallel Bayesian Filtering For Efficient Language Modelling and State Tracking
- URL: http://arxiv.org/abs/2602.10743v1
- Date: Wed, 11 Feb 2026 11:11:45 GMT
- Title: Kalman Linear Attention: Parallel Bayesian Filtering For Efficient Language Modelling and State Tracking
- Authors: Vaisakh Shaj, Cameron Barker, Aidan Scannell, Andras Szecsenyi, Elliot J. Crowley, Amos Storkey
- Abstract summary: State-space language models such as Mamba and gated linear attention (GLA) offer efficient alternatives to transformers. We address these limitations by reframing sequence modelling through a probabilistic lens. We introduce the Kalman Linear Attention (KLA) layer, a neural sequence-modelling primitive that performs time-parallel probabilistic inference.
- Score: 7.437238821092346
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: State-space language models such as Mamba and gated linear attention (GLA) offer efficient alternatives to transformers due to their linear complexity and parallel training, but often lack the expressivity and robust state-tracking needed for complex reasoning. We address these limitations by reframing sequence modelling through a probabilistic lens, using Bayesian filters as a core primitive. While classical filters such as Kalman filters provide principled state estimation and uncertainty tracking, they are typically viewed as inherently sequential. We show that reparameterising the Kalman filter in information form enables its updates to be computed via an associative scan, allowing efficient parallel training. Building on this insight, we introduce the Kalman Linear Attention (KLA) layer, a neural sequence-modelling primitive that performs time-parallel probabilistic inference while maintaining explicit belief-state uncertainty. KLA offers strictly more expressive nonlinear updates and gating than GLA variants while retaining their computational advantages. On language modelling tasks, KLA matches or outperforms modern SSMs and GLAs across representative discrete token-manipulation and state-tracking benchmarks.
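A minimal sketch of the core idea, not the paper's actual KLA layer: assume a per-channel scalar state, diagonal observation noise, and a learned forgetting gate g_t standing in for the predict step (all simplifying assumptions made here for illustration). Under these assumptions, both the precision Λ_t and the information vector η_t of the information-form filter follow first-order linear recurrences of the form x_t = a_t x_{t-1} + b_t, which is associative and can therefore be evaluated time-parallel with jax.lax.associative_scan:

```python
# Minimal sketch (not the paper's KLA layer): a diagonal, scalar-gated
# information-form filter whose per-step updates reduce to the recurrences
#   lam_t = g_t * lam_{t-1} + c_t**2 / r_t     (precision)
#   eta_t = g_t * eta_{t-1} + c_t * y_t / r_t  (information vector)
# Any recurrence x_t = a_t * x_{t-1} + b_t is associative, so all steps can be
# computed in parallel with an associative scan. The gate g_t is a hypothetical
# stand-in for the filter's predict step.
import jax
import jax.numpy as jnp


def first_order_combine(elem_i, elem_j):
    """Associative combine for pairs (a, b) representing x -> a * x + b."""
    a_i, b_i = elem_i
    a_j, b_j = elem_j
    return a_j * a_i, a_j * b_i + b_j


def parallel_info_filter(y, c, r, g, lam0=1e-3, eta0=0.0):
    """Run the gated information-form recurrence over a whole sequence at once.

    y: observations, c: observation weights, r: observation noise variances,
    g: per-step forgetting gates in (0, 1]; all of shape (T,).
    Returns the posterior means mu_t = eta_t / lam_t for every step.
    """
    # Per-step additive evidence terms of the information-form update.
    lam_inc = c ** 2 / r
    eta_inc = c * y / r

    # Fold the initial belief into the first scan element.
    lam_inc = lam_inc.at[0].add(g[0] * lam0)
    eta_inc = eta_inc.at[0].add(g[0] * eta0)

    # Time-parallel evaluation of both linear recurrences.
    _, lam = jax.lax.associative_scan(first_order_combine, (g, lam_inc))
    _, eta = jax.lax.associative_scan(first_order_combine, (g, eta_inc))
    return eta / lam


T = 16
y = jax.random.normal(jax.random.PRNGKey(0), (T,))
mu = parallel_info_filter(y, c=jnp.ones(T), r=jnp.ones(T), g=jnp.full(T, 0.9))
print(mu.shape)  # (T,)
```

The combine rule is the same one that underlies GLA-style recurrences; the probabilistic reading here is that each step adds evidence (c_t^2 / r_t, c_t y_t / r_t) into a decayed belief, and the posterior mean is recovered as η_t / Λ_t.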
Related papers
- Dissecting Linear Recurrent Models: How Different Gating Strategies Drive Selectivity and Generalization [5.057995083193427]
Linear recurrent neural networks have emerged as efficient alternatives to the original Transformer's softmax attention mechanism. Existing benchmark tasks are either too simplistic to reveal substantial differences or excessively resource-intensive for experimentation. We introduce SelectivBench, a set of lightweight and customizable synthetic benchmark tasks for systematically evaluating sequence models.
arXiv Detail & Related papers (2026-01-18T21:49:21Z)
- Higher-order Linear Attention [59.92962330635185]
The quadratic cost of scaled dot-product attention is a central obstacle to scaling autoregressive language models to long contexts. We introduce Higher-order Linear Attention (HLA), a causal, streaming mechanism that realizes higher-order interactions via compact prefix sufficient statistics.
arXiv Detail & Related papers (2025-10-31T07:54:37Z)
- Sequential-Parallel Duality in Prefix Scannable Models [68.39855814099997]
Recent developments have given rise to various models, such as Gated Linear Attention (GLA) and Mamba. This raises a natural question: can we characterize the full class of neural sequence models that support near-constant-time parallel evaluation and linear-time, constant-space sequential inference?
arXiv Detail & Related papers (2025-06-12T17:32:02Z)
- Transformers as Implicit State Estimators: In-Context Learning in Dynamical Systems [18.634960596074027]
We show that transformers can implicitly infer hidden states in order to predict the outputs of a wide family of dynamical systems. These findings suggest that transformer in-context learning provides a flexible, non-parametric alternative for output prediction in dynamical systems.
arXiv Detail & Related papers (2024-10-21T22:18:10Z)
- Uncertainty Representations in State-Space Layers for Deep Reinforcement Learning under Partial Observability [59.758009422067]
We propose a standalone Kalman filter layer that performs closed-form Gaussian inference in linear state-space models. Similar to efficient linear recurrent layers, the Kalman filter layer processes sequential data using a parallel scan. Experiments show that Kalman filter layers excel in problems where uncertainty reasoning is key for decision-making, outperforming other stateful models.
arXiv Detail & Related papers (2024-09-25T11:22:29Z)
- Kalman Filter for Online Classification of Non-Stationary Data [101.26838049872651]
In Online Continual Learning (OCL) a learning system receives a stream of data and sequentially performs prediction and training steps.
We introduce a probabilistic Bayesian online learning model by using a neural representation and a state space model over the linear predictor weights.
In experiments in multi-class classification we demonstrate the predictive ability of the model and its flexibility to capture non-stationarity.
arXiv Detail & Related papers (2023-06-14T11:41:42Z)
- Tensor Network Kalman Filtering for Large-Scale LS-SVMs [17.36231167296782]
Least squares support vector machines are used for nonlinear regression and classification.
A framework based on tensor networks and the Kalman filter is presented to alleviate the demanding memory and computational complexities.
Results show that our method can achieve high performance and is particularly useful when alternative methods are computationally infeasible.
arXiv Detail & Related papers (2021-10-26T08:54:03Z)
- KalmanNet: Neural Network Aided Kalman Filtering for Partially Known Dynamics [84.18625250574853]
We present KalmanNet, a real-time state estimator that learns from data to carry out Kalman filtering under non-linear dynamics.
We numerically demonstrate that KalmanNet overcomes nonlinearities and model mismatch, outperforming classic filtering methods.
arXiv Detail & Related papers (2021-07-21T12:26:46Z)
- Neural Kalman Filtering [62.997667081978825]
We show that a gradient-descent approximation to the Kalman filter requires only local computations with variance-weighted prediction errors (see the brief illustration after this entry).
We also show that it is possible under the same scheme to adaptively learn the dynamics model with a learning rule that corresponds directly to Hebbian plasticity.
arXiv Detail & Related papers (2021-02-19T16:43:15Z)
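A rough illustration of the "variance-weighted prediction errors" mentioned in the entry above, using a standard Gaussian identity rather than that paper's exact update rule: for an observation model y = Cx + v with v ~ N(0, R), gradient ascent on the observation log-likelihood with respect to the state estimate mu gives

```latex
% Gradient of the Gaussian observation log-likelihood and the resulting
% gradient-ascent update on the state estimate \mu (step size \alpha):
\nabla_{\mu} \log \mathcal{N}\!\left(y;\, C\mu,\, R\right) = C^{\top} R^{-1} (y - C\mu),
\qquad
\mu \leftarrow \mu + \alpha\, C^{\top} R^{-1} (y - C\mu).
```

That is, the update depends only on the local prediction error (y - Cmu) weighted by the inverse observation covariance, which is the sense in which such gradient steps approximate a Kalman-gain correction.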