LatentTrack: Sequential Weight Generation via Latent Filtering
- URL: http://arxiv.org/abs/2602.00458v1
- Date: Sat, 31 Jan 2026 02:22:59 GMT
- Title: LatentTrack: Sequential Weight Generation via Latent Filtering
- Authors: Omer Haq
- Abstract summary: LatentTrack (LT) is a sequential neural architecture for online probabilistic prediction under nonstationary dynamics. LT performs causal Bayesian filtering in a low-dimensional latent space and uses a lightweight hypernetwork to generate predictive model parameters at each time step. LT consistently achieves lower negative log-likelihood and mean squared error than stateful sequential and static uncertainty-aware baselines, with competitive calibration.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce LatentTrack (LT), a sequential neural architecture for online probabilistic prediction under nonstationary dynamics. LT performs causal Bayesian filtering in a low-dimensional latent space and uses a lightweight hypernetwork to generate predictive model parameters at each time step, enabling constant-time online adaptation without per-step gradient updates. At each time step, a learned latent model predicts the next latent distribution, which is updated via amortized inference using new observations, yielding a predict--generate--update filtering framework in function space. The formulation supports both structured (Markovian) and unstructured latent dynamics within a unified objective, while Monte Carlo inference over latent trajectories produces calibrated predictive mixtures with fixed per-step cost. Evaluated on long-horizon online regression using the Jena Climate benchmark, LT consistently achieves lower negative log-likelihood and mean squared error than stateful sequential and static uncertainty-aware baselines, with competitive calibration, demonstrating that latent-conditioned function evolution is an effective alternative to traditional latent-state modeling under distribution shift.
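The predict--generate--update loop described in the abstract can be sketched in a few lines. Everything below is a hypothetical simplification under stated assumptions: a fixed linear latent transition `A` stands in for the learned latent model, a random matrix `W_hyper` stands in for the hypernetwork, and a gradient-style correction stands in for the amortized inference network. It is a minimal illustration of the control flow, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 4
A = 0.9 * np.eye(LATENT_DIM)                    # stand-in latent transition (predict step)
W_hyper = 0.5 * rng.normal(size=(2, LATENT_DIM))  # toy "hypernetwork": latent -> (weight, bias)
STEP = 0.02                                     # stand-in step size for the amortized update

def lt_step(mu, x_prev, x_obs):
    # Predict: propagate the latent mean through the latent dynamics.
    mu_pred = A @ mu
    # Generate: map the latent state to predictive model parameters.
    w, b = W_hyper @ mu_pred                    # scalar weight and bias of a linear predictor
    y_hat = w * x_prev + b
    # Update: amortized correction of the latent state from the new observation
    # (a gradient-style stand-in for the paper's learned inference network).
    grad = W_hyper.T @ np.array([x_prev, 1.0])  # d y_hat / d mu
    mu_new = mu_pred + STEP * (x_obs - y_hat) * grad
    return mu_new, y_hat

# Toy nonstationary stream: a sinusoid with slow drift.
xs = np.sin(0.1 * np.arange(400)) + 0.002 * np.arange(400)
mu = np.zeros(LATENT_DIM)
sq_errs = []
for t in range(1, len(xs)):
    mu, y_hat = lt_step(mu, xs[t - 1], xs[t])
    sq_errs.append((xs[t] - y_hat) ** 2)
print(f"online MSE: {np.mean(sq_errs):.4f}")
```

Note the constant per-step cost: each iteration does a fixed amount of matrix-vector work regardless of how much history has been seen, which is the property the paper emphasizes.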
Related papers
- Efficient Real-Time Adaptation of ROMs for Unsteady Flows Using Data Assimilation [7.958594167693376]
We propose an efficient retraining strategy for a parameterized Reduced Order Model (ROM). The strategy attains accuracy comparable to full retraining while requiring only a fraction of the computational time. We show that, for the dynamical system considered, the dominant source of error in out-of-sample forecasts stems from distortions of the latent manifold.
arXiv Detail & Related papers (2026-02-26T16:43:28Z)
- Unifying Model-Free Efficiency and Model-Based Representations via Latent Dynamics [6.208369829942616]
We present Unified Latent Dynamics (ULD), a novel reinforcement learning algorithm. ULD unifies the efficiency of model-free methods with the representational strengths of model-based approaches. It is evaluated on 80 environments spanning Gym locomotion, DeepMind Control (proprioceptive and visual), and Atari.
arXiv Detail & Related papers (2026-02-13T06:06:56Z) - Adaptive Benign Overfitting (ABO): Overparameterized RLS for Online Learning in Non-stationary Time-series [0.0]
ABO is highly accurate (comparable to baseline kernel methods) while achieving speed improvements of between 20 and 40 percent. The results provide a unified view linking adaptive filtering, kernel approximation, and benign overfitting within a stable online learning framework.
arXiv Detail & Related papers (2026-01-29T15:58:01Z) - Conformal Online Learning of Deep Koopman Linear Embeddings [1.8577594866206437]
COLoKe is a framework for adaptively updating Koopman-invariant representations from streaming data. COLoKe employs a conformal-style mechanism that shifts the focus from evaluating the conformity of new states to assessing the consistency of the current Koopman model.
arXiv Detail & Related papers (2025-11-16T20:08:48Z) - Adaptive Conformal Prediction Intervals Over Trajectory Ensembles [50.31074512684758]
Future trajectories play an important role across domains such as autonomous driving, hurricane forecasting, and epidemic modeling. We propose a unified framework based on conformal prediction that transforms sampled trajectories into calibrated prediction intervals with theoretical coverage guarantees.
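The basic mechanism behind such calibrated intervals can be illustrated with plain split conformal prediction applied per time step: calibrate a radius around an ensemble-mean trajectory so that intervals cover a held-out trajectory with probability at least 1 - alpha. This is a minimal sketch of the standard conformal recipe, not the adaptive framework the paper proposes; the synthetic data and all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def conformal_intervals(cal_true, cal_pred, test_pred, alpha=0.1):
    """Per-step split-conformal intervals around a predicted mean trajectory."""
    scores = np.abs(cal_true - cal_pred)            # nonconformity, shape (n_cal, horizon)
    n = scores.shape[0]
    # Finite-sample-corrected quantile level: ceil((n+1)(1-alpha))/n.
    q = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    radius = np.quantile(scores, q, axis=0)         # one radius per time step
    return test_pred - radius, test_pred + radius

# Synthetic check: noisy trajectories scattered around a known mean.
horizon, n_cal, n_test = 20, 500, 200
mean_traj = np.sin(np.linspace(0, 2 * np.pi, horizon))
cal_true = mean_traj + 0.3 * rng.normal(size=(n_cal, horizon))
test_true = mean_traj + 0.3 * rng.normal(size=(n_test, horizon))

lo, hi = conformal_intervals(cal_true, mean_traj, mean_traj, alpha=0.1)
coverage = np.mean((test_true >= lo) & (test_true <= hi))
print(f"empirical coverage: {coverage:.3f}")  # should land near the nominal 0.90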
arXiv Detail & Related papers (2025-08-18T21:14:07Z) - Inference-Time Scaling of Diffusion Language Models with Particle Gibbs Sampling [70.8832906871441]
We study how to steer generation toward desired rewards without retraining the models. Prior methods typically resample or filter within a single denoising trajectory, optimizing rewards step-by-step without trajectory-level refinement. We introduce particle Gibbs sampling for diffusion language models (PG-DLM), a novel inference-time algorithm enabling trajectory-level refinement while preserving generation perplexity.
arXiv Detail & Related papers (2025-07-11T08:00:47Z)
- Nonlinear Assimilation via Score-based Sequential Langevin Sampling [5.107329143106734]
This paper presents score-based sequential Langevin sampling (SSLS). The proposed method decomposes the assimilation process into alternating prediction and update steps. We provide theoretical guarantees for SSLS convergence in total variation (TV) distance under certain conditions.
arXiv Detail & Related papers (2024-11-20T16:31:46Z)
- Oscillatory State-Space Models [61.923849241099184]
We propose Linear Oscillatory State-Space models (LinOSS) for efficiently learning on long sequences. A stable discretization, integrated over time using fast associative parallel scans, yields the proposed state-space model. We show that LinOSS is universal, i.e., it can approximate any continuous and causal operator mapping between time-varying functions.
arXiv Detail & Related papers (2024-10-04T22:00:13Z)
- A Poisson-Gamma Dynamic Factor Model with Time-Varying Transition Dynamics [51.147876395589925]
A non-stationary PGDS is proposed to allow the underlying transition matrices to evolve over time.
A fully-conjugate and efficient Gibbs sampler is developed to perform posterior simulation.
Experiments show that, in comparison with related models, the proposed non-stationary PGDS achieves improved predictive performance.
arXiv Detail & Related papers (2024-02-26T04:39:01Z)
- Kalman Filter for Online Classification of Non-Stationary Data [101.26838049872651]
In Online Continual Learning (OCL), a learning system receives a stream of data and sequentially performs prediction and training steps.
We introduce a probabilistic Bayesian online learning model by using a neural representation and a state space model over the linear predictor weights.
In experiments in multi-class classification we demonstrate the predictive ability of the model and its flexibility to capture non-stationarity.
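The core idea of tracking predictor weights with a state space model can be shown with the standard Kalman filter equations: treat the weights as a latent state with random-walk dynamics and update them from each scalar observation. This is a hedged sketch using plain linear regression rather than the paper's multi-class classification setting; the noise levels `q` and `r` and the drifting-weight stream are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def kalman_step(m, P, x, y, q=1e-6, r=0.04):
    # Predict: random-walk drift on the predictor weights w ~ N(m, P).
    P = P + q * np.eye(len(m))
    # Update with the scalar observation y = x @ w + N(0, r).
    s = x @ P @ x + r                 # innovation variance
    k = (P @ x) / s                   # Kalman gain
    m = m + k * (y - x @ m)
    P = P - np.outer(k, x @ P)
    return m, P

# Track a slowly drifting weight vector online (non-stationary data).
d, T = 3, 800
w_true = np.array([1.0, -0.5, 0.3])
m, P = np.zeros(d), np.eye(d)
for t in range(T):
    w_true = w_true + 0.001 * rng.normal(size=d)   # slow drift in the true weights
    x = rng.normal(size=d)
    y = x @ w_true + 0.2 * rng.normal()
    m, P = kalman_step(m, P, x, y)
print(f"final weight error: {np.linalg.norm(m - w_true):.3f}")
```

Because the process-noise term `q` keeps the covariance `P` from collapsing, the filter continues to adapt to the drifting weights instead of freezing on an early estimate.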
arXiv Detail & Related papers (2023-06-14T11:41:42Z) - Online Variational Filtering and Parameter Learning [26.79116194327116]
We present a variational method for online state estimation and parameter learning in state-space models (SSMs).
We use gradients to simultaneously optimize a lower bound on the log evidence with respect to both model parameters and a variational approximation of the states' posterior distribution.
Unlike existing approaches, our method is able to operate in an entirely online manner, such that historic observations do not require revisitation after being incorporated and the cost of updates at each time step remains constant.
arXiv Detail & Related papers (2021-10-26T10:25:04Z)
- Autoregressive Dynamics Models for Offline Policy Evaluation and Optimization [60.73540999409032]
We show that expressive autoregressive dynamics models generate the different dimensions of the next state and reward sequentially, each conditioned on the previous dimensions.
We also show that autoregressive dynamics models are useful for offline policy optimization by serving as a way to enrich the replay buffer.
arXiv Detail & Related papers (2021-04-28T16:48:44Z)
- Stochastically forced ensemble dynamic mode decomposition for forecasting and analysis of near-periodic systems [65.44033635330604]
We introduce a novel load forecasting method in which observed dynamics are modeled as a forced linear system.
We show that its use of intrinsic linear dynamics offers a number of desirable properties in terms of interpretability and parsimony.
Results are presented for a test case using load data from an electrical grid.
arXiv Detail & Related papers (2020-10-08T20:25:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.