VSE: Variational state estimation of complex model-free process
- URL: http://arxiv.org/abs/2601.21887v1
- Date: Thu, 29 Jan 2026 15:47:28 GMT
- Title: VSE: Variational state estimation of complex model-free process
- Authors: Gustav Norén, Anubhab Ghosh, Fredrik Cumlin, Saikat Chatterjee,
- Abstract summary: We present a variational state estimation (VSE) method that provides a closed-form Gaussian posterior of an underlying complex dynamical process from (noisy) nonlinear measurements. The VSE is shown to be competitive against a particle filter that knows the Lorenz system model and a recently proposed data-driven state estimation method that does not know the Lorenz system model.
- Score: 10.460885341690664
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We design a variational state estimation (VSE) method that provides a closed-form Gaussian posterior of an underlying complex dynamical process from (noisy) nonlinear measurements. The complex process is model-free. That is, we do not have a suitable physics-based model characterizing the temporal evolution of the process state. The closed-form Gaussian posterior is provided by a recurrent neural network (RNN). The use of RNN is computationally simple in the inference phase. For learning the RNN, an additional RNN is used in the learning phase. Both RNNs help each other learn better based on variational inference principles. The VSE is demonstrated for a tracking application - state estimation of a stochastic Lorenz system (a benchmark process) using a 2-D camera measurement model. The VSE is shown to be competitive against a particle filter that knows the Lorenz system model and a recently proposed data-driven state estimation method that does not know the Lorenz system model.
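As a concrete illustration of the inference-phase setup described in the abstract, here is a minimal sketch of an RNN that maps a measurement sequence to the mean and diagonal covariance of a Gaussian posterior. The recurrent cell, dimensions, and weight initialization are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

# Hypothetical sketch (not the paper's architecture): a tiny recurrent
# network that maps a measurement sequence to the mean and diagonal
# covariance of a Gaussian posterior over the hidden state.
rng = np.random.default_rng(0)

MEAS_DIM, STATE_DIM, HIDDEN_DIM = 2, 3, 16
W_in = rng.normal(scale=0.1, size=(HIDDEN_DIM, MEAS_DIM))
W_h = rng.normal(scale=0.1, size=(HIDDEN_DIM, HIDDEN_DIM))
W_mean = rng.normal(scale=0.1, size=(STATE_DIM, HIDDEN_DIM))
W_logvar = rng.normal(scale=0.1, size=(STATE_DIM, HIDDEN_DIM))

def posterior_rnn(y):
    """y: (T, MEAS_DIM) measurements -> per-step Gaussian (mean, var)."""
    h = np.zeros(HIDDEN_DIM)
    means, variances = [], []
    for y_t in y:
        h = np.tanh(W_in @ y_t + W_h @ h)       # recurrent state update
        means.append(W_mean @ h)                # posterior mean
        variances.append(np.exp(W_logvar @ h))  # positive diagonal covariance
    return np.array(means), np.array(variances)

# 2-D camera-like measurements of a 3-D latent state, T = 50 steps.
y = rng.normal(size=(50, MEAS_DIM))
mu, var = posterior_rnn(y)
print(mu.shape, var.shape)  # (50, 3) (50, 3)
```

The log-variance parameterization is one standard way to keep the predicted covariance diagonal positive; computationally, inference is a single forward pass through the recurrence, which is what makes such an estimator cheap at test time.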
Related papers
- Assumed Density Filtering and Smoothing with Neural Network Surrogate Models [0.0]
We show that cross entropy is a more appropriate performance metric than RMSE for evaluating the accuracy of filters and smoothers. We demonstrate the superiority of our method for state estimation on a Lorenz system and a Wiener system, and find that our method enables more optimal linear regulation when the state estimate is used for feedback.
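The cross-entropy-versus-RMSE point can be illustrated with a toy example (all values and variances below are made up): two estimators that report the same point estimate get the same RMSE, but a Gaussian negative log-likelihood also scores the reported uncertainty and penalizes the overconfident one:

```python
import numpy as np

# Toy illustration: RMSE scores only the point estimate, while Gaussian
# negative log-likelihood (a cross-entropy-style metric) also scores the
# reported uncertainty.
x_true = np.array([1.0, 2.0, 3.0])
x_hat = np.array([1.1, 1.9, 3.2])

rmse = np.sqrt(np.mean((x_true - x_hat) ** 2))

def gaussian_nll(x, mean, var):
    """Average negative log-likelihood under N(mean, var)."""
    return 0.5 * np.mean(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

# Same point estimate, two different reported variances: RMSE cannot
# tell them apart, but the NLL penalizes overconfident uncertainty.
nll_calibrated = gaussian_nll(x_true, x_hat, var=0.05)
nll_overconfident = gaussian_nll(x_true, x_hat, var=1e-4)
print(rmse, nll_calibrated, nll_overconfident)
```

Here the overconfident estimator incurs a much larger NLL even though its RMSE is identical, which is the sense in which cross entropy is the more informative metric for probabilistic filters.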
arXiv Detail & Related papers (2025-11-12T06:08:53Z)
- pDANSE: Particle-based Data-driven Nonlinear State Estimation from Nonlinear Measurements [55.95348868409957]
We consider the problem of designing a data-driven nonlinear state estimation (DANSE) method that uses (noisy) nonlinear measurements. A recurrent neural network (RNN) provides parameters of a Gaussian prior that characterize the state of the model-free process. The second-order statistics of the state posterior are computed using the nonlinear measurements observed at the time point.
arXiv Detail & Related papers (2025-10-31T14:26:48Z)
- Bayesian Modeling and Estimation of Linear Time-Variant Systems using Neural Networks and Gaussian Processes [0.0]
This work introduces a unified Bayesian framework that models the system's impulse response, $h(t, \tau)$, as a Gaussian process. We decompose the response into a posterior mean and a random fluctuation term, which naturally defines a new, useful system class we term Linear Time-Invariant in Expectation (LTIE). We demonstrate through a series of experiments that our framework can robustly infer the properties of an LTI system from a single noisy observation.
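The decomposition mentioned in the summary can be written as follows; the LTIE condition in the second line is one plausible reading (time-invariance of the mean response), not a definition taken from the paper:

```latex
h(t,\tau)
  = \underbrace{\mathbb{E}\!\left[h(t,\tau)\right]}_{\text{posterior mean}}
  + \underbrace{\delta h(t,\tau)}_{\text{random fluctuation}},
\qquad \mathbb{E}\!\left[\delta h(t,\tau)\right] = 0,
\qquad \text{LTIE:}\quad \mathbb{E}\!\left[h(t,\tau)\right] = \bar{h}(\tau)\ \ \forall t .
```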
arXiv Detail & Related papers (2025-07-17T07:55:34Z)
- Convolutional Conditional Neural Processes [6.532867867011488]
This thesis advances neural processes in three ways.
ConvNPs improve data efficiency by building in a symmetry called translation equivariance.
GNPs directly parametrise dependencies in the predictions of a neural process.
AR CNPs train a neural process without any modifications to the model or training procedure and, at test time, roll out the model in an autoregressive fashion.
arXiv Detail & Related papers (2024-08-18T19:53:38Z)
- Neural Differential Recurrent Neural Network with Adaptive Time Steps [11.999568208578799]
We propose an RNN-based model, called RNN-ODE-Adap, that uses a neural ODE to represent the time development of the hidden states.
We adaptively select time steps based on the steepness of changes of the data over time so as to train the model more efficiently for the "spike-like" time series.
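The adaptive-step idea can be sketched as follows; the thresholding rule and all constants are illustrative assumptions, not the paper's exact selection criterion:

```python
import numpy as np

# Hypothetical sketch of adaptive time-step selection: keep samples
# densely where the signal changes steeply (spikes), sparsely where flat.
def adaptive_steps(t, x, slope_threshold=1.0, max_gap=0.2):
    """Keep a sample whenever the local slope exceeds the threshold,
    or whenever the gap since the last kept sample reaches max_gap."""
    keep = [0]
    for i in range(1, len(t)):
        slope = abs(x[i] - x[keep[-1]]) / (t[i] - t[keep[-1]])
        if slope > slope_threshold or t[i] - t[keep[-1]] >= max_gap:
            keep.append(i)
    return np.array(keep)

t = np.linspace(0.0, 1.0, 101)
x = np.exp(-((t - 0.5) ** 2) / 0.002)  # spike-like signal around t = 0.5
idx = adaptive_steps(t, x)
# The retained grid is denser near the spike than in the flat regions.
print(len(idx), len(t))
```

Training an ODE-based RNN only on such an adaptively thinned grid spends computation where the dynamics are fast, which is the efficiency gain the abstract refers to for spike-like series.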
arXiv Detail & Related papers (2023-06-02T16:46:47Z)
- Learning Neural Constitutive Laws From Motion Observations for Generalizable PDE Dynamics [97.38308257547186]
Many NN approaches learn an end-to-end model that implicitly models both the governing PDE and material models.
We argue that the governing PDEs are often well-known and should be explicitly enforced rather than learned.
We introduce a new framework termed "Neural Constitutive Laws" (NCLaw) which utilizes a network architecture that strictly guarantees standard priors.
arXiv Detail & Related papers (2023-04-27T17:42:24Z)
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
- Self-Learning for Received Signal Strength Map Reconstruction with Neural Architecture Search [63.39818029362661]
We present a model based on Neural Architecture Search (NAS) and self-learning for received signal strength (RSS) map reconstruction.
The approach first finds an optimal NN architecture and simultaneously trains the deduced model on some ground-truth measurements of a given RSS map.
Experimental results show that the signal predictions of this second model outperform non-learning-based state-of-the-art techniques and NN models without architecture search.
arXiv Detail & Related papers (2021-05-17T12:19:22Z)
- Neural ODE Processes [64.10282200111983]
We introduce Neural ODE Processes (NDPs), a new class of processes determined by a distribution over Neural ODEs.
We show that our model can successfully capture the dynamics of low-dimensional systems from just a few data-points.
arXiv Detail & Related papers (2021-03-23T09:32:06Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
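The min-max formulation in the last entry can be sketched with a deliberately tiny example; here both "networks" are single linear parameters, the moment-condition objective is a common adversarial form for SEM/IV estimation, and everything (data, regularizer, learning rate) is an illustrative assumption rather than the paper's estimator:

```python
import numpy as np

# Toy min-max game: f minimizes and g maximizes the regularized
# Lagrangian  E[g(z) * (y - f(x))] - 0.5 * E[g(z)^2],
# solved by alternating (simultaneous) gradient descent-ascent.
rng = np.random.default_rng(1)

n = 2000
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(scale=0.1, size=n)  # true structural slope = 2
z = x                                         # instrument (here simply x)

theta_f, theta_g = 0.0, 0.0  # f(x) = theta_f * x,  g(z) = theta_g * z
lr = 0.05
for _ in range(500):
    resid = y - theta_f * x
    grad_f = -np.mean(theta_g * z * x)                     # descent step for f
    grad_g = np.mean(z * resid) - theta_g * np.mean(z**2)  # ascent step for g
    theta_f -= lr * grad_f
    theta_g += lr * grad_g

print(round(theta_f, 2))  # close to the true slope 2.0
```

The quadratic penalty on g makes the inner maximization strongly concave, which is what lets plain gradient descent-ascent converge here; the paper's contribution is a provably convergent procedure of this flavor with NN-parameterized players.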
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.