Neural Kalman Filtering
- URL: http://arxiv.org/abs/2102.10021v1
- Date: Fri, 19 Feb 2021 16:43:15 GMT
- Title: Neural Kalman Filtering
- Authors: Beren Millidge, Alexander Tschantz, Anil Seth, Christopher Buckley
- Abstract summary: We show that a gradient-descent approximation to the Kalman filter requires only local computations with variance weighted prediction errors.
We also show that it is possible under the same scheme to adaptively learn the dynamics model with a learning rule that corresponds directly to Hebbian plasticity.
- Score: 62.997667081978825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Kalman filter is a fundamental filtering algorithm that fuses noisy
sensory data, a previous state estimate, and a dynamics model to produce a
principled estimate of the current state. It assumes, and is optimal for,
linear models and white Gaussian noise. Due to its relative simplicity and
general effectiveness, the Kalman filter is widely used in engineering
applications. Since many sensory problems the brain faces are, at their core,
filtering problems, it is possible that the brain possesses neural circuitry
that implements equivalent computations to the Kalman filter. The standard
approach to Kalman filtering requires complex matrix computations that are
unlikely to be directly implementable in neural circuits. In this paper, we
show that a gradient-descent approximation to the Kalman filter requires only
local computations with variance weighted prediction errors. Moreover, we show
that it is possible under the same scheme to adaptively learn the dynamics
model with a learning rule that corresponds directly to Hebbian plasticity. We
demonstrate the performance of our method on a simple Kalman filtering task,
and propose a neural implementation of the required equations.
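To make the abstract's claim concrete, the following Python/NumPy snippet is a minimal sketch, assuming a linear-Gaussian model, of the kind of scheme described: the state estimate is refined by gradient descent on the negative log joint, so each update uses only variance-weighted prediction errors, and the dynamics matrix is adapted with a Hebbian-style rule (prediction error times presynaptic activity). It is an illustration, not the authors' exact equations; all symbols (A, C, Sigma_x, Sigma_y, the learning rates) are assumed for the example.
```python
# Minimal sketch (not the paper's exact equations) of Kalman-style filtering by
# gradient descent with variance-weighted prediction errors, plus a
# Hebbian-style update of the dynamics matrix.  All constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Assumed linear-Gaussian model:  x_t = A x_{t-1} + w_t,   y_t = C x_t + v_t
d = 2
A_true = np.array([[0.95, 0.10], [-0.10, 0.95]])
C = np.eye(d)
Sigma_x = 0.05 * np.eye(d)   # process-noise covariance
Sigma_y = 0.10 * np.eye(d)   # observation-noise covariance

def filter_step(y, x_prev, A, n_iters=50, eta=0.02, eta_A=0.005):
    """One filtering step: gradient descent on the negative log joint."""
    x_pred = A @ x_prev          # prediction from the dynamics model
    x = x_pred.copy()
    for _ in range(n_iters):
        eps_y = np.linalg.solve(Sigma_y, y - C @ x)    # variance-weighted sensory error
        eps_x = np.linalg.solve(Sigma_x, x - x_pred)   # variance-weighted dynamics error
        x = x + eta * (C.T @ eps_y - eps_x)            # local, error-driven update
    # Hebbian-style learning of A: prediction error times presynaptic activity
    eps_x = np.linalg.solve(Sigma_x, x - A @ x_prev)
    A = A + eta_A * np.outer(eps_x, x_prev)
    return x, A

# Toy run, starting from a perturbed dynamics model
A_est = A_true + 0.1 * rng.standard_normal((d, d))
x_true, x_est = np.array([1.0, 0.0]), np.zeros(d)
for t in range(200):
    x_true = A_true @ x_true + rng.multivariate_normal(np.zeros(d), Sigma_x)
    y = C @ x_true + rng.multivariate_normal(np.zeros(d), Sigma_y)
    x_est, A_est = filter_step(y, x_est, A_est)
print("estimation error:", np.linalg.norm(x_est - x_true))
```
In this form every update depends only on locally available quantities (prediction errors and current activities), which is the property the abstract highlights as making a neural implementation plausible.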
Related papers
- Uncertainty Representations in State-Space Layers for Deep Reinforcement Learning under Partial Observability [59.758009422067]
We propose a standalone Kalman filter layer that performs closed-form Gaussian inference in linear state-space models.
Similar to efficient linear recurrent layers, the Kalman filter layer processes sequential data using a parallel scan.
Experiments show that Kalman filter layers excel in problems where uncertainty reasoning is key for decision-making, outperforming other stateful models.
arXiv Detail & Related papers (2024-09-25T11:22:29Z)
- Tensor network square root Kalman filter for online Gaussian process regression [5.482420806459269]
We develop, for the first time, a tensor network square root Kalman filter, and apply it to high-dimensional online Gaussian process regression.
In our experiments, we demonstrate that our method is equivalent to the conventional Kalman filter when choosing a full-rank tensor network.
We also apply our method to a real-life system identification problem where we estimate $414$ parameters on a standard laptop.
arXiv Detail & Related papers (2024-09-05T06:38:27Z)
- Machine Learning and Kalman Filtering for Nanomechanical Mass Spectrometry [0.0]
We present enhancements and robust realizations for a Kalman filtering technique, augmented with maximum-likelihood estimation.
We describe learning techniques that are based on neural networks and boosted decision trees for temporal location and event size estimation.
arXiv Detail & Related papers (2023-06-01T11:22:04Z)
- Low-rank extended Kalman filtering for online learning of neural networks from streaming data [71.97861600347959]
We propose an efficient online approximate Bayesian inference algorithm for estimating the parameters of a nonlinear function from a potentially non-stationary data stream.
The method is based on the extended Kalman filter (EKF), but uses a novel low-rank plus diagonal decomposition of the posterior precision matrix.
In contrast to methods based on variational inference, our method is fully deterministic, and does not require step-size tuning.
arXiv Detail & Related papers (2023-05-31T03:48:49Z)
- Multiparticle Kalman filter for object localization in symmetric environments [69.81996031777717]
Two well-known classes of filtering algorithms to solve the localization problem are Kalman filter-based methods and particle filter-based methods.
We consider these classes, demonstrate their complementary properties, and propose a novel filtering algorithm that takes the best from both classes.
arXiv Detail & Related papers (2023-03-14T13:31:43Z)
- Neural optimal feedback control with local learning rules [67.5926699124528]
A major problem in motor control is understanding how the brain plans and executes proper movements in the face of delayed and noisy stimuli.
We introduce a novel online algorithm that combines adaptive Kalman filtering with a model-free control approach.
arXiv Detail & Related papers (2021-11-12T20:02:00Z)
- KalmanNet: Neural Network Aided Kalman Filtering for Partially Known Dynamics [84.18625250574853]
We present KalmanNet, a real-time state estimator that learns from data to carry out Kalman filtering under non-linear dynamics.
We numerically demonstrate that KalmanNet overcomes nonlinearities and model mismatch, outperforming classic filtering methods.
arXiv Detail & Related papers (2021-07-21T12:26:46Z)
- KaFiStO: A Kalman Filtering Framework for Stochastic Optimization [27.64040983559736]
We show that when training neural networks, the loss function changes over (iteration) time due to the randomized selection of a subset of the samples.
This randomization turns the optimization problem into a stochastic one.
We propose to consider the loss as a noisy observation with respect to some reference.
arXiv Detail & Related papers (2021-07-07T16:13:57Z)
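To make the "loss as a noisy observation" idea in the KaFiStO entry above concrete, here is a deliberately simplified scalar sketch that filters a noisy minibatch loss with a random-walk state model. It illustrates only the predict/update structure, not the KaFiStO framework itself, and all constants are assumed for the example.
```python
# Simplified illustration (not the KaFiStO algorithm itself): treat the noisy
# minibatch loss as a Kalman-filter observation of an underlying, slowly
# varying "true" loss level.  All constants below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

# Scalar random-walk model:  l_t = l_{t-1} + w_t,   observed loss = l_t + v_t
q = 1e-4        # assumed process-noise variance (how fast the true loss drifts)
r = 0.05        # assumed observation-noise variance (minibatch sampling noise)

l_hat, p = 1.0, 1.0          # initial estimate of the loss level and its variance
true_loss = 1.0
filtered = []
for t in range(500):
    true_loss *= 0.995                                   # toy "training curve"
    observed = true_loss + rng.normal(0.0, np.sqrt(r))   # noisy minibatch loss
    # Predict
    p = p + q
    # Update with the scalar Kalman gain
    k = p / (p + r)
    l_hat = l_hat + k * (observed - l_hat)
    p = (1.0 - k) * p
    filtered.append(l_hat)
```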
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.