Neural Kalman Filtering
- URL: http://arxiv.org/abs/2102.10021v1
- Date: Fri, 19 Feb 2021 16:43:15 GMT
- Title: Neural Kalman Filtering
- Authors: Beren Millidge, Alexander Tschantz, Anil Seth, Christopher Buckley
- Abstract summary: We show that a gradient-descent approximation to the Kalman filter requires only local computations with variance weighted prediction errors.
We also show that it is possible under the same scheme to adaptively learn the dynamics model with a learning rule that corresponds directly to Hebbian plasticity.
- Score: 62.997667081978825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Kalman filter is a fundamental filtering algorithm that fuses noisy
sensory data, a previous state estimate, and a dynamics model to produce a
principled estimate of the current state. It assumes, and is optimal for,
linear models and white Gaussian noise. Due to its relative simplicity and
general effectiveness, the Kalman filter is widely used in engineering
applications. Since many sensory problems the brain faces are, at their core,
filtering problems, it is possible that the brain possesses neural circuitry
that implements equivalent computations to the Kalman filter. The standard
approach to Kalman filtering requires complex matrix computations that are
unlikely to be directly implementable in neural circuits. In this paper, we
show that a gradient-descent approximation to the Kalman filter requires only
local computations with variance weighted prediction errors. Moreover, we show
that it is possible under the same scheme to adaptively learn the dynamics
model with a learning rule that corresponds directly to Hebbian plasticity. We
demonstrate the performance of our method on a simple Kalman filtering task,
and propose a neural implementation of the required equations.
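The scheme described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the system matrices, noise covariances, step sizes, and iteration counts below are illustrative assumptions. The state estimate is refined by gradient descent on two variance-weighted (precision-weighted) prediction errors, and the dynamics matrix is adapted with a Hebbian-style rule (postsynaptic prediction error times presynaptic activity):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D linear-Gaussian system (illustrative values):
#   x_t = A_true x_{t-1} + w,   y_t = C x_t + v
A_true = np.array([[0.9, 0.1], [-0.1, 0.9]])
C = np.eye(2)
Q = 0.1 * np.eye(2)    # process-noise covariance (Sigma_x)
R = 0.05 * np.eye(2)   # observation-noise covariance (Sigma_y)
Q_inv, R_inv = np.linalg.inv(Q), np.linalg.inv(R)

A_hat = np.zeros((2, 2))      # dynamics model, learned online
x_hat = np.zeros(2)           # current state estimate
x_true = np.array([1.0, 0.0])
eta_x, eta_A, n_grad = 0.05, 0.02, 50  # step sizes, inner iterations
errors = []

for t in range(200):
    # Simulate the true system.
    x_true = A_true @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    y = C @ x_true + rng.multivariate_normal(np.zeros(2), R)

    # Inference: gradient descent on the state estimate, driven only by
    # local, variance-weighted prediction errors.
    x_prev = x_hat.copy()
    x_hat = A_hat @ x_prev  # start from the dynamics prediction
    for _ in range(n_grad):
        eps_y = R_inv @ (y - C @ x_hat)           # sensory prediction error
        eps_x = Q_inv @ (x_hat - A_hat @ x_prev)  # dynamics prediction error
        x_hat += eta_x * (C.T @ eps_y - eps_x)

    # Learning: Hebbian-style update, prediction error x presynaptic activity.
    eps_x = Q_inv @ (x_hat - A_hat @ x_prev)
    A_hat += eta_A * np.outer(eps_x, x_prev)

    errors.append(np.linalg.norm(x_hat - x_true))
```

At the fixed point of the inner loop the estimate is the precision-weighted blend of the dynamics prediction and the observation, which is the same fusion the exact Kalman update performs for a linear-Gaussian model, up to the propagated covariance, which this approximation does not track.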
Related papers
- Machine Learning and Kalman Filtering for Nanomechanical Mass Spectrometry [0.0]
We present enhancements and robust realizations for a Kalman filtering technique, augmented with maximum-likelihood estimation.
We describe learning techniques that are based on neural networks and boosted decision trees for temporal location and event size estimation.
arXiv Detail & Related papers (2023-06-01T11:22:04Z)
- Low-rank extended Kalman filtering for online learning of neural networks from streaming data [71.97861600347959]
We propose an efficient online approximate Bayesian inference algorithm for estimating the parameters of a nonlinear function from a potentially non-stationary data stream.
The method is based on the extended Kalman filter (EKF), but uses a novel low-rank plus diagonal decomposition of the posterior matrix.
In contrast to methods based on variational inference, our method is fully deterministic, and does not require step-size tuning.
arXiv Detail & Related papers (2023-05-31T03:48:49Z)
- Multiparticle Kalman filter for object localization in symmetric environments [69.81996031777717]
Two well-known classes of filtering algorithms to solve the localization problem are Kalman filter-based methods and particle filter-based methods.
We consider these classes, demonstrate their complementary properties, and propose a novel filtering algorithm that takes the best from two classes.
arXiv Detail & Related papers (2023-03-14T13:31:43Z)
- A Hybrid Model and Learning-Based Adaptive Navigation Filter [0.0]
We propose a hybrid model and learning-based adaptive navigation filter.
We show that the proposed method achieves a 25% improvement in position error.
arXiv Detail & Related papers (2022-06-14T17:10:47Z)
- Neural optimal feedback control with local learning rules [67.5926699124528]
A major problem in motor control is understanding how the brain plans and executes proper movements in the face of delayed and noisy stimuli.
We introduce a novel online algorithm which combines adaptive Kalman filtering with a model free control approach.
arXiv Detail & Related papers (2021-11-12T20:02:00Z)
- Unsupervised Learned Kalman Filtering [84.18625250574853]
Unsupervised adaptation is achieved by exploiting the hybrid model-based/data-driven architecture of KalmanNet.
We numerically demonstrate that when the noise statistics are unknown, unsupervised KalmanNet achieves a similar performance to KalmanNet with supervised learning.
arXiv Detail & Related papers (2021-10-18T04:04:09Z)
- KalmanNet: Neural Network Aided Kalman Filtering for Partially Known Dynamics [84.18625250574853]
We present KalmanNet, a real-time state estimator that learns from data to carry out Kalman filtering under non-linear dynamics.
We numerically demonstrate that KalmanNet overcomes nonlinearities and model mismatch, outperforming classic filtering methods.
arXiv Detail & Related papers (2021-07-21T12:26:46Z)
- KaFiStO: A Kalman Filtering Framework for Stochastic Optimization [27.64040983559736]
We show that when training neural networks the loss function changes over (iteration) time due to the randomized selection of a subset of the samples.
This randomization turns the optimization problem into a stochastic one.
We propose to consider the loss as a noisy observation with respect to some reference.
arXiv Detail & Related papers (2021-07-07T16:13:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.