Split-KalmanNet: A Robust Model-Based Deep Learning Approach for SLAM
- URL: http://arxiv.org/abs/2210.09636v1
- Date: Tue, 18 Oct 2022 07:10:38 GMT
- Title: Split-KalmanNet: A Robust Model-Based Deep Learning Approach for SLAM
- Authors: Geon Choi, Jeonghun Park, Nir Shlezinger, Yonina C. Eldar, Namyoon Lee
- Abstract summary: Simultaneous localization and mapping (SLAM) is a method that constructs a map of an unknown environment and localizes the position of a moving agent on the map simultaneously.
The extended Kalman filter (EKF) has been widely adopted as a low-complexity solution for online SLAM.
We present a robust EKF algorithm using the power of deep learning for online SLAM, referred to as Split-KalmanNet.
- Score: 101.32324781612172
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Simultaneous localization and mapping (SLAM) is a method that constructs a
map of an unknown environment and localizes the position of a moving agent on
the map simultaneously. The extended Kalman filter (EKF) has been widely adopted as
a low complexity solution for online SLAM, which relies on a motion and
measurement model of the moving agent. In practice, however, acquiring precise
information about these models is very challenging, and the model mismatch
effect causes severe performance loss in SLAM. In this paper, inspired by the
recently proposed KalmanNet, we present a robust EKF algorithm using the power
of deep learning for online SLAM, referred to as Split-KalmanNet. The key idea
of Split-KalmanNet is to compute the Kalman gain using the Jacobian matrix of a
measurement function and two recurrent neural networks (RNNs). The two RNNs
independently learn the covariance matrices for a prior state estimate and the
innovation from data. The proposed split structure in the computation of the
Kalman gain makes it possible to compensate for state and measurement model mismatch
effects independently. Numerical simulation results verify that Split-KalmanNet
outperforms the traditional EKF and the state-of-the-art KalmanNet algorithm in
various model mismatch scenarios.
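The split gain described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the two covariance matrices that Split-KalmanNet learns with RNNs are stubbed here as fixed positive-definite matrices, and the range-bearing measurement model and landmark position are hypothetical toy choices.

```python
import numpy as np

def split_kalman_gain(H, sigma_prior, sigma_innov):
    """Kalman gain K = Sigma_prior @ H^T @ Sigma_innov^{-1}.

    H           : (m, n) Jacobian of the measurement function at the prior
    sigma_prior : (n, n) covariance of the prior state estimate (RNN #1 in the paper)
    sigma_innov : (m, m) covariance of the innovation (RNN #2 in the paper)
    """
    return sigma_prior @ H.T @ np.linalg.inv(sigma_innov)

def update(x_prior, z, h, H, sigma_prior, sigma_innov):
    """Standard EKF-style state update using the split gain."""
    K = split_kalman_gain(H, sigma_prior, sigma_innov)
    return x_prior + K @ (z - h(x_prior))

# Toy example: a 2-D position observed through range and bearing to one landmark.
landmark = np.array([3.0, 2.0])   # hypothetical landmark position
x_prior = np.array([1.0, 1.0])    # prior position estimate

def h(x):
    d = landmark - x
    return np.array([np.hypot(d[0], d[1]), np.arctan2(d[1], d[0])])

def jacobian_h(x):
    d = landmark - x
    r2 = d @ d
    r = np.sqrt(r2)
    return np.array([[-d[0] / r,  -d[1] / r],
                     [ d[1] / r2, -d[0] / r2]])

H = jacobian_h(x_prior)
sigma_prior = 0.5 * np.eye(2)                           # stand-in for RNN output
sigma_innov = H @ sigma_prior @ H.T + 0.1 * np.eye(2)   # stand-in for RNN output
z = h(np.array([1.2, 0.9]))                             # measurement from a "true" state
x_post = update(x_prior, z, h, H, sigma_prior, sigma_innov)
```

Because the two covariances enter the gain independently, a mismatch in the motion model (affecting the prior covariance) and a mismatch in the measurement model (affecting the innovation covariance) can be corrected separately, which is the robustness argument the abstract makes.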
Related papers
- Uncertainty Representations in State-Space Layers for Deep Reinforcement Learning under Partial Observability [59.758009422067]
We propose a standalone Kalman filter layer that performs closed-form Gaussian inference in linear state-space models.
Similar to efficient linear recurrent layers, the Kalman filter layer processes sequential data using a parallel scan.
Experiments show that Kalman filter layers excel in problems where uncertainty reasoning is key for decision-making, outperforming other stateful models.
arXiv Detail & Related papers (2024-09-25T11:22:29Z)
- A domain decomposition-based autoregressive deep learning model for unsteady and nonlinear partial differential equations [2.7755345520127936]
We propose a domain-decomposition-based deep learning (DL) framework, named CoMLSim, for accurately modeling unsteady and nonlinear partial differential equations (PDEs).
The framework consists of two key components: (a) a convolutional neural network (CNN)-based autoencoder architecture and (b) an autoregressive model composed of fully connected layers.
arXiv Detail & Related papers (2024-08-26T17:50:47Z)
- KFD-NeRF: Rethinking Dynamic NeRF with Kalman Filter [49.85369344101118]
We introduce KFD-NeRF, a novel dynamic neural radiance field integrated with an efficient and high-quality motion reconstruction framework based on Kalman filtering.
Our key idea is to model the dynamic radiance field as a dynamic system whose temporally varying states are estimated based on two sources of knowledge: observations and predictions.
Our KFD-NeRF demonstrates similar or even superior reconstruction performance within comparable computational time, and achieves state-of-the-art view synthesis quality with thorough training.
arXiv Detail & Related papers (2024-07-18T05:48:24Z)
- Unsupervised Learned Kalman Filtering [84.18625250574853]
Unsupervised adaptation is achieved by exploiting the hybrid model-based/data-driven architecture of KalmanNet.
We numerically demonstrate that, when the noise statistics are unknown, unsupervised KalmanNet achieves performance similar to that of its supervised counterpart.
arXiv Detail & Related papers (2021-10-18T04:04:09Z)
- KalmanNet: Neural Network Aided Kalman Filtering for Partially Known Dynamics [84.18625250574853]
We present KalmanNet, a real-time state estimator that learns from data to carry out Kalman filtering under non-linear dynamics.
We numerically demonstrate that KalmanNet overcomes nonlinearities and model mismatch, outperforming classic filtering methods.
arXiv Detail & Related papers (2021-07-21T12:26:46Z)
- Accurate and efficient Simulation of very high-dimensional Neural Mass Models with distributed-delay Connectome Tensors [0.23453441553817037]
This paper introduces methods that efficiently integrate high-dimensional Neural Mass Models (NMMs) specified by two essential components.
The first is the set of nonlinear Random Differential Equations of the dynamics of each neural mass.
The second is the highly sparse three-dimensional Connectome (CT) that encodes the strength of the connections and the delays of information transfer along the axons of each connection.
arXiv Detail & Related papers (2020-09-16T05:55:17Z)
- PointNetKL: Deep Inference for GICP Covariance Estimation in Bathymetric SLAM [2.064612766965483]
We propose a new approach to estimate the uncertainty of point cloud registration using PointNet.
We train this network using the KL divergence between the learned uncertainty distribution and one computed by the Monte Carlo method as the loss.
We test the performance of the presented general model by applying it to our target use case: SLAM with an autonomous underwater vehicle.
arXiv Detail & Related papers (2020-03-24T15:44:07Z)
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, a truncated max-product Belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.