Online Incremental Non-Gaussian Inference for SLAM Using Normalizing
Flows
- URL: http://arxiv.org/abs/2110.00876v1
- Date: Sat, 2 Oct 2021 21:07:05 GMT
- Title: Online Incremental Non-Gaussian Inference for SLAM Using Normalizing
Flows
- Authors: Qiangqiang Huang, Can Pu, Kasra Khosoussi, David M. Rosen, Dehann
Fourie, Jonathan P. How, John J. Leonard
- Abstract summary: NF-iSAM exploits the expressive power of neural networks to model normalizing flows that can accurately approximate the joint posterior of highly nonlinear and non-Gaussian factor graphs.
We demonstrate the performance of NF-iSAM and compare it against state-of-the-art algorithms such as iSAM2 (Gaussian) and mm-iSAM (non-Gaussian) in synthetic and real range-only SLAM datasets.
- Score: 34.297172076718354
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a novel non-Gaussian inference algorithm, Normalizing
Flow iSAM (NF-iSAM), for solving SLAM problems with non-Gaussian factors and/or
nonlinear measurement models. NF-iSAM exploits the expressive power of neural
networks to model normalizing flows that can accurately approximate the joint
posterior of highly nonlinear and non-Gaussian factor graphs. By leveraging the
Bayes tree, NF-iSAM is able to exploit the sparsity structure of SLAM, thus
enabling efficient incremental updates similar to iSAM2, although in the more
challenging non-Gaussian setting. We demonstrate the performance of NF-iSAM and
compare it against state-of-the-art algorithms such as iSAM2 (Gaussian) and
mm-iSAM (non-Gaussian) in synthetic and real range-only SLAM datasets with data
association ambiguity.
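  To make the core idea concrete, the sketch below (an illustrative example, not the
  authors' NF-iSAM implementation) fits a small RealNVP-style normalizing flow to samples
  from the ring-shaped, non-Gaussian posterior induced by a single range-only measurement.
  The landmark position, range value, layer count, and network sizes are assumptions made
  only for this example.

```python
# Minimal sketch: fit a normalizing flow to a non-Gaussian (ring-shaped) posterior.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy range-only posterior: landmark at the origin, noisy range measurement ~ 2.0.
angles = 2 * torch.pi * torch.rand(4096)
radii = 2.0 + 0.1 * torch.randn(4096)
target = torch.stack([radii * torch.cos(angles), radii * torch.sin(angles)], dim=1)

class Coupling(nn.Module):
    """Affine coupling layer: rescales/shifts one coordinate conditioned on the other."""
    def __init__(self, flip):
        super().__init__()
        self.flip = flip
        self.net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 2))

    def forward(self, x):
        x0, x1 = (x[:, 1:], x[:, :1]) if self.flip else (x[:, :1], x[:, 1:])
        s, t = self.net(x0).chunk(2, dim=1)
        z1 = x1 * torch.exp(s) + t
        z = torch.cat([z1, x0], dim=1) if self.flip else torch.cat([x0, z1], dim=1)
        return z, s.sum(dim=1)          # transformed sample and log|det Jacobian|

flow = nn.ModuleList([Coupling(flip=bool(i % 2)) for i in range(6)])
base = torch.distributions.MultivariateNormal(torch.zeros(2), torch.eye(2))
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)

for step in range(2000):                # maximum-likelihood fit of the flow to the samples
    z, logdet = target, torch.zeros(target.shape[0])
    for layer in flow:
        z, ld = layer(z)
        logdet = logdet + ld
    loss = -(base.log_prob(z) + logdet).mean()   # negative log-likelihood under the flow
    opt.zero_grad()
    loss.backward()
    opt.step()
```

  The sketch shows only the density-approximation step; per the abstract, NF-iSAM couples
  such flow fits with the Bayes tree so that the sparsity of the factor graph can be
  exploited for incremental updates.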
Related papers
- ALWNN Empowered Automatic Modulation Classification: Conquering Complexity and Scarce Sample Conditions [24.59462798452397]
This paper proposes an automatic modulation classification model based on the Adaptive Lightweight Wavelet Neural Network (ALWNN) and a few-shot framework (MALWNN).
The ALWNN model, by integrating the adaptive wavelet neural network and depthwise separable convolution, reduces the number of model parameters and the computational complexity.
Experiments with MALWNN show its superior performance in few-shot learning scenarios compared to other algorithms.
arXiv Detail & Related papers (2025-03-24T06:14:33Z)
- Fast Graph Sharpness-Aware Minimization for Enhancing and Accelerating Few-Shot Node Classification [53.727688136434345]
Graph Neural Networks (GNNs) have shown superior performance in node classification.
We present Fast Graph Sharpness-Aware Minimization (FGSAM) that integrates the rapid training of Multi-Layer Perceptrons with the superior performance of GNNs.
Our proposed algorithm outperforms the standard SAM with lower computational costs in FSNC tasks.
arXiv Detail & Related papers (2024-10-22T09:33:29Z)
- LoSAM: Local Search in Additive Noise Models with Unmeasured Confounders, a Top-Down Global Discovery Approach [2.4305626489408465]
We introduce Local Search in Additive Noise Models (LoSAM).
LoSAM generalizes an existing nonlinear method that leverages local causal substructures to the general additive noise setting.
We show that LoSAM improves runtime and efficiency by exploiting new substructures.
arXiv Detail & Related papers (2024-10-15T16:28:55Z)
- Graph Regularized NMF with L20-norm for Unsupervised Feature Learning [6.894518335015327]
Graph Regularized Non-negative Matrix Factorization (GNMF) is an extension of NMF that incorporates graph regularization constraints.
We propose an unsupervised feature learning framework based on GNMF and devise an algorithm based on PALM.
arXiv Detail & Related papers (2024-03-16T12:10:01Z)
- AdvNF: Reducing Mode Collapse in Conditional Normalising Flows using Adversarial Learning [1.644043499620662]
Explicit generators, such as Normalising Flows (NFs), have been extensively applied to get unbiased samples from target distributions.
We study central problems in conditional NFs, such as high variance, mode collapse, and poor data efficiency.
We propose adversarial training for NFs to ameliorate these problems.
arXiv Detail & Related papers (2024-01-29T08:13:51Z)
- Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth Soft-Thresholding [57.71603937699949]
We study optimization guarantees, i.e., achieving near-zero training loss as the number of learning epochs increases.
We show that the threshold on the number of training samples increases with the increase in the network width.
arXiv Detail & Related papers (2023-09-12T13:03:47Z)
- PI-NLF: A Proportional-Integral Approach for Non-negative Latent Factor Analysis [9.087387628717952]
A non-negative latent factor (NLF) model performs efficient representation learning on a high-dimensional and incomplete (HDI) matrix.
A PI-NLF model outperforms the state-of-the-art models in both computational efficiency and estimation accuracy for missing data of an HDI matrix.
arXiv Detail & Related papers (2022-05-05T12:04:52Z)
- On the Double Descent of Random Features Models Trained with SGD [78.0918823643911]
We study properties of random features (RF) regression in high dimensions optimized by stochastic gradient descent (SGD).
We derive precise non-asymptotic error bounds of RF regression under both constant and adaptive step-size SGD setting.
We observe the double descent phenomenon both theoretically and empirically.
arXiv Detail & Related papers (2021-10-13T17:47:39Z)
- The Interplay Between Implicit Bias and Benign Overfitting in Two-Layer Linear Networks [51.1848572349154]
Neural network models that perfectly fit noisy data can generalize well to unseen test data.
We consider interpolating two-layer linear neural networks trained with gradient flow on the squared loss and derive bounds on the excess risk.
arXiv Detail & Related papers (2021-08-25T22:01:01Z)
- Learning Generative Prior with Latent Space Sparsity Constraints [25.213673771175692]
It has been argued that the distribution of natural images does not lie in a single manifold but rather in a union of several submanifolds.
We propose a sparsity-driven latent space sampling (SDLSS) framework and develop a proximal meta-learning (PML) algorithm to enforce sparsity in the latent space.
The results demonstrate that for a higher degree of compression, the SDLSS method is more efficient than the state-of-the-art method.
arXiv Detail & Related papers (2021-05-25T14:12:04Z)
- Learning Likelihoods with Conditional Normalizing Flows [54.60456010771409]
Conditional normalizing flows (CNFs) are efficient in sampling and inference.
We present a study of CNFs in which the base-density-to-output-space mapping is conditioned on an input x, to model conditional densities p(y|x) (see the sketch below the list).
arXiv Detail & Related papers (2019-11-29T19:17:58Z)
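  As an illustration of the conditioning mechanism described in the last entry above (an
  assumed example, not code from the cited paper), the sketch below implements a single
  conditional affine coupling layer whose scale/shift network also receives the
  conditioning input x, so the resulting flow defines a conditional density p(y|x). The
  dimensions and layer sizes are arbitrary choices for the example.

```python
# Minimal sketch of a conditional affine coupling layer for modeling p(y|x).
import torch
import torch.nn as nn

class ConditionalCoupling(nn.Module):
    def __init__(self, x_dim, y_dim=2, hidden=64):
        super().__init__()
        # conditions on x and on the untransformed half of y
        self.net = nn.Sequential(
            nn.Linear(x_dim + y_dim // 2, hidden), nn.Tanh(),
            nn.Linear(hidden, y_dim),            # outputs (scale, shift) for the other half
        )

    def forward(self, y, x):
        y0, y1 = y.chunk(2, dim=1)
        s, t = self.net(torch.cat([x, y0], dim=1)).chunk(2, dim=1)
        z1 = y1 * torch.exp(s) + t               # affine transform of the second half
        return torch.cat([y0, z1], dim=1), s.sum(dim=1)   # z and log|det Jacobian|

def conditional_log_prob(layer, y, x):
    """Log-density of y given x under a standard-normal base distribution."""
    z, logdet = layer(y, x)
    base = torch.distributions.Normal(0.0, 1.0)
    return base.log_prob(z).sum(dim=1) + logdet

layer = ConditionalCoupling(x_dim=3)
y = torch.randn(8, 2)
x = torch.randn(8, 3)
print(conditional_log_prob(layer, y, x).shape)   # torch.Size([8])
```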