Multi-kernel Correntropy-based Orientation Estimation of IMUs: Gradient
Descent Methods
- URL: http://arxiv.org/abs/2304.06548v2
- Date: Wed, 11 Oct 2023 15:09:11 GMT
- Title: Multi-kernel Correntropy-based Orientation Estimation of IMUs: Gradient
Descent Methods
- Authors: Shilei Li, Lijing Li, Dawei Shi, Yunjiang Lou, Ling Shi
- Abstract summary: Two computationally efficient algorithms are proposed: correntropy-based gradient descent (CGD) and correntropy-based decoupled orientation estimation (CDOE).
Traditional methods rely on the mean squared error (MSE) criterion, making them vulnerable to external acceleration and magnetic interference.
New algorithms demonstrate significantly lower computational complexity than Kalman filter-based approaches.
- Score: 3.8286082196845466
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents two computationally efficient algorithms for the
orientation estimation of inertial measurement units (IMUs): the
correntropy-based gradient descent (CGD) and the correntropy-based decoupled
orientation estimation (CDOE). Traditional methods, such as gradient descent
(GD) and decoupled orientation estimation (DOE), rely on the mean squared error
(MSE) criterion, making them vulnerable to external acceleration and magnetic
interference. To address this issue, we demonstrate that the multi-kernel
correntropy loss (MKCL) is an optimal objective function for maximum likelihood
estimation (MLE) when the noise follows a type of heavy-tailed distribution. In
certain situations, the estimation error of the MKCL is bounded even in the
presence of arbitrarily large outliers. By replacing the standard MSE cost
function with MKCL, we develop the CGD and CDOE algorithms. We evaluate the
effectiveness of our proposed methods by comparing them with existing
algorithms in various situations. Experimental results indicate that our
proposed methods (CGD and CDOE) outperform their conventional counterparts (GD
and DOE), especially when faced with external acceleration and magnetic
disturbances. Furthermore, the new algorithms demonstrate significantly lower
computational complexity than Kalman filter-based approaches, making them
suitable for applications with low-cost microprocessors.
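The mechanics behind CGD are easy to sketch: keep the familiar gradient-descent (Madgwick-style) filter structure, but replace the MSE gradient J^T e with the MKCL gradient J^T (w * e), where Gaussian-kernel weights w shrink outlier channels toward zero. Below is a minimal accelerometer-only sketch of that substitution; the kernel bandwidths `sigmas`, the gain `beta`, and the numerical Jacobian are illustrative assumptions, not the paper's exact formulation (which also incorporates magnetometer measurements).

```python
import numpy as np

def mkcl(e, sigmas):
    """Multi-kernel correntropy loss: each residual channel gets its own
    Gaussian kernel bandwidth. Each term is bounded by sigma**2, so even
    an arbitrarily large outlier adds only a finite cost."""
    return np.sum(sigmas**2 * (1.0 - np.exp(-e**2 / (2.0 * sigmas**2))))

def mkc_weights(e, sigmas):
    """d(MKCL)/de = w * e with w = exp(-e**2 / (2 sigma**2)): near-MSE
    behavior for small residuals, smooth down-weighting of outliers."""
    return np.exp(-e**2 / (2.0 * sigmas**2))

def quat_mult(p, q):
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def accel_residual(q, acc):
    """Gravity direction predicted by quaternion q (w, x, y, z) minus the
    normalized accelerometer reading: the usual gradient-descent error."""
    w, x, y, z = q
    g_pred = np.array([2*(x*z - w*y), 2*(w*x + y*z), w*w - x*x - y*y + z*z])
    return g_pred - acc / np.linalg.norm(acc)

def cgd_step(q, gyr, acc, dt, beta=0.1, sigmas=np.array([0.3, 0.3, 0.3])):
    # Gyroscope propagation: q_dot = 0.5 * q (x) (0, gyr).
    q_dot = 0.5 * quat_mult(q, np.concatenate(([0.0], gyr)))
    # Correntropy-weighted accelerometer correction: J.T @ (w * e).
    # With w = 1 this reduces to the plain MSE gradient of GD.
    e = accel_residual(q, acc)
    w = mkc_weights(e, sigmas)
    J, h = np.zeros((3, 4)), 1e-6          # numerical Jacobian de/dq
    for i in range(4):
        dq = np.zeros(4); dq[i] = h
        J[:, i] = (accel_residual(q + dq, acc) - accel_residual(q - dq, acc)) / (2*h)
    grad = J.T @ (w * e)
    grad /= np.linalg.norm(grad) + 1e-12
    q = q + (q_dot - beta * grad) * dt     # fuse and renormalize
    return q / np.linalg.norm(q)
```

Because every kernel term saturates at sigma**2, the correction step cannot be dragged arbitrarily far by a transient external acceleration, which is the intuition behind the bounded-error claim in the abstract.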
Related papers
- Downlink MIMO Channel Estimation from Bits: Recoverability and Algorithm [47.7091447096969]
A major challenge lies in acquiring the downlink channel state information (CSI) at the base station (BS) from limited feedback sent by the user equipment (UE).
In this paper, a simple feedback framework is proposed, where a compression and Gaussian dithering-based quantization strategy is adopted at the UE side, and then a maximum likelihood estimator (MLE) is formulated at the BS side.
The algorithm is carefully designed to integrate a sophisticated harmonic retrieval (HR) solver as a subroutine, which turns out to be the key to effectively tackling this hard MLE problem.
arXiv Detail & Related papers (2024-11-25T02:15:01Z)
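For intuition on why Gaussian dithering leaves a quantity recoverable from coarse sign bits: with dither d ~ N(0, sigma^2), E[sign(x + d)] = 2*Phi(x/sigma) - 1, which is invertible in x. A toy scalar sketch follows; the multi-round averaging dequantizer is an illustration only, since the paper's BS-side recovery is an HR-assisted MLE rather than simple averaging.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

def one_bit_dithered(x, sigma, n_rounds):
    """UE side: quantize scalar x to sign bits, with fresh Gaussian
    dither drawn for every feedback round."""
    return np.sign(x + rng.normal(0.0, sigma, size=n_rounds))

def dequantize(bits, sigma):
    """BS side: invert E[sign(x + d)] = 2*Phi(x/sigma) - 1."""
    p = np.clip((bits.mean() + 1.0) / 2.0, 1e-6, 1.0 - 1e-6)
    return sigma * NormalDist().inv_cdf(p)

bits = one_bit_dithered(0.7, sigma=1.0, n_rounds=4000)
print(dequantize(bits, sigma=1.0))   # approximately 0.7
```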
- Computing Low-Entropy Couplings for Large-Support Distributions [53.00113867130712]
Minimum-entropy coupling has applications in areas such as causality and steganography.
Existing algorithms are either computationally intractable for large-support distributions or limited to specific distribution types.
This work addresses these limitations by unifying a prior family of iterative MEC approaches into a generalized partition-based formalism.
arXiv Detail & Related papers (2024-05-29T21:54:51Z)
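As a baseline for what the minimum-entropy coupling work improves on, here is the classic greedy heuristic: repeatedly match the largest remaining masses of the two marginals. It is not the paper's partition-based method and does not scale to large supports, but it shows the object being computed.

```python
import numpy as np

def greedy_coupling(p, q, tol=1e-12):
    """Greedy heuristic for a low-entropy coupling: repeatedly match the
    largest remaining masses of the two marginals. A classic baseline,
    not the paper's partition-based algorithm."""
    p, q = p.astype(float).copy(), q.astype(float).copy()
    M = np.zeros((len(p), len(q)))
    while p.sum() > tol and q.sum() > tol:
        i, j = int(p.argmax()), int(q.argmax())
        m = min(p[i], q[j])
        M[i, j] += m
        p[i] -= m
        q[j] -= m
    return M

def coupling_entropy(M):
    m = M[M > 1e-15]
    return float(-(m * np.log2(m)).sum())

M = greedy_coupling(np.array([0.5, 0.3, 0.2]), np.array([0.6, 0.4]))
print(M, coupling_entropy(M))   # rows sum to p, columns sum to q
```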
- Faster One-Sample Stochastic Conditional Gradient Method for Composite Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance reduction techniques.
arXiv Detail & Related papers (2022-02-26T19:10:48Z)
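A minimal sketch of the one-sample mechanism on a toy smooth objective (least squares over the probability simplex; the paper's setting additionally covers a non-smooth composite term): keep a table of per-sample gradients, refresh a single entry per iteration, and drive an ordinary Frank-Wolfe step with the table average. The objective, step-size schedule, and initialization below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sag_frank_wolfe(A, b, n_iters=2000):
    """Frank-Wolfe over the probability simplex driven by a stochastic
    average gradient (SAG) estimator: one per-sample gradient is
    refreshed per iteration and the running table average is used."""
    n, d = A.shape
    x = np.ones(d) / d                      # feasible start
    G = np.zeros((n, d))                    # gradient table (SAG state)
    for t in range(1, n_iters + 1):
        i = rng.integers(n)
        G[i] = A[i] * (A[i] @ x - b[i])     # grad of 0.5*(a_i @ x - b_i)**2
        g = G.mean(axis=0)                  # SAG gradient estimate
        s = np.zeros(d)
        s[np.argmin(g)] = 1.0               # simplex linear minimization oracle
        eta = 2.0 / (t + 2.0)               # classic FW step size
        x = (1 - eta) * x + eta * s
    return x

A = rng.normal(size=(200, 10))
x_true = np.zeros(10); x_true[3] = 1.0
b = A @ x_true
print(sag_frank_wolfe(A, b).round(2))       # mass concentrates near index 3
```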
- Low-Discrepancy Points via Energetic Variational Inference [5.936959130012709]
We propose a deterministic variational inference approach and generate low-discrepancy points by minimizing the kernel discrepancy.
We name the resulting algorithm EVI-MMD and demonstrate it through examples in which the target distribution is fully specified.
Its performance is satisfactory compared to alternative methods in the applications of distribution approximation, numerical integration, and generative learning.
arXiv Detail & Related papers (2021-11-21T03:09:07Z)
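A bare-bones rendition of "generate points by minimizing kernel discrepancy": plain gradient descent on the squared MMD between the particles and draws from the target, under a Gaussian kernel. The bandwidth, learning rate, and use of target samples are simplifying assumptions; EVI-MMD itself handles fully specified densities and derives its flow from an energetic variational principle.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_k(x, Y, h):
    """Gradient w.r.t. x of the Gaussian kernel k(x, y) =
    exp(-||x - y||^2 / (2 h^2)), stacked over every row y of Y."""
    diff = x - Y                                  # (m, d)
    k = np.exp(-(diff**2).sum(axis=1) / (2*h*h))  # (m,)
    return (-diff / (h*h)) * k[:, None]           # (m, d)

def mmd_descent(X, Y, h=0.5, lr=0.5, n_iters=500):
    """Gradient descent on the (V-statistic) squared MMD between the
    particle set X and target samples Y; only X moves."""
    n, m = len(X), len(Y)
    for _ in range(n_iters):
        G = np.zeros_like(X)
        for i in range(n):
            G[i] = (2/(n*n)) * grad_k(X[i], X, h).sum(axis=0) \
                 - (2/(n*m)) * grad_k(X[i], Y, h).sum(axis=0)
        X = X - lr * G
    return X

Y = rng.normal(size=(500, 2))                     # target: N(0, I) samples
X = mmd_descent(rng.uniform(-3, 3, size=(100, 2)), Y)
print(X.mean(axis=0), X.std(axis=0))              # roughly 0 and 1
```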
- Fast Doubly-Adaptive MCMC to Estimate the Gibbs Partition Function with Weak Mixing Time Bounds [7.428782604099876]
A major obstacle to practical applications of Gibbs distributions is the need to estimate their partition functions.
We present a novel method for reducing the computational complexity of rigorously estimating the partition functions.
arXiv Detail & Related papers (2021-11-14T15:42:02Z)
- Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints.
A second motivation for the bias-constrained estimator (BCE) is in applications where multiple estimates of the same unknown are averaged for improved performance.
arXiv Detail & Related papers (2021-10-24T10:23:51Z)
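One plausible reading of the bias constraint, sketched as a loss function: alongside the usual MSE, penalize the squared mean error within groups of samples that share the same true parameter, nudging the network toward unbiased estimates whose group averages improve with group size. The grouping convention and penalty weight `lam` are assumptions, not the paper's exact objective.

```python
import numpy as np

def bce_style_loss(pred, theta, groups, lam=5.0):
    """MSE plus a squared-bias penalty: within each group of samples that
    share the same true parameter value, the *average* error is pushed
    toward zero. lam and the grouping convention are illustrative."""
    mse = np.mean((pred - theta) ** 2)
    bias2 = np.mean([np.mean(pred[groups == g] - theta[groups == g]) ** 2
                     for g in np.unique(groups)])
    return mse + lam * bias2
```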
- Differentiable Annealed Importance Sampling and the Perils of Gradient Noise [68.44523807580438]
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation.
Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective.
We propose a differentiable algorithm by abandoning Metropolis-Hastings steps, which further unlocks mini-batch computation.
arXiv Detail & Related papers (2021-07-21T17:10:14Z)
- Stochastic Gradient MCMC with Multi-Armed Bandit Tuning [2.2559617939136505]
We propose a novel bandit-based algorithm that tunes SGMCMC hyperparameters to maximize the accuracy of the posterior approximation.
We support our results with experiments on both simulated and real datasets, and find that this method is practical for a wide range of application areas.
arXiv Detail & Related papers (2021-05-27T11:00:31Z)
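A toy version of the tuning loop described above: UCB1 chooses among candidate SGLD step sizes, with reward measured against a known 1D Gaussian target so the accuracy of the posterior approximation is directly computable. The reward (moment error), arm grid, and chain length are illustrative stand-ins for the paper's actual metric and SGMCMC samplers.

```python
import numpy as np

rng = np.random.default_rng(0)

mu, sig = 2.0, 1.5                          # known target N(mu, sig^2)
grad_logp = lambda x: -(x - mu) / sig**2

def run_sgld(eps, n=400):
    """Short SGLD chain; returns a crude accuracy reward (negative error
    in the first two moments), a stand-in for the paper's metric."""
    x, xs = 0.0, []
    for _ in range(n):
        x += 0.5 * eps * grad_logp(x) + np.sqrt(eps) * rng.normal()
        xs.append(x)
    xs = np.array(xs[n // 2:])              # drop burn-in
    return -abs(xs.mean() - mu) - abs(xs.std() - sig)

arms = [0.001, 0.01, 0.05, 0.2, 1.0]        # candidate step sizes
counts = np.zeros(len(arms))
values = np.zeros(len(arms))
for t in range(1, 101):                     # UCB1 over noisy rewards
    if t <= len(arms):
        a = t - 1                           # play each arm once first
    else:
        a = int(np.argmax(values + np.sqrt(2 * np.log(t) / counts)))
    r = run_sgld(arms[a])
    counts[a] += 1
    values[a] += (r - values[a]) / counts[a]
print("selected step size:", arms[int(np.argmax(counts))])
```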
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
- A Robust Matching Pursuit Algorithm Using Information Theoretic Learning [37.968665739578185]
A new orthogonal matching pursuit (OMP) algorithm is developed based on information theoretic learning (ITL).
The experimental results on both simulated and real-world data consistently demonstrate the superiority of the proposed OMP algorithm in data recovery, image reconstruction, and classification.
arXiv Detail & Related papers (2020-05-10T01:36:00Z)
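The flavor of the approach above, which echoes the correntropy theme of the main paper: keep OMP's greedy atom selection, but replace the inner least-squares solve with an IRLS loop using Welsch (correntropy-induced) weights so that residual outliers stop dominating the fit. The kernel bandwidth `sigma` and the iteration counts are illustrative, not the paper's exact ITL criterion.

```python
import numpy as np

def robust_omp(A, y, k, sigma=0.5, irls_iters=10):
    """Orthogonal matching pursuit with a correntropy-style inner solve:
    the least-squares fit on the active support is replaced by IRLS with
    Welsch weights w = exp(-e**2 / (2 sigma**2)), so gross residual
    outliers are down-weighted instead of squared."""
    n, d = A.shape
    support, r = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))  # atom most correlated with residual
        if j not in support:
            support.append(j)
        As = A[:, support]
        x_s = np.linalg.lstsq(As, y, rcond=None)[0]   # warm start
        for _ in range(irls_iters):
            e = y - As @ x_s
            w = np.exp(-e**2 / (2 * sigma**2))        # correntropy weights
            x_s = np.linalg.solve(As.T @ (w[:, None] * As)
                                  + 1e-9 * np.eye(len(support)),
                                  As.T @ (w * y))
        r = y - As @ x_s
    x = np.zeros(d)
    x[support] = x_s
    return x
```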
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.