Error-Covariance Analysis of Monocular Pose Estimation Using Total Least
Squares
- URL: http://arxiv.org/abs/2210.12157v1
- Date: Fri, 21 Oct 2022 01:46:18 GMT
- Title: Error-Covariance Analysis of Monocular Pose Estimation Using Total Least
Squares
- Authors: Saeed Maleki, John Crassidis, Yang Cheng, Matthias Schmid
- Abstract summary: This study presents a theoretical structure for the monocular pose estimation problem using total least squares.
The unit-vector line-of-sight observations of the features are extracted from the monocular camera images.
The attitude and position solutions are proven to reach the Cramér-Rao lower bound.
- Score: 5.710183643449906
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study presents a theoretical structure for the monocular pose estimation
problem using total least squares. The unit-vector line-of-sight
observations of the features are extracted from the monocular camera images.
First, the optimization framework is formulated for the pose estimation problem
with observation vectors given by unit vectors pointing from the camera
center-of-projection towards the image features. The attitude and
position solutions obtained via the derived optimization framework are proven
to reach the Cramér-Rao lower bound under the small-angle approximation of
the attitude errors. Specifically, the Fisher information matrix and the
Cramér-Rao bounds are evaluated and compared to the analytical derivations of
the error-covariance expressions to rigorously prove the optimality of the
estimates. The sensor data for the measurement model is provided through a
series of vector observations, and two fully populated noise-covariance
matrices are assumed for the body and reference observation data. The inverses
of these matrices appear as a series of weight matrices in the
cost function. The proposed solution is simulated in a Monte-Carlo framework
with 10,000 samples to validate the error-covariance analysis.
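The claim that the estimates attain the Cramér-Rao lower bound can be illustrated with a minimal sketch (not the paper's code; the linear Gaussian model, the fully populated noise covariance, and all variable names below are our own simplification). In such a model the weighted least-squares estimate is efficient, so a Monte-Carlo run analogous to the paper's 10,000-sample validation shows the sample error covariance matching the inverse Fisher information matrix:

```python
# Hypothetical sketch: for a linear Gaussian model y = H x + v, v ~ N(0, R),
# the weighted least-squares estimate x_hat = (H^T R^-1 H)^-1 H^T R^-1 y is
# efficient -- its error covariance equals the inverse Fisher information
# matrix (the Cramér-Rao lower bound). We verify this by Monte Carlo.
import numpy as np

rng = np.random.default_rng(0)
m, n, trials = 6, 3, 10_000

H = rng.standard_normal((m, n))            # observation matrix
A = rng.standard_normal((m, m))
R = A @ A.T + m * np.eye(m)                # fully populated noise covariance
R_inv = np.linalg.inv(R)

fim = H.T @ R_inv @ H                      # Fisher information matrix
crlb = np.linalg.inv(fim)                  # Cramér-Rao lower bound

x_true = np.array([1.0, -2.0, 0.5])
L = np.linalg.cholesky(R)                  # to draw correlated noise
errs = np.empty((trials, n))
for k in range(trials):
    y = H @ x_true + L @ rng.standard_normal(m)
    x_hat = np.linalg.solve(fim, H.T @ R_inv @ y)
    errs[k] = x_hat - x_true

sample_cov = errs.T @ errs / trials        # Monte-Carlo error covariance
# sample_cov agrees with crlb up to Monte-Carlo accuracy (~1/sqrt(trials))
```

The attitude part of the paper's problem is nonlinear, so the bound there holds only under the small-angle error approximation; this linear sketch shows the mechanics of the comparison, not the paper's derivation.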
Related papers
- Intrinsic Bayesian Cramér-Rao Bound with an Application to Covariance Matrix Estimation [49.67011673289242]
This paper presents a new performance bound for estimation problems where the parameter to estimate lies in a smooth manifold.
It induces a geometry for the parameter manifold, as well as an intrinsic notion of the estimation error measure.
arXiv Detail & Related papers (2023-11-08T15:17:13Z)
- Spectral Estimators for Structured Generalized Linear Models via Approximate Message Passing [28.91482208876914]
We consider the problem of parameter estimation in a high-dimensional generalized linear model.
Despite their wide use, a rigorous performance characterization, as well as a principled way to preprocess the data, are available only for unstructured designs.
arXiv Detail & Related papers (2023-08-28T11:49:23Z)
- Probabilistic Unrolling: Scalable, Inverse-Free Maximum Likelihood Estimation for Latent Gaussian Models [69.22568644711113]
We introduce probabilistic unrolling, a method that combines Monte Carlo sampling with iterative linear solvers to circumvent matrix inversions.
Our theoretical analyses reveal that unrolling and backpropagation through the iterations of the solver can accelerate gradient estimation for maximum likelihood estimation.
In experiments on simulated and real data, we demonstrate that probabilistic unrolling learns latent Gaussian models up to an order of magnitude faster than gradient EM, with minimal losses in model performance.
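A minimal sketch of the inverse-free idea (our own illustration, not the authors' implementation; function names are hypothetical): Monte-Carlo probe vectors combined with an iterative linear solver estimate trace terms of the form tr(A⁻¹B), which appear in Gaussian log-likelihood gradients, without ever forming A⁻¹:

```python
# Hedged sketch: Hutchinson-style probing plus conjugate gradients estimates
# tr(A^{-1} B) as E[z^T A^{-1} B z] over random sign vectors z, replacing the
# explicit matrix inversion with linear solves.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
    """Solve A x = b for symmetric positive-definite A without inverting A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def hutchinson_trace_inv(A, B, n_probes=400, rng=None):
    """Estimate tr(A^{-1} B) with Rademacher probes and CG solves."""
    rng = rng or np.random.default_rng(0)
    n = A.shape[0]
    total = 0.0
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=n)
        total += z @ conjugate_gradient(A, B @ z)   # z^T A^{-1} (B z)
    return total / n_probes
```

Since CG touches A only through matrix-vector products, the same sketch applies when A is available only implicitly, which is where the inverse-free framing pays off.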
arXiv Detail & Related papers (2023-06-05T21:08:34Z)
- Adaptive Estimation of Graphical Models under Total Positivity [13.47131471222723]
We consider the problem of estimating (diagonally dominant) M-matrices as precision matrices in Gaussian graphical models.
We propose an adaptive multiple-stage estimation method that refines the estimate.
We develop a unified framework based on the gradient projection method to solve the regularized problem.
arXiv Detail & Related papers (2022-10-27T14:21:27Z)
- Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z)
- A Model for Multi-View Residual Covariances based on Perspective Deformation [88.21738020902411]
We derive a model for the covariance of the visual residuals in multi-view SfM, odometry and SLAM setups.
We validate our model with synthetic and real data and integrate it into photometric and feature-based Bundle Adjustment.
arXiv Detail & Related papers (2022-02-01T21:21:56Z)
- Analysis of Truncated Orthogonal Iteration for Sparse Eigenvector Problems [78.95866278697777]
We propose two variants of the Truncated Orthogonal Iteration to compute multiple leading eigenvectors with sparsity constraints simultaneously.
We then apply our algorithms to solve the sparse principal component analysis problem for a wide range of test datasets.
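The flavor of such algorithms can be shown with a single-vector truncated power iteration (a simplified sketch; the paper's Truncated Orthogonal Iteration extends this to multiple eigenvectors simultaneously, and the names here are our own):

```python
# Illustrative sketch: alternate a power step with hard truncation to the k
# largest-magnitude entries, yielding a sparse approximation to the leading
# eigenvector of a symmetric matrix.
import numpy as np

def truncated_power_iteration(A, k, n_iter=100, rng=None):
    """Leading eigenvector of symmetric A with at most k nonzero entries."""
    rng = rng or np.random.default_rng(0)
    n = A.shape[0]
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        y = A @ x                              # power step
        keep = np.argsort(np.abs(y))[-k:]      # k largest-magnitude entries
        y_trunc = np.zeros(n)
        y_trunc[keep] = y[keep]                # enforce sparsity k
        x = y_trunc / np.linalg.norm(y_trunc)
    return x
```

The sparsity level k trades interpretability against approximation quality; the orthogonal-iteration variant additionally re-orthogonalizes a block of vectors each step.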
arXiv Detail & Related papers (2021-03-24T23:11:32Z)
- Asymptotic Errors for Teacher-Student Convex Generalized Linear Models (or: How to Prove Kabashima's Replica Formula) [23.15629681360836]
We prove an analytical formula for the reconstruction performance of convex generalized linear models.
We show that an analytical continuation may be carried out to extend the result to convex (non-strongly convex) problems.
We illustrate our claim with numerical examples on mainstream learning methods.
arXiv Detail & Related papers (2020-06-11T16:26:35Z)
- Semiparametric Nonlinear Bipartite Graph Representation Learning with Provable Guarantees [106.91654068632882]
We consider the bipartite graph and formalize its representation learning problem as a statistical estimation problem of parameters in a semiparametric exponential family distribution.
We show that the proposed objective is strongly convex in a neighborhood around the ground truth, so that a gradient descent-based method achieves linear convergence rate.
Our estimator is robust to any model misspecification within the exponential family, which is validated in extensive experiments.
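The linear-convergence claim for strongly convex objectives can be demonstrated on a generic strongly convex quadratic (a textbook sketch, not the paper's semiparametric objective):

```python
# Generic illustration: on a mu-strongly convex, L-smooth function, gradient
# descent with step 1/L contracts the error by a factor (1 - mu/L) per
# iteration -- linear (geometric) convergence to the minimizer.
import numpy as np

rng = np.random.default_rng(0)
n = 10
G = rng.standard_normal((n, n))
Q = G @ G.T + np.eye(n)                    # Hessian of f; smallest eig >= 1
x_star = rng.standard_normal(n)            # ground-truth minimizer
b = Q @ x_star                             # f(x) = 0.5 x^T Q x - b^T x

eigs = np.linalg.eigvalsh(Q)
mu, L_smooth = eigs[0], eigs[-1]           # strong convexity / smoothness
step = 1.0 / L_smooth

x = np.zeros(n)
errors = []
for _ in range(200):
    x = x - step * (Q @ x - b)             # gradient step
    errors.append(np.linalg.norm(x - x_star))
# each step multiplies the error by at most (1 - mu/L_smooth) < 1
```

In the paper's setting the same contraction argument is local: strong convexity is only guaranteed in a neighborhood of the ground truth, so the rate applies once the iterate enters that neighborhood.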
arXiv Detail & Related papers (2020-03-02T16:40:36Z)
- An Optimal Statistical and Computational Framework for Generalized Tensor Estimation [10.899518267165666]
This paper describes a flexible framework for low-rank tensor estimation problems.
It includes many important instances from applications in computational imaging, genomics, and network analysis.
arXiv Detail & Related papers (2020-02-26T01:54:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.