Intrinsic Bayesian Cramér-Rao Bound with an Application to Covariance Matrix Estimation
- URL: http://arxiv.org/abs/2311.04748v3
- Date: Mon, 9 Sep 2024 09:32:15 GMT
- Title: Intrinsic Bayesian Cramér-Rao Bound with an Application to Covariance Matrix Estimation
- Authors: Florent Bouchard, Alexandre Renaux, Guillaume Ginolhac, Arnaud Breloy
- Abstract summary: This paper presents a new performance bound for estimation problems where the parameter to estimate lies in a smooth manifold.
The chosen Riemannian metric induces a geometry for the parameter manifold, as well as an intrinsic notion of the estimation error measure.
- Score: 49.67011673289242
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a new performance bound for estimation problems where the parameter to estimate lies in a Riemannian manifold (a smooth manifold endowed with a Riemannian metric) and follows a given prior distribution. In this setup, the chosen Riemannian metric induces a geometry for the parameter manifold, as well as an intrinsic notion of the estimation error measure. Performance bounds for such an error measure were previously obtained in the non-Bayesian case (when the unknown parameter is assumed to be deterministic) and are referred to as the \textit{intrinsic} Cram\'er-Rao bound. The presented result then appears either as: \textit{a}) an extension of the intrinsic Cram\'er-Rao bound to the Bayesian estimation framework; or \textit{b}) a generalization of the Van Trees inequality (Bayesian Cram\'er-Rao bound) that accounts for the aforementioned geometric structures. In a second part, we leverage this formalism to study the problem of covariance matrix estimation when the data follow a Gaussian distribution whose covariance matrix is drawn from an inverse Wishart distribution. Performance bounds for this problem are obtained for both the mean squared error (Euclidean metric) and the natural Riemannian distance for Hermitian positive definite matrices (affine invariant metric). Numerical simulations illustrate that assessing the error with the affine invariant metric reveals interesting properties of the maximum a posteriori and minimum mean square error estimators that are not observed when using the Euclidean metric.
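For concreteness, a minimal Python sketch (not the authors' code) of the two error measures compared in the abstract: it draws a covariance from an inverse Wishart prior, forms a sample covariance from Gaussian data, and evaluates both the Euclidean (Frobenius) error and the affine-invariant Riemannian distance. The dimension, sample size, and prior degrees of freedom are arbitrary illustrative choices, the real symmetric case is used for simplicity, and the sample covariance stands in for the MAP and MMSE estimators studied in the paper.

```python
# Illustrative sketch only (not the paper's code). Dimensions, sample size,
# and prior degrees of freedom below are arbitrary assumptions.
import numpy as np
from scipy.stats import invwishart

def spd_inv_sqrt(a):
    """Inverse square root of a symmetric positive definite matrix."""
    vals, vecs = np.linalg.eigh(a)
    return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

def affine_invariant_distance(a, b):
    """Natural Riemannian distance on positive definite matrices:
    d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F."""
    c = spd_inv_sqrt(a) @ b @ spd_inv_sqrt(a)
    vals = np.linalg.eigvalsh((c + c.T) / 2)   # symmetrize for numerical safety
    return np.sqrt(np.sum(np.log(vals) ** 2))

rng = np.random.default_rng(0)
p, n, dof = 4, 50, 10                          # dimension, samples, prior dof
sigma_true = invwishart.rvs(df=dof, scale=np.eye(p), random_state=rng)

# Gaussian data with the drawn covariance, then the sample covariance estimate.
x = rng.multivariate_normal(np.zeros(p), sigma_true, size=n)
sigma_hat = x.T @ x / n

print("Euclidean (Frobenius) error:", np.linalg.norm(sigma_hat - sigma_true, "fro"))
print("Affine-invariant distance  :", affine_invariant_distance(sigma_true, sigma_hat))
```

The affine-invariant distance is $\|\log(\Sigma_1^{-1/2}\Sigma_2\Sigma_1^{-1/2})\|_F$, which is invariant under congruence transformations $\Sigma \mapsto A \Sigma A^H$, unlike the Euclidean error.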
Related papers
- Consistent Estimation of a Class of Distances Between Covariance Matrices [7.291687946822539]
We are interested in the family of distances that can be expressed as sums of traces of functions that are separately applied to each covariance matrix.
A statistical analysis of the behavior of this class of distance estimators has also been conducted.
We present a central limit theorem that establishes the Gaussianity of these estimators and provides closed form expressions for the corresponding means and variances.
arXiv Detail & Related papers (2024-09-18T07:36:25Z)
- Multivariate root-n-consistent smoothing parameter free matching estimators and estimators of inverse density weighted expectations [51.000851088730684]
We develop novel modifications of nearest-neighbor and matching estimators which converge at the parametric $\sqrt{n}$-rate.
We stress that our estimators do not involve nonparametric function estimators and in particular do not rely on sample-size-dependent smoothing parameters.
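As a point of reference only: the paper's contribution is modified matching estimators with parametric rates, which are not reproduced here, but the basic matching mechanism can be sketched with a plain one-nearest-neighbor estimator of the average treatment effect. The data-generating process and all names below are made-up assumptions.

```python
# Baseline 1-NN matching estimator of the average treatment effect (ATE).
# This is NOT the paper's modified estimator; it only illustrates matching.
import numpy as np

def nn_match_ate(x, y, treated):
    """Match each unit to its nearest neighbor in the opposite group on
    covariates x, then average the imputed treatment contrasts."""
    x_t, y_t = x[treated], y[treated]
    x_c, y_c = x[~treated], y[~treated]

    def nearest(query, pool_x, pool_y):
        d = np.linalg.norm(query[:, None, :] - pool_x[None, :, :], axis=-1)
        return pool_y[d.argmin(axis=1)]

    effect_treated = y_t - nearest(x_t, x_c, y_c)   # Y(1) observed, Y(0) imputed
    effect_control = nearest(x_c, x_t, y_t) - y_c   # Y(1) imputed, Y(0) observed
    return np.concatenate([effect_treated, effect_control]).mean()

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=(n, 2))
treated = rng.random(n) < 0.5
y = x @ np.array([1.0, -1.0]) + 2.0 * treated + rng.normal(size=n)  # true ATE = 2
print("1-NN matching ATE estimate:", nn_match_ate(x, y, treated))
```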
arXiv Detail & Related papers (2024-07-11T13:28:34Z)
- A Geometric Unification of Distributionally Robust Covariance Estimators: Shrinking the Spectrum by Inflating the Ambiguity Set [20.166217494056916]
We propose a principled approach to construct covariance estimators without imposing restrictive assumptions.
We show that our robust estimators are efficiently computable and consistent.
Numerical experiments based on synthetic and real data show that our robust estimators are competitive with state-of-the-art estimators.
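The distributionally robust estimators themselves are not reproduced here, but the notion of "shrinking the spectrum" can be illustrated with a generic linear shrinkage toward a scaled identity; the shrinkage intensity below is an arbitrary choice, not the paper's calibration.

```python
# Generic linear shrinkage sketch (not the paper's estimator): the shrunk
# eigenvalues are pulled toward their mean, compressing the spectrum.
import numpy as np

rng = np.random.default_rng(2)
p, n = 20, 30                                   # dimension close to sample size
x = rng.normal(size=(n, p))
s = x.T @ x / n                                 # sample covariance

alpha = 0.3                                     # illustrative shrinkage intensity
target = np.trace(s) / p * np.eye(p)            # scaled identity target
s_shrunk = (1 - alpha) * s + alpha * target

print("sample eigenvalue range:", np.linalg.eigvalsh(s).min(), np.linalg.eigvalsh(s).max())
print("shrunk eigenvalue range:", np.linalg.eigvalsh(s_shrunk).min(), np.linalg.eigvalsh(s_shrunk).max())
```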
arXiv Detail & Related papers (2024-05-30T15:01:18Z)
- Curvature-Independent Last-Iterate Convergence for Games on Riemannian Manifolds [77.4346324549323]
We show that a step size agnostic to the curvature of the manifold achieves a curvature-independent and linear last-iterate convergence rate.
To the best of our knowledge, the possibility of curvature-independent rates and/or last-iterate convergence has not been considered before.
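The paper's setting is Riemannian game dynamics, which is not reproduced here; as a minimal illustration of running a fixed, curvature-agnostic step size on a manifold, the sketch below performs Riemannian gradient descent on the unit sphere (tangent-space projection followed by a normalization retraction). The objective, step size, and iteration count are arbitrary assumptions.

```python
# Fixed-step Riemannian gradient descent on the unit sphere (illustration only).
import numpy as np

def sphere_rgd(grad, x0, step=0.01, iters=2000):
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        g = grad(x)
        g_tan = g - (g @ x) * x          # Riemannian gradient: tangent projection
        x = x - step * g_tan             # step in the tangent direction...
        x = x / np.linalg.norm(x)        # ...followed by retraction to the sphere
    return x

# Example: minimize f(x) = x^T A x on the sphere; the last iterate approaches
# an eigenvector associated with the smallest eigenvalue of A.
rng = np.random.default_rng(3)
a = rng.normal(size=(5, 5))
a = a @ a.T
x_last = sphere_rgd(lambda x: 2 * a @ x, rng.normal(size=5))
print("f at last iterate:", x_last @ a @ x_last,
      "| smallest eigenvalue of A:", np.linalg.eigvalsh(a).min())
```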
arXiv Detail & Related papers (2023-06-29T01:20:44Z)
- Kernel-based off-policy estimation without overlap: Instance optimality beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z)
- Non asymptotic estimation lower bounds for LTI state space models with Cram\'er-Rao and van Trees [1.14219428942199]
We study the estimation problem for linear time-invariant (LTI) state-space models with Gaussian excitation of an unknown covariance.
We provide non-asymptotic lower bounds for the expected estimation error and the mean square estimation risk of the least squares estimator.
Our results extend and improve upon existing lower bounds, yielding lower bounds in expectation on the mean square estimation risk.
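For context, a minimal sketch of the estimator those bounds apply to: ordinary least squares for the transition matrix of x_{t+1} = A x_t + w_t from a single trajectory with Gaussian excitation. The system size, stability margin, and noise level below are illustrative assumptions; the paper's lower bounds are not computed here.

```python
# Least squares estimation of the state transition matrix A from one trajectory.
import numpy as np

rng = np.random.default_rng(4)
d, t_len = 3, 2000
a_true = 0.9 * np.linalg.qr(rng.normal(size=(d, d)))[0]       # stable transition matrix
x = np.zeros((t_len + 1, d))
for t in range(t_len):
    x[t + 1] = a_true @ x[t] + rng.normal(scale=0.5, size=d)  # Gaussian excitation

# A_hat = argmin_A sum_t || x_{t+1} - A x_t ||^2, solved via lstsq on stacked states.
x_past, x_next = x[:-1], x[1:]
a_hat = np.linalg.lstsq(x_past, x_next, rcond=None)[0].T
print("estimation error ||A_hat - A||_F:", np.linalg.norm(a_hat - a_true, "fro"))
```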
arXiv Detail & Related papers (2021-09-17T15:00:25Z)
- Spectral clustering under degree heterogeneity: a case for the random walk Laplacian [83.79286663107845]
This paper shows that graph spectral embedding using the random walk Laplacian produces vector representations which are completely corrected for node degree.
In the special case of a degree-corrected block model, the embedding concentrates about K distinct points, representing communities.
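A minimal sketch of the embedding referred to above, assuming a toy two-block graph: eigenvectors of the random walk Laplacian L_rw = I - D^{-1}A with the smallest eigenvalues are used as node coordinates. The graph model, sizes, and number of dimensions are illustrative choices, not the paper's experiments.

```python
# Spectral embedding with the random walk Laplacian on a toy two-block graph.
import numpy as np

def random_walk_spectral_embedding(adj, k):
    """Node coordinates from the k eigenvectors of L_rw = I - D^{-1} A
    with the smallest eigenvalues."""
    deg = adj.sum(axis=1)
    deg[deg == 0] = 1.0                        # guard against isolated nodes
    l_rw = np.eye(len(adj)) - adj / deg[:, None]
    vals, vecs = np.linalg.eig(l_rw)           # L_rw is similar to a symmetric matrix,
    order = np.argsort(vals.real)              # so its eigenvalues are real
    return vecs[:, order[:k]].real

rng = np.random.default_rng(5)
n, half, p_in, p_out = 40, 20, 0.5, 0.05
block = np.arange(n) < half
prob = np.where(block[:, None] == block[None, :], p_in, p_out)
adj = (rng.random((n, n)) < prob).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T                              # symmetric adjacency, no self-loops
emb = random_walk_spectral_embedding(adj, k=2)
print("embedding shape:", emb.shape)           # one 2-D coordinate per node
```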
arXiv Detail & Related papers (2021-05-03T16:36:27Z)
- Semiparametric Nonlinear Bipartite Graph Representation Learning with Provable Guarantees [106.91654068632882]
We consider the bipartite graph and formalize its representation learning problem as a statistical estimation problem of parameters in a semiparametric exponential family distribution.
We show that the proposed objective is strongly convex in a neighborhood around the ground truth, so that a gradient descent-based method achieves linear convergence rate.
Our estimator is robust to any model misspecification within the exponential family, which is validated in extensive experiments.
arXiv Detail & Related papers (2020-03-02T16:40:36Z)
- Generalized Bayesian Cram\'{e}r-Rao Inequality via Information Geometry of Relative $\alpha$-Entropy [17.746238062801293]
Relative $\alpha$-entropy is the R\'enyi analog of relative entropy.
Recent information geometric investigations on this quantity have enabled the generalization of the Cram\'er-Rao inequality.
We show that in the limiting case when the entropy order approaches unity, this framework reduces to the conventional Bayesian Cram\'er-Rao inequality.
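For reference, the conventional scalar Bayesian Cram\'er-Rao (Van Trees) inequality recovered in that limit, stated in standard textbook notation rather than in the paper's information-geometric one:

```latex
% Scalar Van Trees inequality: prior \pi on \theta, likelihood p(x|\theta).
\mathbb{E}\!\left[(\hat{\theta}(X)-\theta)^2\right]
\;\geq\;
\frac{1}{\mathbb{E}_{\theta}\!\left[I(\theta)\right] + I(\pi)},
\qquad
I(\theta) = \mathbb{E}_{X\mid\theta}\!\left[\Big(\tfrac{\partial}{\partial\theta}\log p(X\mid\theta)\Big)^{2}\right],
\quad
I(\pi) = \mathbb{E}_{\theta}\!\left[\Big(\tfrac{\mathrm{d}}{\mathrm{d}\theta}\log \pi(\theta)\Big)^{2}\right].
```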
arXiv Detail & Related papers (2020-02-11T23:38:01Z)