Ledoit-Wolf linear shrinkage with unknown mean
- URL: http://arxiv.org/abs/2304.07045v1
- Date: Fri, 14 Apr 2023 10:40:30 GMT
- Title: Ledoit-Wolf linear shrinkage with unknown mean
- Authors: Benoit Oriol and Alexandre Miot
- Abstract summary: This work addresses large dimensional covariance matrix estimation with unknown mean.
The empirical covariance estimator fails when dimension and number of samples are proportional and tend to infinity.
We propose a new estimator and prove its quadratic convergence under the Ledoit and Wolf assumptions.
- Score: 77.34726150561087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work addresses large dimensional covariance matrix estimation with
unknown mean. The empirical covariance estimator fails when dimension and
number of samples are proportional and tend to infinity, settings known as
Kolmogorov asymptotics. When the mean is known, Ledoit and Wolf (2004) proposed
a linear shrinkage estimator and proved its convergence under those
asymptotics. To the best of our knowledge, no formal proof has been proposed
when the mean is unknown. To address this issue, we propose a new estimator and
prove its quadratic convergence under the Ledoit and Wolf assumptions. Finally,
we show empirically that it outperforms other standard estimators.
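For intuition, below is a minimal NumPy sketch of the classical Ledoit and Wolf (2004) linear shrinkage formulas with the sample mean simply plugged in, i.e. the naive unknown-mean variant whose analysis this paper addresses. The paper's proposed estimator adjusts this construction; the exact correction terms are given in the paper and are not reproduced here. All names and the simulated data are illustrative only.

```python
import numpy as np

def lw_shrinkage_cov(X):
    """X: (n, p) sample matrix. Returns a linearly shrunk covariance estimate."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)              # center with the sample mean (unknown-mean case)
    S = Xc.T @ Xc / n                    # empirical covariance

    # Shrinkage target: mu * I with mu = tr(S) / p.
    mu = np.trace(S) / p

    # Squared norms below use the normalized Frobenius norm ||A||^2 = tr(A A^T) / p.
    d2 = np.sum((S - mu * np.eye(p)) ** 2) / p                  # dispersion of S around the target
    b2_bar = sum(np.sum((np.outer(x, x) - S) ** 2) / p for x in Xc) / n ** 2
    b2 = min(b2_bar, d2)                                        # estimated error of S, capped by d2
    a2 = d2 - b2

    # Linear shrinkage: pull S toward mu * I in proportion to its estimated error.
    return (b2 / d2) * mu * np.eye(p) + (a2 / d2) * S

# Toy usage: n and p of the same order (Kolmogorov asymptotics), nonzero unknown mean.
rng = np.random.default_rng(0)
X = rng.normal(loc=3.0, size=(50, 100))
Sigma_hat = lw_shrinkage_cov(X)
```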
Related papers
- Comparison of estimation limits for quantum two-parameter estimation [1.8507567676996612]
We compare the attainability of the Nagaoka Cramér-Rao bound and the Lu-Wang uncertainty relation.
We show that these two limits can provide different information about the physically attainable precision.
arXiv Detail & Related papers (2024-07-17T10:37:08Z)
- Intrinsic Bayesian Cramér-Rao Bound with an Application to Covariance Matrix Estimation [49.67011673289242]
This paper presents a new performance bound for estimation problems where the parameter to estimate lies in a smooth manifold.
It induces a geometry for the parameter manifold, as well as an intrinsic notion of the estimation error measure.
arXiv Detail & Related papers (2023-11-08T15:17:13Z)
- A U-turn on Double Descent: Rethinking Parameter Counting in Statistical Learning [68.76846801719095]
We show when and where double descent appears, and that its location is not inherently tied to the threshold p=n.
This provides a resolution to tensions between double descent and statistical intuition.
arXiv Detail & Related papers (2023-10-29T12:05:39Z)
- Statistical Barriers to Affine-equivariant Estimation [10.077727846124633]
We investigate the quantitative performance of affine-equivariant estimators for robust mean estimation.
We find that classical estimators are either quantitatively sub-optimal or lack any quantitative guarantees.
We construct a new affine-equivariant estimator which nearly matches our lower bound.
arXiv Detail & Related papers (2023-10-16T18:42:00Z)
- Bayesian Metric Learning for Uncertainty Quantification in Image Retrieval [0.7646713951724012]
We propose the first Bayesian encoder for metric learning.
We learn a distribution over the network weights with the Laplace Approximation.
We show that our Laplacian Metric Learner (LAM) estimates well-calibrated uncertainties, reliably detects out-of-distribution examples, and yields state-of-the-art predictive performance.
arXiv Detail & Related papers (2023-02-02T18:59:23Z)
- On Variance Estimation of Random Forests [0.0]
This paper develops an unbiased variance estimator based on incomplete U-statistics.
We show that our estimators enjoy lower bias and more accurate confidence interval coverage without additional computational costs.
arXiv Detail & Related papers (2022-02-18T03:35:47Z)
- Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models (a minimal weighted least squares sketch follows this list).
In this paper, we take a first step towards extending this result to non-linear settings via deep learning with bias constraints.
A second motivation for bias constrained estimation (BCE) is in applications where multiple estimates of the same unknown are averaged for improved performance.
arXiv Detail & Related papers (2021-10-24T10:23:51Z)
- Divergence Frontiers for Generative Models: Sample Complexity, Quantization Level, and Frontier Integral [58.434753643798224]
Divergence frontiers have been proposed as an evaluation framework for generative models.
We establish non-asymptotic bounds on the sample complexity of the plug-in estimator of divergence frontiers.
We also augment the divergence frontier framework by investigating the statistical performance of smoothed distribution estimators.
arXiv Detail & Related papers (2021-06-15T06:26:25Z)
- Suboptimality of Constrained Least Squares and Improvements via Non-Linear Predictors [3.5788754401889014]
We study the problem of predicting as well as the best linear predictor in a bounded Euclidean ball with respect to the squared loss.
We discuss additional distributional assumptions sufficient to guarantee an $O(d/n)$ excess risk rate for the least squares estimator.
arXiv Detail & Related papers (2020-09-19T21:39:46Z)
- Nonparametric Estimation of the Fisher Information and Its Applications [82.00720226775964]
This paper considers the problem of estimation of the Fisher information for location from a random sample of size $n$.
An estimator proposed by Bhattacharya is revisited and improved convergence rates are derived.
A new estimator, termed a clipped estimator, is proposed.
arXiv Detail & Related papers (2020-05-07T17:21:56Z)
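As referenced in the "Learning to Estimate Without Bias" entry above, the Gauss-Markov statement about weighted least squares reduces to a short linear-algebra computation. The following is a minimal NumPy sketch on simulated data; the dimensions, noise model, and variable names are purely illustrative and not taken from that paper.

```python
import numpy as np

# Weighted least squares under y = X @ beta + noise with known, heteroscedastic
# noise variances: weighting by the inverse variances gives the minimum variance
# linear unbiased estimator of beta (Gauss-Markov / Aitken).
rng = np.random.default_rng(1)
n, d = 200, 3
X = rng.normal(size=(n, d))
beta_true = np.array([1.0, -2.0, 0.5])
noise_var = rng.uniform(0.5, 2.0, size=n)                 # per-sample noise variances
y = X @ beta_true + rng.normal(scale=np.sqrt(noise_var))

W = np.diag(1.0 / noise_var)                              # weights = inverse noise variances
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)      # (X^T W X)^{-1} X^T W y
print(beta_wls)                                           # close to beta_true
```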
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.