Ledoit-Wolf linear shrinkage with unknown mean
- URL: http://arxiv.org/abs/2304.07045v1
- Date: Fri, 14 Apr 2023 10:40:30 GMT
- Title: Ledoit-Wolf linear shrinkage with unknown mean
- Authors: Benoit Oriol and Alexandre Miot
- Abstract summary: This work addresses large dimensional covariance matrix estimation with unknown mean.
The empirical covariance estimator fails when the dimension and the number of samples are proportional and tend to infinity.
We propose a new estimator and prove its convergence under the Ledoit and Wolf assumptions.
- Score: 77.34726150561087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work addresses large dimensional covariance matrix estimation with
unknown mean. The empirical covariance estimator fails when dimension and
number of samples are proportional and tend to infinity, settings known as
Kolmogorov asymptotics. When the mean is known, Ledoit and Wolf (2004) proposed
a linear shrinkage estimator and proved its convergence under those
asymptotics. To the best of our knowledge, no formal proof has been proposed
when the mean is unknown. To address this issue, we propose a new estimator and
prove its quadratic convergence under the Ledoit and Wolf assumptions. Finally,
we show empirically that it outperforms other standard estimators.
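For intuition, here is a minimal NumPy sketch of the linear-shrinkage recipe in the spirit of Ledoit and Wolf (2004), applied to data centered with the empirical mean. This plug-in centering is only the common heuristic for the unknown-mean case; the corrected estimator proposed in the paper and its convergence proof are not reproduced here, and the function name and norm conventions below are illustrative.

```python
import numpy as np

def lw_shrinkage_unknown_mean(X):
    """Linear shrinkage covariance estimate in the spirit of Ledoit-Wolf (2004).

    Illustrative sketch: the known-mean formula is applied to data centered
    with the empirical mean; this is NOT the corrected unknown-mean estimator
    proposed in the paper above.
    """
    n, p = X.shape
    Xc = X - X.mean(axis=0)                       # plug in the empirical mean
    S = Xc.T @ Xc / n                             # sample covariance
    mu = np.trace(S) / p                          # scale of the shrinkage target mu * I
    d2 = np.sum((S - mu * np.eye(p)) ** 2) / p    # dispersion of S around the target
    b2_bar = np.mean([np.sum((np.outer(x, x) - S) ** 2) for x in Xc]) / (n * p)
    b2 = min(b2_bar, d2)                          # estimated estimation error of S
    rho = b2 / d2                                 # shrinkage intensity in [0, 1]
    return rho * mu * np.eye(p) + (1 - rho) * S

# Sanity check against a reference implementation (also mean-centering by default):
# from sklearn.covariance import LedoitWolf
# Sigma_hat = LedoitWolf().fit(X).covariance_
```

The point of the construction is that, in Kolmogorov-type regimes, the shrinkage intensity rho does not vanish, which keeps the estimate well conditioned even when the sample covariance itself becomes singular.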
Related papers
- Comparison of estimation limits for quantum two-parameter estimation [1.8507567676996612]
We compare the attainability of the Nagaoka Cramér-Rao bound and the Lu-Wang uncertainty relation.
We show that these two limits can provide different information about the physically attainable precision.
arXiv Detail & Related papers (2024-07-17T10:37:08Z)
- Multivariate root-n-consistent smoothing parameter free matching estimators and estimators of inverse density weighted expectations [51.000851088730684]
We develop novel modifications of nearest-neighbor and matching estimators which converge at the parametric $\sqrt{n}$-rate.
We stress that our estimators do not involve nonparametric function estimators and, in particular, do not rely on sample-size-dependent smoothing parameters.
arXiv Detail & Related papers (2024-07-11T13:28:34Z)
- A Geometric Unification of Distributionally Robust Covariance Estimators: Shrinking the Spectrum by Inflating the Ambiguity Set [20.166217494056916]
We propose a principled approach to construct covariance estimators without imposing restrictive assumptions.
We show that our robust estimators are efficiently computable and consistent.
Numerical experiments based on synthetic and real data show that our robust estimators are competitive with state-of-the-art estimators.
arXiv Detail & Related papers (2024-05-30T15:01:18Z)
- Existence of unbiased resilient estimators in discrete quantum systems [0.0]
Bhattacharyya bounds offer a more robust estimation framework with respect to prior accuracy.
We show that when the number of constraints exceeds the number of measurement outcomes, an estimator with finite variance typically does not exist.
arXiv Detail & Related papers (2024-02-23T10:12:35Z)
- Intrinsic Bayesian Cramér-Rao Bound with an Application to Covariance Matrix Estimation [49.67011673289242]
This paper presents a new performance bound for estimation problems where the parameter to estimate lies in a smooth manifold.
It induces a geometry for the parameter manifold, as well as an intrinsic notion of the estimation error measure.
arXiv Detail & Related papers (2023-11-08T15:17:13Z)
- A U-turn on Double Descent: Rethinking Parameter Counting in Statistical Learning [68.76846801719095]
We characterize exactly when and where double descent appears, and show that its location is not inherently tied to the interpolation threshold p=n.
This provides a resolution to tensions between double descent and statistical intuition.
arXiv Detail & Related papers (2023-10-29T12:05:39Z)
- Statistical Barriers to Affine-equivariant Estimation [10.077727846124633]
We investigate the quantitative performance of affine-equivariant estimators for robust mean estimation.
We find that classical estimators are either quantitatively sub-optimal or lack any quantitative guarantees.
We construct a new affine-equivariant estimator which nearly matches our lower bound.
arXiv Detail & Related papers (2023-10-16T18:42:00Z)
- An Intrinsic Approach to Scalar-Curvature Estimation for Point Clouds [3.2634122554914]
We introduce an intrinsic estimator for the scalar curvature of a data set presented as a finite metric space.
Our estimator depends only on the metric structure of the data and not on an embedding in $\mathbb{R}^n$.
arXiv Detail & Related papers (2023-08-04T14:29:50Z)
- Bayesian Metric Learning for Uncertainty Quantification in Image Retrieval [0.7646713951724012]
We propose the first Bayesian encoder for metric learning.
We learn a distribution over the network weights with the Laplace Approximation.
We show that our Laplacian Metric Learner (LAM) estimates well-calibrated uncertainties, reliably detects out-of-distribution examples, and yields state-of-the-art predictive performance.
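As a toy illustration of the Laplace approximation mentioned above, the sketch below applies it to a plain logistic-regression model rather than a deep metric-learning encoder; the function name, prior precision, learning rate, and step count are made up for the example and are not the paper's LAM implementation.

```python
import numpy as np

def laplace_posterior(X, y, prior_prec=1.0, lr=0.1, n_steps=2000):
    """Gaussian (Laplace) approximation to the posterior over the weights of a
    logistic-regression model: N(w_map, H^{-1}), where H is the Hessian of the
    negative log posterior at the MAP. Toy sketch only."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_steps):                        # plain gradient descent to the MAP
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p - y) + prior_prec * w       # gradient of the negative log posterior
        w -= lr * grad / n
    p = 1.0 / (1.0 + np.exp(-X @ w))
    H = (X * (p * (1 - p))[:, None]).T @ X + prior_prec * np.eye(d)   # Hessian at the MAP
    return w, np.linalg.inv(H)                      # posterior mean and covariance
```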
arXiv Detail & Related papers (2023-02-02T18:59:23Z)
- On Variance Estimation of Random Forests [0.0]
This paper develops an unbiased variance estimator based on incomplete U-statistics.
We show that our estimators enjoy lower bias and more accurate confidence interval coverage without additional computational costs.
arXiv Detail & Related papers (2022-02-18T03:35:47Z)
- Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum-variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints.
A second motivation for the bias-constrained estimator (BCE) arises in applications where multiple estimates of the same unknown are averaged for improved performance.
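A small numerical illustration of the weighted least squares statement above (heteroscedastic linear model with known noise variances; all numbers are arbitrary and for the sketch only):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 3
X = rng.normal(size=(n, d))
beta = np.array([1.0, -2.0, 0.5])
noise_var = rng.uniform(0.1, 2.0, size=n)             # known, non-constant noise variances
y = X @ beta + rng.normal(size=n) * np.sqrt(noise_var)

W = np.diag(1.0 / noise_var)                           # weights = inverse noise covariance
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # weighted least squares (best linear unbiased here)
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)           # unweighted baseline, still unbiased
# Averaged over repetitions, beta_wls has the smaller variance of the two.
```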
arXiv Detail & Related papers (2021-10-24T10:23:51Z)
- Near-optimal inference in adaptive linear regression [60.08422051718195]
Even simple methods like least squares can exhibit non-normal behavior when data is collected in an adaptive manner.
We propose a family of online debiasing estimators to correct these distributional anomalies in least squares estimation.
We demonstrate the usefulness of our theory via applications to multi-armed bandit, autoregressive time series estimation, and active learning with exploration.
arXiv Detail & Related papers (2021-07-05T21:05:11Z)
- Divergence Frontiers for Generative Models: Sample Complexity, Quantization Level, and Frontier Integral [58.434753643798224]
Divergence frontiers have been proposed as an evaluation framework for generative models.
We establish non-asymptotic bounds on the sample complexity of the plug-in estimator of divergence frontiers.
We also augment the divergence frontier framework by investigating the statistical performance of smoothed distribution estimators.
arXiv Detail & Related papers (2021-06-15T06:26:25Z)
- Suboptimality of Constrained Least Squares and Improvements via Non-Linear Predictors [3.5788754401889014]
We study the problem of predicting as well as the best linear predictor in a bounded Euclidean ball with respect to the squared loss.
We discuss additional distributional assumptions sufficient to guarantee an $O(d/n)$ excess risk rate for the least squares estimator.
arXiv Detail & Related papers (2020-09-19T21:39:46Z)
- Nonparametric Estimation of the Fisher Information and Its Applications [82.00720226775964]
This paper considers the problem of estimation of the Fisher information for location from a random sample of size $n$.
An estimator proposed by Bhattacharya is revisited and improved convergence rates are derived.
A new estimator, termed a clipped estimator, is proposed.
arXiv Detail & Related papers (2020-05-07T17:21:56Z)
- Estimating Gradients for Discrete Random Variables by Sampling without Replacement [93.09326095997336]
We derive an unbiased estimator for expectations over discrete random variables based on sampling without replacement.
We show that our estimator can be derived as the Rao-Blackwellization of three different estimators.
arXiv Detail & Related papers (2020-02-14T14:15:18Z) - Finite sample properties of parametric MMD estimation: robustness to misspecification and dependence [7.011897575776511]
We show that the estimator is robust both to dependence and to the presence of outliers in the dataset.
We provide a theoretical study of the gradient descent algorithm used to compute the estimator.
arXiv Detail & Related papers (2019-12-12T02:28:13Z)
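For the parametric MMD entry above, here is a minimal sketch of the unbiased squared-MMD statistic that minimum-MMD estimators minimize over model parameters; the Gaussian kernel and bandwidth are arbitrary choices for the sketch, not the paper's setup.

```python
import numpy as np

def mmd2_unbiased(X, Y, bandwidth=1.0):
    """Unbiased estimate of the squared MMD between samples X and Y
    under a Gaussian kernel. Illustrative only."""
    def gram(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * bandwidth**2))
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = gram(X, X), gram(Y, Y), gram(X, Y)
    return ((Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
            + (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
            - 2 * Kxy.mean())
```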
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.