On confidence intervals for precision matrices and the
eigendecomposition of covariance matrices
- URL: http://arxiv.org/abs/2208.11977v1
- Date: Thu, 25 Aug 2022 10:12:53 GMT
- Title: On confidence intervals for precision matrices and the
eigendecomposition of covariance matrices
- Authors: Teodora Popordanoska, Aleksei Tiulpin, Wacha Bounliphone and Matthew
B. Blaschko
- Abstract summary: This paper tackles the challenge of computing confidence bounds on the individual entries of eigenvectors of a covariance matrix of fixed dimension.
We derive a method to bound the entries of the inverse covariance matrix, the so-called precision matrix.
As an application of these results, we demonstrate a new statistical test, which allows us to test for non-zero values of the precision matrix.
- Score: 20.20416580970697
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The eigendecomposition of a matrix is the central procedure in probabilistic
models based on matrix factorization, for instance principal component analysis
and topic models. Quantifying the uncertainty of such a decomposition based on
a finite sample estimate is essential to reasoning under uncertainty when
employing such models. This paper tackles the challenge of computing confidence
bounds on the individual entries of eigenvectors of a covariance matrix of
fixed dimension. Moreover, we derive a method to bound the entries of the
inverse covariance matrix, the so-called precision matrix. The assumptions
behind our method are minimal: we require only that the covariance matrix exists
and that its empirical estimator converges to the true covariance. We make use of
the theory of U-statistics to bound the $L_2$ perturbation of the empirical
covariance matrix. From this result, we obtain bounds on the eigenvectors using
Weyl's theorem and the eigenvalue-eigenvector identity, and we derive confidence
intervals on the entries of the precision matrix using matrix inversion
perturbation bounds. As an application of these results, we demonstrate a new
statistical test, which allows us to test for non-zero values of the precision
matrix. We compare this test to the well-known Fisher-z test for partial
correlations, and demonstrate the soundness and scalability of the proposed
statistical test, as well as its application to real-world data from medical
and physics domains.
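The two spectral tools named in the abstract are classical results; as a quick reminder (stated here in generic notation as a sketch, not as the paper's own derivation), Weyl's inequality converts an operator-norm bound on the covariance perturbation into eigenvalue bounds, and the eigenvalue-eigenvector identity relates squared eigenvector entries to eigenvalues of the matrix and of its principal minors:

```latex
% Weyl's inequality: eigenvalue perturbation is controlled by the
% spectral-norm error of the empirical covariance \hat{\Sigma}.
\left| \lambda_i(\hat{\Sigma}) - \lambda_i(\Sigma) \right|
  \;\le\; \bigl\| \hat{\Sigma} - \Sigma \bigr\|_2 .

% Eigenvalue-eigenvector identity for a symmetric A \in \mathbb{R}^{p \times p}
% with eigenpairs (\lambda_i(A), v_i), where M_j denotes A with its j-th row
% and column removed, and v_{i,j} is the j-th entry of v_i:
|v_{i,j}|^2 \prod_{k \ne i} \bigl( \lambda_i(A) - \lambda_k(A) \bigr)
  \;=\; \prod_{k=1}^{p-1} \bigl( \lambda_i(A) - \lambda_k(M_j) \bigr) .
```

Combining the two, an $L_2$ bound on $\hat{\Sigma} - \Sigma$ yields intervals for each eigenvalue and, through the identity, for the squared eigenvector entries, which is the route the abstract sketches.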
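For the baseline comparison mentioned at the end of the abstract, the following is a minimal sketch of the classical Fisher-z test for partial correlations, which tests the same null hypothesis of a zero precision-matrix entry. It assumes Gaussian data and uses only standard textbook formulas; it is not code from the paper, and the function name and structure are illustrative.

```python
import numpy as np
from scipy import stats

def fisher_z_partial_correlation_test(X):
    """Classical Fisher-z test for nonzero partial correlations.

    X: (n_samples, p) data matrix.
    Returns a (p, p) matrix of two-sided p-values for H0: rho_ij|rest = 0.
    """
    n, p = X.shape
    precision = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(precision))
    # Partial correlation between i and j given all remaining variables.
    partial_corr = -precision / np.outer(d, d)
    np.fill_diagonal(partial_corr, 1.0)
    # Fisher z-transform; the conditioning set has p - 2 variables.
    z = np.arctanh(np.clip(partial_corr, -0.999999, 0.999999))
    se = 1.0 / np.sqrt(n - (p - 2) - 3)
    pvals = 2.0 * stats.norm.sf(np.abs(z) / se)
    np.fill_diagonal(pvals, 1.0)
    return pvals

# Example usage on synthetic Gaussian data with independent coordinates,
# where no partial correlation should be flagged as significant:
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5))
print(fisher_z_partial_correlation_test(X).round(3))
```

The paper's proposed test instead builds on the derived confidence intervals for precision-matrix entries, which is not reproduced here.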
Related papers
- Entrywise error bounds for low-rank approximations of kernel matrices [55.524284152242096]
We derive entrywise error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition.
A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues.
We validate our theory with an empirical study of a collection of synthetic and real-world datasets.
arXiv Detail & Related papers (2024-05-23T12:26:25Z)
- Quantitative deterministic equivalent of sample covariance matrices with a general dependence structure [0.0]
We prove quantitative bounds involving both the dimensions and the spectral parameter, in particular allowing it to get closer to the real positive semi-line.
As applications, we obtain a new bound for the convergence in Kolmogorov distance of the empirical spectral distributions of these general models.
arXiv Detail & Related papers (2022-11-23T15:50:31Z)
- Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z)
- Riemannian statistics meets random matrix theory: towards learning from high-dimensional covariance matrices [2.352645870795664]
This paper starts from the observation that there appears to be no practical method of computing the normalising factors associated with Riemannian Gaussian distributions on spaces of high-dimensional covariance matrices.
It is shown that this missing method comes from an unexpected new connection with random matrix theory.
Numerical experiments are conducted which demonstrate how this new approximation can unlock the difficulties which have impeded applications to real-world datasets.
arXiv Detail & Related papers (2022-03-01T03:16:50Z)
- Test Set Sizing Via Random Matrix Theory [91.3755431537592]
This paper uses techniques from Random Matrix Theory to find the ideal training-testing data split for a simple linear regression.
It defines "ideal" as satisfying the integrity metric, i.e. the empirical model error is the actual measurement noise.
This paper is the first to solve for the training and test size for any model in a way that is truly optimal.
arXiv Detail & Related papers (2021-12-11T13:18:33Z)
- Adversarially-Trained Nonnegative Matrix Factorization [77.34726150561087]
We consider an adversarially-trained version of the nonnegative matrix factorization.
In our formulation, an attacker adds an arbitrary matrix of bounded norm to the given data matrix.
We design efficient algorithms inspired by adversarial training to optimize for dictionary and coefficient matrices.
arXiv Detail & Related papers (2021-04-10T13:13:17Z)
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
- Asymptotic Analysis of an Ensemble of Randomly Projected Linear Discriminants [94.46276668068327]
In [1], an ensemble of randomly projected linear discriminants is used to classify datasets.
We develop a consistent estimator of the misclassification probability as an alternative to the computationally-costly cross-validation estimator.
We also demonstrate the use of our estimator for tuning the projection dimension on both real and synthetic data.
arXiv Detail & Related papers (2020-04-17T12:47:04Z)
- Covariance Estimation for Matrix-valued Data [9.739753590548796]
We propose a class of distribution-free regularized covariance estimation methods for high-dimensional matrix data.
We formulate a unified framework for estimating bandable covariance, and introduce an efficient algorithm based on rank one unconstrained Kronecker product approximation.
We demonstrate the superior finite-sample performance of our methods using simulations and real applications from a gridded temperature anomalies dataset and an S&P 500 stock data analysis.
arXiv Detail & Related papers (2020-04-11T02:15:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.