Inference on eigenvectors of non-symmetric matrices
- URL: http://arxiv.org/abs/2303.18233v2
- Date: Tue, 4 Apr 2023 12:50:58 GMT
- Title: Inference on eigenvectors of non-symmetric matrices
- Authors: Jerome R. Simons
- Abstract summary: This paper argues that the symmetrisability condition in Tyler (1981) is not necessary to establish inference procedures for eigenvectors.
We establish distribution theory for a Wald test and a t-test for full-vector and individual coefficient hypotheses, respectively.
As an application, we define confidence sets for Bonacich centralities estimated from adjacency matrices induced by directed graphs.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper argues that the symmetrisability condition in Tyler (1981) is not
necessary to establish asymptotic inference procedures for eigenvectors. We
establish distribution theory for a Wald and t-test for full-vector and
individual coefficient hypotheses, respectively. Our test statistics originate
from eigenprojections of non-symmetric matrices. Representing projections as a
mapping from the underlying matrix to its spectral data, we find derivatives
through analytic perturbation theory. These results demonstrate how the
analytic perturbation theory of Sun (1991) is a useful tool in multivariate
statistics and are of independent interest. As an application, we define
confidence sets for Bonacich centralities estimated from adjacency matrices
induced by directed graphs.
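The objects in the abstract can be illustrated concretely: Bonacich (eigenvector) centrality is obtained from the leading eigenvector of a directed graph's adjacency matrix, which is in general non-symmetric. A minimal sketch, using a small hypothetical graph (not an example from the paper) and plain eigendecomposition rather than the paper's inference procedure:

```python
import numpy as np

# Adjacency matrix of a small directed graph (hypothetical example):
# A[i, j] = 1 if there is an edge i -> j. Note A is non-symmetric.
A = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

# Eigendecomposition of the non-symmetric matrix (may return complex pairs).
eigvals, eigvecs = np.linalg.eig(A)

# Bonacich centrality: the eigenvector associated with the eigenvalue of
# largest modulus; for an irreducible non-negative matrix this is the
# real Perron eigenvector. Normalise it to sum to one.
lead = np.argmax(np.abs(eigvals))
centrality = np.abs(np.real(eigvecs[:, lead]))
centrality = centrality / centrality.sum()
print(centrality)
```

The paper's contribution is inference on such eigenvectors (confidence sets around `centrality` when `A` is estimated with noise); the sketch above only shows the point estimate.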
Related papers
- Understanding Matrix Function Normalizations in Covariance Pooling through the Lens of Riemannian Geometry [63.694184882697435]
Global Covariance Pooling (GCP) has been demonstrated to improve the performance of Deep Neural Networks (DNNs) by exploiting second-order statistics of high-level representations.
arXiv Detail & Related papers (2024-07-15T07:11:44Z)
- Entrywise error bounds for low-rank approximations of kernel matrices [55.524284152242096]
We derive entrywise error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition.
A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues.
We validate our theory with an empirical study of a collection of synthetic and real-world datasets.
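The truncated eigen-decomposition studied in this entry can be sketched directly. A minimal illustration (not the paper's construction or its bounds), using a hypothetical RBF kernel matrix and measuring the entrywise approximation error:

```python
import numpy as np

# Hypothetical data: 50 points in R^3.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))

# RBF kernel matrix K[i, j] = exp(-||x_i - x_j||^2 / 2); symmetric PSD.
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 2)

# Symmetric eigendecomposition; eigenvalues returned in ascending order.
w, V = np.linalg.eigh(K)

# Rank-r truncation: keep the r largest eigenvalues and their eigenvectors.
r = 10
K_r = (V[:, -r:] * w[-r:]) @ V[:, -r:].T

# Entrywise (max-norm) error, the quantity the paper bounds.
entrywise_err = np.max(np.abs(K - K_r))
print(entrywise_err)
```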
arXiv Detail & Related papers (2024-05-23T12:26:25Z)
- Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z)
- On confidence intervals for precision matrices and the eigendecomposition of covariance matrices [20.20416580970697]
This paper tackles the challenge of computing confidence bounds on the individual entries of eigenvectors of a covariance matrix of fixed dimension.
We derive a method to bound the entries of the inverse covariance matrix, the so-called precision matrix.
As an application of these results, we demonstrate a new statistical test, which allows us to test for non-zero values of the precision matrix.
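The precision matrix discussed here is simply the inverse of the covariance matrix. A minimal plug-in sketch (illustration only, not the paper's confidence bounds or test), using hypothetical i.i.d. standard normal data for which the true precision matrix is the identity:

```python
import numpy as np

# Hypothetical sample: n = 500 draws of a p = 4 dimensional standard normal.
rng = np.random.default_rng(2)
n, p = 500, 4
X = rng.standard_normal((n, p))

S = np.cov(X, rowvar=False)   # sample covariance matrix (p x p)
Theta = np.linalg.inv(S)      # plug-in estimate of the precision matrix

# Since the true precision matrix is the identity, off-diagonal entries
# of Theta should be close to zero; the paper's bounds quantify how close.
print(np.round(Theta, 2))
```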
arXiv Detail & Related papers (2022-08-25T10:12:53Z)
- Learning Linear Symmetries in Data Using Moment Matching [0.0]
We consider the unsupervised and semi-supervised problems of learning such symmetries in a distribution directly from data.
We show that in the worst case this problem is as difficult as the graph automorphism problem.
We develop and compare theoretically and empirically the effectiveness of different methods of selecting which eigenvectors should have eigenvalue -1 in the symmetry transformation.
arXiv Detail & Related papers (2022-04-04T02:47:37Z)
- When Random Tensors meet Random Matrices [50.568841545067144]
This paper studies asymmetric order-$d$ spiked tensor models with Gaussian noise.
We show that the analysis of the considered model boils down to the analysis of an equivalent spiked symmetric block-wise random matrix.
arXiv Detail & Related papers (2021-12-23T04:05:01Z)
- Minimax Estimation of Linear Functions of Eigenvectors in the Face of Small Eigen-Gaps [95.62172085878132]
Eigenvector perturbation analysis plays a vital role in various statistical data science applications.
We develop a suite of statistical theory that characterizes the perturbation of arbitrary linear functions of an unknown eigenvector.
In order to mitigate a non-negligible bias issue inherent to the natural "plug-in" estimator, we develop de-biased estimators.
arXiv Detail & Related papers (2021-04-07T17:55:10Z)
- Confidence-Optimal Random Embeddings [0.0]
This paper develops Johnson-Lindenstrauss distributions with optimal, data-oblivious, statistical confidence bounds.
The bounds are numerically best possible, for any given data dimension, embedding, and distortion tolerance.
They improve upon prior works in terms of statistical accuracy, as well as exactly determine the no-go regimes for data-oblivious approaches.
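The data-oblivious embeddings in this entry are Johnson-Lindenstrauss random projections. A generic Gaussian sketch (not the optimised distributions of the paper) showing how pairwise distances are approximately preserved, with hypothetical dimensions:

```python
import numpy as np

# Hypothetical setting: n = 30 points in dimension d = 1000,
# embedded down to k = 200 dimensions.
rng = np.random.default_rng(1)
n, d, k = 30, 1000, 200

X = rng.standard_normal((n, d))
P = rng.standard_normal((d, k)) / np.sqrt(k)  # data-oblivious projection
Y = X @ P

# Distortion of one pairwise distance; close to 1 with high probability.
orig = np.linalg.norm(X[0] - X[1])
emb = np.linalg.norm(Y[0] - Y[1])
print(emb / orig)
```

The paper's contribution is to make the confidence bound on this distortion numerically best possible for a given d, k, and tolerance; the sketch only shows the baseline Gaussian construction.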
arXiv Detail & Related papers (2021-04-06T18:00:02Z)
- On Random Matrices Arising in Deep Neural Networks: General I.I.D. Case [0.0]
We study the distribution of singular values of product of random matrices pertinent to the analysis of deep neural networks.
We use a more streamlined version of random matrix theory techniques to generalize the results of [22] to the case where the entries of the synaptic weight matrices are independent identically distributed random variables with zero mean and finite fourth moment.
arXiv Detail & Related papers (2020-11-20T14:39:24Z)
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
- Tackling small eigen-gaps: Fine-grained eigenvector estimation and inference under heteroscedastic noise [28.637772416856194]
Two fundamental challenges arise in eigenvector estimation and inference for a low-rank matrix from noisy observations.
We propose estimation and uncertainty quantification procedures for an unknown eigenvector.
We establish optimal procedures to construct confidence intervals for the unknown eigenvalues.
arXiv Detail & Related papers (2020-01-14T04:26:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.