Statistical physics of principal minors: Cavity approach
- URL: http://arxiv.org/abs/2405.19904v1
- Date: Thu, 30 May 2024 10:09:49 GMT
- Title: Statistical physics of principal minors: Cavity approach
- Authors: A. Ramezanpour, M. A. Rajabpour
- Abstract summary: We compute the sum of powers of principal minors of a matrix.
This is relevant to the study of critical behaviors in quantum fermionic systems.
We show that no (finite-temperature) phase transition is observed in this class of diagonally dominant matrices.
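(A concrete illustration, not taken from the paper: for a $2\times 2$ matrix $A$ with entries $a,b,c,d$, the sum in question is $Z(\beta)=1+a^{\beta}+d^{\beta}+(ad-bc)^{\beta}$, running over all index subsets with the empty subset contributing $1$; for $\beta=1$ it reduces to the standard identity $\sum_S \det(A_S)=\det(I+A)$.)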
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Determinants are useful to represent the state of an interacting system of (effectively) repulsive and independent elements, like fermions in a quantum system and training samples in a learning problem. A computationally challenging problem is to compute the sum of powers of principal minors of a matrix which is relevant to the study of critical behaviors in quantum fermionic systems and finding a subset of maximally informative training data for a learning algorithm. Specifically, principal minors of positive square matrices can be considered as statistical weights of a random point process on the set of the matrix indices. The probability of each subset of the indices is in general proportional to a positive power of the determinant of the associated sub-matrix. We use Gaussian representation of the determinants for symmetric and positive matrices to estimate the partition function (or free energy) and the entropy of principal minors within the Bethe approximation. The results are expected to be asymptotically exact for diagonally dominant matrices with locally tree-like structures. We consider the Laplacian matrix of random regular graphs of degree $K=2,3,4$ and exactly characterize the structure of the relevant minors in a mean-field model of such matrices. No (finite-temperature) phase transition is observed in this class of diagonally dominant matrices by increasing the positive power of the principal minors, which here plays the role of an inverse temperature.
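Below is a minimal brute-force sketch of the object being approximated (illustrative only, not the paper's cavity/Bethe method; the use of networkx, the graph size, and the choice $K=3$ are assumptions made for the example). It enumerates the principal minors of the Laplacian of a small random regular graph and sums their $\beta$-th powers, i.e. the partition function that the cavity approach is designed to estimate for large, locally tree-like matrices.
```python
# Brute-force reference for Z(beta) = sum over index subsets S of det(A[S, S])**beta,
# the partition function whose free energy and entropy the paper estimates within
# the Bethe (cavity) approximation. Exact enumeration costs 2**n, so it is only
# feasible for tiny matrices.
import itertools

import networkx as nx
import numpy as np


def partition_function(A, beta):
    """Sum of the beta-th powers of all principal minors of A (empty subset counts as 1)."""
    n = A.shape[0]
    Z = 1.0  # contribution of the empty index subset
    for r in range(1, n + 1):
        for S in itertools.combinations(range(n), r):
            Z += np.linalg.det(A[np.ix_(S, S)]) ** beta
    return Z


# Laplacian of a random K-regular graph: a diagonally dominant, locally tree-like
# matrix of the type studied in the paper (K = 3 here; n kept small for enumeration).
K, n = 3, 8
G = nx.random_regular_graph(K, n, seed=0)
L = nx.laplacian_matrix(G).toarray().astype(float)

# Sanity check: for beta = 1 the sum over all principal minors equals det(I + L).
assert np.isclose(partition_function(L, 1.0), np.linalg.det(np.eye(n) + L))

for beta in (0.5, 1.0, 2.0):  # beta plays the role of an inverse temperature
    Z = partition_function(L, beta)
    print(f"beta = {beta}: log(Z)/n = {np.log(Z) / n:.4f}")
```
The $\beta=1$ check relies on the identity $\sum_S \det(A_S)=\det(I+A)$; the $2^n$ cost of exact enumeration is precisely why a Bethe approximation is needed for large matrices.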
Related papers
- Resolvent-based quantum phase estimation: Towards estimation of parametrized eigenvalues [0.0]
We propose a novel approach for estimating the eigenvalues of non-normal matrices based on the matrix resolvent formalism.
We construct the first efficient algorithm for estimating the phases of the unit-norm eigenvalues of a given non-unitary matrix.
We then construct an efficient algorithm for estimating the real eigenvalues of a given non-Hermitian matrix.
arXiv Detail & Related papers (2024-10-07T08:51:05Z)
- Understanding Matrix Function Normalizations in Covariance Pooling through the Lens of Riemannian Geometry [63.694184882697435]
Global Covariance Pooling (GCP) has been demonstrated to improve the performance of Deep Neural Networks (DNNs) by exploiting second-order statistics of high-level representations.
arXiv Detail & Related papers (2024-07-15T07:11:44Z)
- Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z)
- On confidence intervals for precision matrices and the eigendecomposition of covariance matrices [20.20416580970697]
This paper tackles the challenge of computing confidence bounds on the individual entries of eigenvectors of a covariance matrix of fixed dimension.
We derive a method to bound the entries of the inverse covariance matrix, the so-called precision matrix.
As an application of these results, we demonstrate a new statistical test, which allows us to test for non-zero values of the precision matrix.
arXiv Detail & Related papers (2022-08-25T10:12:53Z)
- An Equivalence Principle for the Spectrum of Random Inner-Product Kernel Matrices with Polynomial Scalings [21.727073594338297]
This study is motivated by applications in machine learning and statistics.
We establish the weak limit of the empirical spectral distribution of these random matrices in a polynomial scaling regime.
The limiting distribution can be characterized as the free additive convolution between a Marchenko-Pastur law and a semicircle law.
arXiv Detail & Related papers (2022-05-12T18:50:21Z)
- Riemannian statistics meets random matrix theory: towards learning from high-dimensional covariance matrices [2.352645870795664]
There seems to exist no practical method of computing the normalising factors associated with Riemannian Gaussian distributions on spaces of high-dimensional covariance matrices.
This paper shows that the missing method comes from an unexpected new connection with random matrix theory.
Numerical experiments are conducted which demonstrate how the resulting approximation can overcome the difficulties that have impeded applications to real-world datasets.
arXiv Detail & Related papers (2022-03-01T03:16:50Z)
- Level compressibility of certain random unitary matrices [0.0]
The value of the spectral form factor at the origin, called the level compressibility, is an important characteristic of random spectra.
The paper is devoted to analytical calculations of this quantity for different random unitary matrices describing models with intermediate spectral statistics.
arXiv Detail & Related papers (2022-02-22T21:31:24Z)
- Robust 1-bit Compressive Sensing with Partial Gaussian Circulant Matrices and Generative Priors [54.936314353063494]
We provide recovery guarantees for a correlation-based optimization algorithm for robust 1-bit compressive sensing.
We make use of a practical iterative algorithm, and perform numerical experiments on image datasets to corroborate our results.
arXiv Detail & Related papers (2021-08-08T05:28:06Z)
- Minimax Estimation of Linear Functions of Eigenvectors in the Face of Small Eigen-Gaps [95.62172085878132]
Eigenvector perturbation analysis plays a vital role in various statistical data science applications.
We develop a suite of statistical theory that characterizes the perturbation of arbitrary linear functions of an unknown eigenvector.
In order to mitigate a non-negligible bias issue inherent to the natural "plug-in" estimator, we develop de-biased estimators.
arXiv Detail & Related papers (2021-04-07T17:55:10Z)
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
- Semiparametric Nonlinear Bipartite Graph Representation Learning with Provable Guarantees [106.91654068632882]
We consider the bipartite graph and formalize its representation learning problem as a statistical estimation problem of parameters in a semiparametric exponential family distribution.
We show that the proposed objective is strongly convex in a neighborhood around the ground truth, so that a gradient-descent-based method achieves a linear convergence rate.
Our estimator is robust to any model misspecification within the exponential family, which is validated in extensive experiments.
arXiv Detail & Related papers (2020-03-02T16:40:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.