Distribution of lowest eigenvalue in $k$-body bosonic random matrix ensembles
- URL: http://arxiv.org/abs/2405.00190v4
- Date: Wed, 30 Jul 2025 20:00:21 GMT
- Authors: N. D. Chavda, Priyanka Rao, V. K. B. Kota, Manan Vyas
- Abstract summary: We study the distribution of the lowest eigenvalue of finite many-boson systems with $m$ bosons and $k$-body interactions, and analyze its transitions as a function of the parameter $q$ defining the $q$-normal distribution of the eigenvalue densities.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present numerical investigations demonstrating that the distribution of the lowest eigenvalue of finite many-boson systems ($m$ bosons) with $k$-body interactions, modeled by the Bosonic Embedded Gaussian Orthogonal [BEGOE($k$)] and Unitary [BEGUE($k$)] random matrix ensembles, exhibits a smooth transition from a Gaussian-like form (for $k = 1$) to a modified Gumbel-like form (for intermediate values of $k$) to the well-known Tracy-Widom distribution (for $k = m$). We also provide ansatzes for the centroids and variances of the lowest-eigenvalue distributions. In addition, we show that the distribution of the normalized spacing between the lowest and second-lowest eigenvalues exhibits a transition from Wigner's surmise (for $k = 1$) to Poisson (for intermediate $k$ values with $k \le m/2$) and back to Wigner's surmise (from $k = m/2$ to $k = m$). We analyze these transitions as a function of the parameter $q$ defining the $q$-normal distribution for the eigenvalue densities.
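The $k = m$ limiting case can be checked with a minimal numerical sketch (an illustration only, not the authors' embedded-ensemble construction): for $k = m$ the BEGOE($k$) Hamiltonian reduces to a full GOE matrix, whose lowest eigenvalue sits near the semicircle edge $-2\sqrt{n}$ with Tracy-Widom fluctuations.

```python
import numpy as np

def lowest_eigenvalue_samples(n=100, trials=200, seed=0):
    """Sample the lowest eigenvalue of n x n GOE matrices.

    For k = m the embedded ensemble reduces to a full GOE, whose
    scaled lowest eigenvalue follows the Tracy-Widom distribution.
    """
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(trials):
        a = rng.standard_normal((n, n))
        h = (a + a.T) / np.sqrt(2)            # GOE: real symmetric Gaussian
        samples.append(np.linalg.eigvalsh(h)[0])  # smallest eigenvalue
    return np.array(samples)

samples = lowest_eigenvalue_samples()
# With this normalization the spectrum edge sits near -2*sqrt(n).
print(samples.mean())
```

Histogramming `samples` (after centering and scaling) gives an empirical approximation to the Tracy-Widom form; the intermediate-$k$ regimes require the actual embedded-ensemble construction.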
Related papers
- Anisotropic local law for non-separable sample covariance matrices [10.181748307494608]
We establish local laws for sample covariance matrices $K = N^{-1}\sum_{i=1}^{N} g_i g_i^{*}$, where the random vectors $g_1, \ldots, g_N \in \mathbb{R}^n$ are independent with common covariance $\Sigma$. We discuss several classes of non-separable examples satisfying our assumptions, including conditionally mean-zero distributions, the random features model $g = \sigma(Xw)$ arising in machine learning, and certain Gaussian measures.
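The sample covariance model can be sketched in a few lines (a toy instance with i.i.d. standard Gaussian columns, standing in for the paper's more general non-separable distributions):

```python
import numpy as np

# Toy instance of the sample covariance model K = (1/N) * sum_i g_i g_i^T,
# here with i.i.d. standard Gaussian columns; the paper allows far more
# general, non-separable distributions with a common covariance Sigma.
rng = np.random.default_rng(1)
n, N = 50, 400
G = rng.standard_normal((n, N))   # columns are the vectors g_i
K = (G @ G.T) / N                 # sample covariance matrix

eigs = np.linalg.eigvalsh(K)
# For Sigma = I and n/N -> c, the global spectrum follows the
# Marchenko-Pastur law on [(1 - sqrt(c))^2, (1 + sqrt(c))^2];
# here c = 50/400 = 0.125, so roughly [0.42, 1.83].
print(eigs.min(), eigs.max())
```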
arXiv Detail & Related papers (2026-02-20T03:28:51Z) - Global law of conjugate kernel random matrices with heavy-tailed weights [1.8416014644193066]
We study the spectral behavior of the conjugate kernel random matrix $YY^{\top}$, where $Y = f(WX)$ arises from a two-layer neural network model.
We show that heavy-tailed weights induce strong correlations between the entries of $Y$, leading to richer and fundamentally different spectral behavior compared to models with light-tailed weights.
arXiv Detail & Related papers (2025-02-25T18:22:58Z) - Dimension-free Private Mean Estimation for Anisotropic Distributions [55.86374912608193]
Previous private estimators for distributions over $\mathbb{R}^d$ suffer from a curse of dimensionality.
We present an algorithm whose sample complexity has improved dependence on dimension.
arXiv Detail & Related papers (2024-11-01T17:59:53Z) - Sum-of-squares lower bounds for Non-Gaussian Component Analysis [33.80749804695003]
Non-Gaussian Component Analysis (NGCA) is the statistical task of finding a non-Gaussian direction in a high-dimensional dataset.
Here we study the complexity of NGCA in the Sum-of-Squares framework.
arXiv Detail & Related papers (2024-10-28T18:19:13Z) - Minimax Optimality of Score-based Diffusion Models: Beyond the Density Lower Bound Assumptions [11.222970035173372]
We show that a kernel-based score estimator achieves an optimal mean square error of $\widetilde{O}\left(n^{-1}\, t^{-\frac{d+2}{2}}\left(t^{\frac{d}{2}} \vee 1\right)\right)$.
This yields an $\widetilde{O}\left(n^{-1/2}\, t^{-\frac{d}{4}}\right)$ upper bound on the total variation error of the distribution of the samples generated by the diffusion model, under a mere sub-Gaussian assumption.
arXiv Detail & Related papers (2024-02-23T20:51:31Z) - Identification of Mixtures of Discrete Product Distributions in
Near-Optimal Sample and Time Complexity [6.812247730094931]
We show, for any $n \geq 2k-1$, how to achieve sample complexity and run-time complexity $(1/\zeta)^{O(k)}$.
We also extend the known lower bound of $e^{\Omega(k)}$ to match our upper bound across a broad range of $\zeta$.
arXiv Detail & Related papers (2023-09-25T09:50:15Z) - $L^1$ Estimation: On the Optimality of Linear Estimators [64.76492306585168]
This work shows that the only prior distribution on $X$ that induces linearity in the conditional median is Gaussian.
In particular, it is demonstrated that if the conditional distribution $P_{X|Y=y}$ is symmetric for all $y$, then $X$ must follow a Gaussian distribution.
arXiv Detail & Related papers (2023-09-17T01:45:13Z) - Generalized Regret Analysis of Thompson Sampling using Fractional
Posteriors [12.43000662545423]
Thompson sampling (TS) is one of the earliest and most popular algorithms for solving multi-armed bandit problems.
We consider a variant of TS, named $\alpha$-TS, where we use a fractional or $\alpha$-posterior instead of the standard posterior distribution.
arXiv Detail & Related papers (2023-09-12T16:15:33Z) - Private Covariance Approximation and Eigenvalue-Gap Bounds for Complex
Gaussian Perturbations [28.431572772564518]
We show that the Frobenius norm of the difference between the matrix output by this mechanism and the best rank-$k$ approximation to $M$ is bounded by roughly $\tilde{O}(\sqrt{kd})$.
This improves on previous work that requires the gap between every pair of top-$k$ eigenvalues of $M$ to be at least $\sqrt{d}$ for a similar bound.
arXiv Detail & Related papers (2023-06-29T03:18:53Z) - Statistical Learning under Heterogeneous Distribution Shift [71.8393170225794]
The ground-truth predictor is additive: $\mathbb{E}[\mathbf{z} \mid \mathbf{x}, \mathbf{y}] = f_{\star}(\mathbf{x}) + g_{\star}(\mathbf{y})$.
arXiv Detail & Related papers (2023-02-27T16:34:21Z) - Bivariate moments of the two-point correlation function for embedded
Gaussian unitary ensemble with $k$-body interactions [0.0]
The two-point correlation function of a random matrix ensemble is the ensemble average of the product of the eigenvalue density evaluated at two energies.
Fluctuation measures such as the number variance and the Dyson-Mehta $\Delta_3$ statistic are defined in terms of the two-point function.
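In symbols (one standard convention, matching the verbal definition above), writing $\rho(E)$ for the eigenvalue density and $\overline{\,\cdot\,}$ for the ensemble average, the two-point function and its fluctuating (connected) part are:

```latex
S^{\rho}(E_1, E_2) = \overline{\rho(E_1)\,\rho(E_2)},
\qquad
S^{\rho,c}(E_1, E_2) = \overline{\rho(E_1)\,\rho(E_2)}
  - \overline{\rho(E_1)}\;\overline{\rho(E_2)}.
```

Fluctuation measures such as the number variance are then obtained by integrating $S^{\rho,c}$ over the relevant energy window of the unfolded spectrum.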
arXiv Detail & Related papers (2022-08-24T05:37:47Z) - Random matrices in service of ML footprint: ternary random features with
no performance loss [55.30329197651178]
We show that the eigenspectrum of $\mathbf{K}$ is independent of the distribution of the i.i.d. entries of $\mathbf{w}$.
We propose a novel random-features technique, called Ternary Random Feature (TRF).
The computation of the proposed random features requires no multiplications and a factor of $b$ fewer bits of storage compared to classical random features.
arXiv Detail & Related papers (2021-10-05T09:33:49Z) - Exact eigenvalue order statistics for the reduced density matrix of a
bipartite system [0.0]
The eigenvalues $\lambda_1(m), \ldots, \lambda_m(m)$ of $\rho_A(m)$ are correlated random variables because their sum equals unity.
We numerically generate histograms of the ordered set of eigenvalues corresponding to ensembles of over $10^5$ random complex pure states of the bipartite system.
arXiv Detail & Related papers (2021-10-03T15:17:12Z) - The Sample Complexity of Robust Covariance Testing [56.98280399449707]
We are given i.i.d. samples from a distribution of the form $Z = (1-\epsilon) X + \epsilon B$, where $X$ is a zero-mean Gaussian $\mathcal{N}(0, \Sigma)$ with unknown covariance $\Sigma$.
In the absence of contamination, prior work gave a simple tester for this hypothesis testing task that uses $O(d)$ samples.
We prove a sample complexity lower bound of $\Omega(d^2)$ for $\epsilon$ an arbitrarily small constant.
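The contamination model is easy to simulate (a hypothetical toy check, with the adversarial part $B$ taken to be a shifted Gaussian), and it shows why a naive covariance estimate fails:

```python
import numpy as np

# Huber-style contamination: Z = (1 - eps) X + eps B means each sample is
# drawn from X with probability 1 - eps and from an arbitrary B otherwise.
rng = np.random.default_rng(2)
d, n, eps = 5, 20000, 0.05
X = rng.standard_normal((n, d))          # clean samples: N(0, I)
B = rng.standard_normal((n, d)) + 10.0   # contaminating part (toy choice)
mask = rng.random(n) < eps               # which samples are contaminated
Z = np.where(mask[:, None], B, X)

# A naive (non-robust) covariance estimate is badly biased by B: even a
# 5% contamination drags the entries far away from the true Sigma = I.
naive_cov = np.cov(Z, rowvar=False)
print(np.abs(naive_cov - np.eye(d)).max())
```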
arXiv Detail & Related papers (2020-12-31T18:24:41Z) - Sparse sketches with small inversion bias [79.77110958547695]
Inversion bias arises when averaging estimates of quantities that depend on the inverse covariance.
We develop a framework for analyzing inversion bias, based on our proposed concept of an $(\epsilon,\delta)$-unbiased estimator for random matrices.
We show that when the sketching matrix $S$ is dense and has i.i.d. sub-Gaussian entries, the estimator is $(\epsilon,\delta)$-unbiased for $(A^{\top} A)^{-1}$ with a sketch of size $m = O(d + \sqrt{d}/\epsilon)$.
arXiv Detail & Related papers (2020-11-21T01:33:15Z) - Learning Entangled Single-Sample Gaussians in the Subset-of-Signals
Model [28.839136703139225]
We study mean estimation for entangled single-sample Gaussians with a common mean but different unknown variances.
We show that the method achieves error $O\left(\frac{\sqrt{n \ln n}}{m}\right)$ with high probability when $m = \Omega(\sqrt{n \ln n})$.
We further prove lower bounds, showing that the error is $\Omega\left(\left(\frac{n}{m^4}\right)^{1/6}\right)$ when $m$ lies in an intermediate range.
arXiv Detail & Related papers (2020-07-10T18:25:38Z) - Curse of Dimensionality on Randomized Smoothing for Certifiable
Robustness [151.67113334248464]
We show that extending the smoothing technique to defend against other attack models can be challenging.
We present experimental results on CIFAR to validate our theory.
arXiv Detail & Related papers (2020-02-08T22:02:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.