Distribution of lowest eigenvalue in $k$-body bosonic random matrix ensembles
- URL: http://arxiv.org/abs/2405.00190v3
- Date: Tue, 29 Oct 2024 17:52:46 GMT
- Title: Distribution of lowest eigenvalue in $k$-body bosonic random matrix ensembles
- Authors: N. D. Chavda, Priyanka Rao, V. K. B. Kota, Manan Vyas
- Abstract summary: We numerically study the distribution of the lowest eigenvalue of finite many-boson systems with $k$-body interactions.
The first four moments of the distribution of lowest eigenvalues have been analyzed as a function of the $q$ parameter.
- Abstract: We numerically study the distribution of the lowest eigenvalue of finite many-boson systems with $k$-body interactions modeled by the Bosonic Embedded Gaussian Orthogonal [BEGOE($k$)] and Unitary [BEGUE($k$)] random matrix Ensembles. Following the recently published result that the $q$-normal describes the smooth form of the eigenvalue density of the $k$-body embedded ensembles, the first four moments of the distribution of lowest eigenvalues have been analyzed as a function of the $q$ parameter, with $q \sim 1$ for $k = 1$ and $q = 0$ for $k = m$; $m$ being the number of bosons. Analytics are difficult as we are dealing with highly correlated variables; however, we provide ansatzes for the centroids and variances of these distributions, which match the numerical results very well. Our results demonstrate, for the first time numerically, that the distribution of the lowest eigenvalue exhibits a smooth transition from Gaussian-like (for $q$ close to 1) to modified Gumbel-like (for intermediate values of $q$) to the well-known Tracy-Widom distribution (for $q = 0$). In addition, we have studied the distribution of the normalized spacing between the lowest and next-lowest eigenvalues and find that, with decreasing $q$, it transitions from Wigner's surmise (for $k = 1$) to Poisson (for intermediate $k$ values with $k \le m/2$) and back to Wigner's surmise (from $k = m/2$ to $k = m$). Thus, the spacings at the spectrum edge behave differently from the spacings inside the spectrum bulk.
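The kind of numerical experiment the abstract describes can be illustrated with a minimal sketch. This uses the dense GOE, which corresponds to the $k = m$ ($q = 0$) limit of the embedded ensemble, not the BEGOE($k$)/BEGUE($k$) construction itself; the function names and ensemble size are our own choices for illustration. It collects the lowest eigenvalue over many realizations, computes the first four moments of its distribution, and also records the mean-normalized spacing between the two lowest eigenvalues.

```python
import numpy as np

def sample_goe(n, rng):
    """Draw one n x n GOE matrix: A = (G + G^T) / 2 with G standard normal."""
    g = rng.standard_normal((n, n))
    return (g + g.T) / 2.0

def edge_statistics(n=50, trials=1000, seed=0):
    """Collect the lowest eigenvalue and the spacing between the two
    lowest eigenvalues over many GOE realizations.  Returns the first
    four moments of the lowest eigenvalue (mean, variance, skewness,
    excess kurtosis) and the mean-normalized lowest spacings."""
    rng = np.random.default_rng(seed)
    lowest = np.empty(trials)
    spacing = np.empty(trials)
    for t in range(trials):
        ev = np.linalg.eigvalsh(sample_goe(n, rng))  # ascending order
        lowest[t] = ev[0]
        spacing[t] = ev[1] - ev[0]
    mean, var = lowest.mean(), lowest.var()
    z = (lowest - mean) / np.sqrt(var)
    moments = (mean, var, np.mean(z**3), np.mean(z**4) - 3.0)
    return moments, spacing / spacing.mean()
```

For this normalization the semicircle edge sits near $-\sqrt{2n}$, so for $n = 50$ the sampled lowest-eigenvalue distribution is centered near $-10$ and its fluctuations follow Tracy-Widom statistics; the embedded-ensemble study replaces the GOE draw with a $k$-body Hamiltonian acting on the $m$-boson space.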
Related papers
- Dimension-free Private Mean Estimation for Anisotropic Distributions [55.86374912608193]
Previous private estimators on distributions over $\mathbb{R}^d$ suffer from a curse of dimensionality.
We present an algorithm whose sample complexity has improved dependence on dimension.
arXiv Detail & Related papers (2024-11-01T17:59:53Z) - Sum-of-squares lower bounds for Non-Gaussian Component Analysis [33.80749804695003]
Non-Gaussian Component Analysis (NGCA) is the statistical task of finding a non-Gaussian direction in a high-dimensional dataset.
Here we study the complexity of NGCA in the Sum-of-Squares framework.
arXiv Detail & Related papers (2024-10-28T18:19:13Z) - Minimax Optimality of Score-based Diffusion Models: Beyond the Density Lower Bound Assumptions [11.222970035173372]
A kernel-based score estimator achieves an optimal mean square error of $\widetilde{O}\left(n^{-1} t^{-\frac{d+2}{2}}\left(t^{\frac{d}{2}} \vee 1\right)\right)$.
We show that a kernel-based score estimator yields an $\widetilde{O}\left(n^{-1/2} t^{-\frac{d}{4}}\right)$ upper bound for the total variation error of the distribution of the sample generated by the diffusion model under a mere sub-Gaussian assumption.
arXiv Detail & Related papers (2024-02-23T20:51:31Z) - Identification of Mixtures of Discrete Product Distributions in Near-Optimal Sample and Time Complexity [6.812247730094931]
We show, for any $n \geq 2k-1$, how to achieve sample complexity and run-time complexity $(1/\zeta)^{O(k)}$.
We also extend the known lower bound of $e^{\Omega(k)}$ to match our upper bound across a broad range of $\zeta$.
arXiv Detail & Related papers (2023-09-25T09:50:15Z) - $L^1$ Estimation: On the Optimality of Linear Estimators [64.76492306585168]
This work shows that the only prior distribution on $X$ that induces linearity in the conditional median is Gaussian.
In particular, it is demonstrated that if the conditional distribution $P_{X \mid Y=y}$ is symmetric for all $y$, then $X$ must follow a Gaussian distribution.
arXiv Detail & Related papers (2023-09-17T01:45:13Z) - Generalized Regret Analysis of Thompson Sampling using Fractional Posteriors [12.43000662545423]
Thompson sampling (TS) is one of the most popular and earliest algorithms to solve multi-armed bandit problems.
We consider a variant of TS, named $\alpha$-TS, where we use a fractional or $\alpha$-posterior instead of the standard posterior distribution.
arXiv Detail & Related papers (2023-09-12T16:15:33Z) - Private Covariance Approximation and Eigenvalue-Gap Bounds for Complex Gaussian Perturbations [28.431572772564518]
We show that the Frobenius norm of the difference between the matrix output by this mechanism and the best rank-$k$ approximation to $M$ is bounded by roughly $\widetilde{O}(\sqrt{kd})$.
This improves on previous work that requires that the gap between every pair of top-$k$ eigenvalues of $M$ is at least $\sqrt{d}$ for a similar bound.
arXiv Detail & Related papers (2023-06-29T03:18:53Z) - Statistical Learning under Heterogeneous Distribution Shift [71.8393170225794]
The ground-truth predictor is additive: $\mathbb{E}[\mathbf{z} \mid \mathbf{x}, \mathbf{y}] = f_\star(\mathbf{x}) + g_\star(\mathbf{y})$.
arXiv Detail & Related papers (2023-02-27T16:34:21Z) - The Sample Complexity of Robust Covariance Testing [56.98280399449707]
We are given i.i.d. samples from a distribution of the form $Z = (1-\epsilon) X + \epsilon B$, where $X$ is a zero-mean Gaussian $\mathcal{N}(0, \Sigma)$ with unknown covariance $\Sigma$.
In the absence of contamination, prior work gave a simple tester for this hypothesis testing task that uses $O(d)$ samples.
We prove a sample complexity lower bound of $\Omega(d^2)$ for $\epsilon$ an arbitrarily small constant and $\gamma$ …
arXiv Detail & Related papers (2020-12-31T18:24:41Z) - Sparse sketches with small inversion bias [79.77110958547695]
Inversion bias arises when averaging estimates of quantities that depend on the inverse covariance.
We develop a framework for analyzing inversion bias, based on our proposed concept of an $(\epsilon, \delta)$-unbiased estimator for random matrices.
We show that when the sketching matrix $S$ is dense and has i.i.d. sub-Gaussian entries, the estimator is $(\epsilon, \delta)$-unbiased for $(A^\top A)^{-1}$ with a sketch of size $m = O(d + \sqrt{d}/\dots$
arXiv Detail & Related papers (2020-11-21T01:33:15Z) - Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness [151.67113334248464]
We show that extending the smoothing technique to defend against other attack models can be challenging.
We present experimental results on CIFAR to validate our theory.
arXiv Detail & Related papers (2020-02-08T22:02:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided (including all listed papers) and is not responsible for any consequences arising from its use.