Accuracy and precision of the estimation of the number of missing levels
in chaotic spectra using long-range correlations
- URL: http://arxiv.org/abs/2011.01667v1
- Date: Tue, 3 Nov 2020 12:42:07 GMT
- Title: Accuracy and precision of the estimation of the number of missing levels
in chaotic spectra using long-range correlations
- Authors: I. Casal, L. Muñoz and R.A. Molina
- Abstract summary: We study the accuracy and precision for estimating the fraction of observed levels $\varphi$ in quantum chaotic spectra through long-range correlations.
We use Monte Carlo simulations of the spectra from the diagonalization of Gaussian Orthogonal Ensemble matrices with a definite number of levels randomly taken out to fit the formulas.
The precision is generally better for the estimation using the power spectrum of the $\delta_n$ as compared to the estimation using the $\Delta_3$ statistic.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the accuracy and precision for estimating the fraction of observed
levels $\varphi$ in quantum chaotic spectra through long-range correlations. We
focus on the main statistics where theoretical formulas for the fraction of
missing levels have been derived, the $\Delta_3$ of Dyson and Mehta and the
power spectrum of the $\delta_n$ statistic. We use Monte Carlo simulations of
the spectra from the diagonalization of Gaussian Orthogonal Ensemble matrices
with a definite number of levels randomly taken out to fit the formulas and
calculate the distribution of the estimators for different sizes of the
spectrum and values of $\varphi$. A proper averaging of the power spectrum of
the $\delta_n$ statistic must be performed to avoid systematic errors in the
estimation. Once this averaging is done, the estimation of the fraction of
observed levels is quite accurate for both methods, even for the lowest
dimension we consider, $d=100$. However, the precision is generally better for
the estimation using the power spectrum of the $\delta_n$ than for the
estimation using the $\Delta_3$ statistic. This difference is clearly larger
for larger dimensions. Our results show that a careful analysis of the fitted
value in view of the ensemble distribution of the estimations is mandatory for
understanding its actual significance and giving a realistic error interval.
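As an illustration of the procedure the abstract describes, here is a minimal sketch (not the authors' code): it diagonalizes a GOE matrix, unfolds the spectrum with the Wigner semicircle law, randomly drops each level with probability $1-\varphi$, and computes the $\delta_n$ statistic and its power spectrum. The normalization conventions and the rescaling of the observed spectrum by $\varphi$ are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def goe_levels(d):
    """Sorted eigenvalues of a d x d GOE matrix (off-diagonal variance 1/2)."""
    a = rng.standard_normal((d, d))
    return np.sort(np.linalg.eigvalsh((a + a.T) / 2.0))

def unfold(levels):
    """Map levels to unit mean spacing using the semicircle cumulative density."""
    d = len(levels)
    r = np.sqrt(2.0 * d)                  # semicircle radius for this normalization
    x = np.clip(levels / r, -1.0, 1.0)
    return d * (0.5 + (x * np.sqrt(1.0 - x**2) + np.arcsin(x)) / np.pi)

def delta_n(eps):
    """delta_n = eps_{n+1} - eps_1 - n for an unfolded (unit-spacing) spectrum."""
    n = np.arange(1, len(eps))
    return eps[1:] - eps[0] - n

def power_spectrum(delta):
    """P_k = |FFT(delta_n)|^2 / N; for complete chaotic spectra <P_k> ~ 1/k at small k."""
    return np.abs(np.fft.fft(delta)) ** 2 / len(delta)

# Monte Carlo step: drop each level independently with probability 1 - phi,
# then rescale by phi so the observed mean spacing is again 1.
d, phi = 500, 0.8
eps = unfold(goe_levels(d))
observed = eps[rng.random(d) < phi] * phi
P = power_spectrum(delta_n(observed))
```

In the setting of the paper, one would average $P_k$ over many realizations and then fit the theoretical formula for $\langle P_k \rangle$ as a function of $\varphi$; the fit itself is omitted in this sketch.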
Related papers
- Relative-Translation Invariant Wasserstein Distance [82.6068808353647]
We introduce a new family of distances, relative-translation invariant Wasserstein distances ($RW_p$).
We show that $RW_p$ distances are also real distance metrics defined on the quotient set $\mathcal{P}_p(\mathbb{R}^n)/\sim$, invariant to distribution translations.
arXiv Detail & Related papers (2024-09-04T03:41:44Z) - Convergence of Unadjusted Langevin in High Dimensions: Delocalization of Bias [13.642712817536072]
We show that as the dimension $d$ of the problem increases, the number of iterations required to ensure convergence within a desired error increases.
A key technical challenge we address is the lack of a one-step contraction property in the $W_{2,\ell^\infty}$ metric to measure convergence.
arXiv Detail & Related papers (2024-08-20T01:24:54Z) - Optimal score estimation via empirical Bayes smoothing [13.685846094715364]
We study the problem of estimating the score function of an unknown probability distribution $\rho^*$ from $n$ independent and identically distributed observations in $d$ dimensions.
We show that a regularized score estimator based on a Gaussian kernel attains this rate, shown optimal by a matching minimax lower bound.
arXiv Detail & Related papers (2024-02-12T16:17:40Z) - Unbiased Kinetic Langevin Monte Carlo with Inexact Gradients [0.8749675983608172]
We present an unbiased method for posterior means based on kinetic Langevin dynamics.
Our proposed estimator is unbiased, attains finite variance, and satisfies a central limit theorem.
Our results demonstrate that in large-scale applications, the unbiased algorithm we present can be 2-3 orders of magnitude more efficient than the "gold-standard" randomized Hamiltonian Monte Carlo.
arXiv Detail & Related papers (2023-11-08T21:19:52Z) - Optimal policy evaluation using kernel-based temporal difference methods [78.83926562536791]
We use kernel Hilbert spaces for estimating the value function of an infinite-horizon discounted Markov reward process.
We derive a non-asymptotic upper bound on the error with explicit dependence on the eigenvalues of the associated kernel operator.
We prove minimax lower bounds over sub-classes of MRPs.
arXiv Detail & Related papers (2021-09-24T14:48:20Z) - Mean-Square Analysis with An Application to Optimal Dimension Dependence
of Langevin Monte Carlo [60.785586069299356]
This work provides a general framework for the non-asymptotic analysis of sampling error in 2-Wasserstein distance.
Our theoretical analysis is further validated by numerical experiments.
arXiv Detail & Related papers (2021-09-08T18:00:05Z) - Optimal Sub-Gaussian Mean Estimation in $\mathbb{R}$ [5.457150493905064]
We present a novel estimator with sub-Gaussian convergence.
Our estimator does not require prior knowledge of the variance.
Our estimator construction and analysis gives a framework generalizable to other problems.
arXiv Detail & Related papers (2020-11-17T02:47:24Z) - Debiasing Distributed Second Order Optimization with Surrogate Sketching
and Scaled Regularization [101.5159744660701]
In distributed second order optimization, a standard strategy is to average many local estimates, each of which is based on a small sketch or batch of the data.
Here, we introduce a new technique for debiasing the local estimates, which leads to both theoretical and empirical improvements in the convergence rate of distributed second order methods.
arXiv Detail & Related papers (2020-07-02T18:08:14Z) - Faster Wasserstein Distance Estimation with the Sinkhorn Divergence [0.0]
The squared Wasserstein distance is a quantity to compare probability distributions in a non-parametric setting.
In this work, we propose instead to estimate it with the Sinkhorn divergence.
We show that, for smooth densities, this estimator has a comparable sample complexity but allows higher regularization levels.
arXiv Detail & Related papers (2020-06-15T06:58:16Z) - A Random Matrix Analysis of Random Fourier Features: Beyond the Gaussian
Kernel, a Precise Phase Transition, and the Corresponding Double Descent [85.77233010209368]
This article characterizes the exact asymptotics of random Fourier feature (RFF) regression, in the realistic setting where the number of data samples $n$, their dimension $p$, and the number of features $N$ are all large and comparable.
This analysis also provides accurate estimates of training and test regression errors for large $n, p, N$.
arXiv Detail & Related papers (2020-06-09T02:05:40Z) - Spectral density estimation with the Gaussian Integral Transform [91.3755431537592]
The spectral density operator $\hat\rho(\omega) = \delta(\omega - \hat H)$ plays a central role in linear response theory.
We describe a near optimal quantum algorithm providing an approximation to the spectral density.
arXiv Detail & Related papers (2020-04-10T03:14:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.