Statistics of the Spectral Form Factor in the Self-Dual Kicked Ising Model
- URL: http://arxiv.org/abs/2009.03199v2
- Date: Thu, 3 Dec 2020 09:48:47 GMT
- Title: Statistics of the Spectral Form Factor in the Self-Dual Kicked Ising Model
- Authors: Ana Flack, Bruno Bertini, Tomaž Prosen
- Abstract summary: We show that at large enough times the probability distribution agrees exactly with the prediction of Random Matrix Theory.
This behaviour is due to a recently identified additional anti-unitary symmetry of the self-dual kicked Ising model.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We compute the full probability distribution of the spectral form factor in
the self-dual kicked Ising model by providing an exact lower bound for each
moment and verifying numerically that the latter is saturated. We show that at
large enough times the probability distribution agrees exactly with the
prediction of Random Matrix Theory if one identifies the appropriate ensemble
of random matrices. We find that this ensemble is not the circular orthogonal
one - composed of symmetric random unitary matrices and associated with
time-reversal-invariant evolution operators - but is an ensemble of random
matrices on a more restricted symmetric space (depending on the parity of the
number of sites this space is either ${Sp(N)/U(N)}$ or
${O(2N)/{O(N)\!\times\!O(N)}}$). Even though the latter ensembles yield the
same averaged spectral form factor as the circular orthogonal ensemble, they
show substantially enhanced fluctuations. This behaviour is due to a recently
identified additional anti-unitary symmetry of the self-dual kicked Ising
model.
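As a rough illustration of the central quantity, the sketch below estimates the distribution of the spectral form factor $K(t)=|\mathrm{Tr}\,U^t|^2$ by Monte Carlo sampling. This is a minimal sketch, assuming standard CUE/COE sampling via scipy; the paper's restricted ensembles on ${Sp(N)/U(N)}$ and ${O(2N)/{O(N)\!\times\!O(N)}}$ would need dedicated samplers, which are not implemented here.

```python
# Monte Carlo sketch of the spectral form factor (SFF) distribution,
# K(t) = |Tr U^t|^2, for Haar-random unitary ensembles. Illustrative only:
# the paper's restricted ensembles are NOT sampled here.
import numpy as np
from scipy.stats import unitary_group

def sff_samples(dim, t, n_samples, ensemble="CUE", rng_seed=0):
    """Draw SFF values K(t) = |Tr U^t|^2 over random matrices U."""
    rng = np.random.default_rng(rng_seed)
    samples = np.empty(n_samples)
    for i in range(n_samples):
        u = unitary_group.rvs(dim, random_state=rng)   # Haar-random unitary
        if ensemble == "COE":
            u = u.T @ u              # standard construction of a COE matrix
        eig = np.linalg.eigvals(u)
        samples[i] = np.abs(np.sum(eig**t))**2   # |Tr U^t|^2 via eigenvalues
    return samples

cue = sff_samples(dim=64, t=10, n_samples=500, ensemble="CUE")
coe = sff_samples(dim=64, t=10, n_samples=500, ensemble="COE")
# Equal mean SFFs can hide different fluctuations, the paper's key point:
for name, s in [("CUE", cue), ("COE", coe)]:
    print(f"{name}: mean={s.mean():.1f}, var/mean^2={s.var()/s.mean()**2:.2f}")
```

Ensembles with the same mean SFF can still differ in the normalised variance printed above; this is the kind of fluctuation enhancement the paper quantifies exactly for the self-dual kicked Ising model.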
Related papers
- Deviations from random matrix entanglement statistics for kicked quantum chaotic spin-$1/2$ chains [0.0]
It is commonly expected that for quantum chaotic many-body systems the statistical properties approach those of random matrices when increasing the system size.
We demonstrate for various kicked spin-$1/2$ chain models that the average eigenstate entanglement indeed approaches the random matrix result; the finer entanglement statistics, however, deviate from it.
For autonomous systems such deviations are expected, but they are surprising for the more strongly scrambling kicked systems.
arXiv Detail & Related papers (2024-05-13T08:27:07Z)
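For the entanglement comparison just summarized, here is a minimal sketch, assuming Haar-random pure states as a stand-in for eigenstates of the kicked chains (the actual chain dynamics are not simulated): it checks the average entanglement entropy against the leading-order Page value from random matrix theory.

```python
# Average entanglement entropy of Haar-random states vs the Page
# (random-matrix) value S ~ ln(d_A) - d_A/(2*d_B), valid for d_A <= d_B.
import numpy as np

def entanglement_entropy(psi, d_a, d_b):
    """Von Neumann entropy of the reduced state on subsystem A."""
    s = np.linalg.svd(psi.reshape(d_a, d_b), compute_uv=False)
    p = s**2                     # Schmidt coefficients
    p = p[p > 1e-12]
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(1)
d_a, d_b = 16, 16                # e.g. a 4+4 qubit bipartition
entropies = []
for _ in range(200):
    psi = rng.normal(size=d_a*d_b) + 1j*rng.normal(size=d_a*d_b)
    psi /= np.linalg.norm(psi)   # Haar-random pure state
    entropies.append(entanglement_entropy(psi, d_a, d_b))

page = np.log(d_a) - d_a/(2*d_b)   # leading-order Page value
print(f"mean entropy = {np.mean(entropies):.4f}, Page value = {page:.4f}")
```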
- Geometric Neural Diffusion Processes [55.891428654434634]
We extend the framework of diffusion models to incorporate a series of geometric priors in infinite-dimensional modelling.
We show that, under these conditions, the generative functional model admits the same symmetry.
arXiv Detail & Related papers (2023-07-11T16:51:38Z)
- Quantitative deterministic equivalent of sample covariance matrices with a general dependence structure [0.0]
We prove quantitative bounds involving both the dimensions and the spectral parameter, in particular allowing the latter to approach the positive real half-line.
As applications, we obtain a new bound for the convergence in Kolmogorov distance of the empirical spectral distributions of these general models.
arXiv Detail & Related papers (2022-11-23T15:50:31Z)
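As a toy instance of a deterministic equivalent, far simpler than the general dependence structures treated in the paper above, the following sketch compares the empirical spectrum of a white-noise sample covariance matrix with the Marchenko-Pastur density; all sizes are illustrative.

```python
# Empirical spectral distribution of a white-noise sample covariance matrix
# vs the Marchenko-Pastur density, the simplest deterministic equivalent.
import numpy as np

rng = np.random.default_rng(2)
n, p = 2000, 500                      # samples x dimension, ratio c = p/n
c = p / n
X = rng.normal(size=(n, p))
eigs = np.linalg.eigvalsh(X.T @ X / n)

lo, hi = (1 - np.sqrt(c))**2, (1 + np.sqrt(c))**2   # MP support edges
def mp_density(x):
    return np.sqrt(np.maximum((hi - x) * (x - lo), 0)) / (2*np.pi*c*x)

# Compare a histogram of eigenvalues with the MP density on a grid:
hist, edges = np.histogram(eigs, bins=30, range=(lo, hi), density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
max_err = np.max(np.abs(hist - mp_density(mids)))
print(f"support = [{lo:.3f}, {hi:.3f}], max |hist - MP| = {max_err:.3f}")
```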
- Riemannian statistics meets random matrix theory: towards learning from high-dimensional covariance matrices [2.352645870795664]
Hitherto, there seemed to exist no practical method of computing the normalising factors associated with Riemannian Gaussian distributions on spaces of high-dimensional covariance matrices.
This paper shows that the missing method comes from an unexpected new connection with random matrix theory.
Numerical experiments demonstrate how this new approximation can overcome the difficulties that have impeded applications to real-world datasets.
arXiv Detail & Related papers (2022-03-01T03:16:50Z)
- When Random Tensors meet Random Matrices [50.568841545067144]
This paper studies asymmetric order-$d$ spiked tensor models with Gaussian noise.
We show that the analysis of the considered model boils down to the analysis of an equivalent spiked symmetric block-wise random matrix.
arXiv Detail & Related papers (2021-12-23T04:05:01Z)
- Test Set Sizing Via Random Matrix Theory [91.3755431537592]
This paper uses techniques from Random Matrix Theory to find the ideal training-testing data split for a simple linear regression.
It defines "ideal" as satisfying an integrity criterion, i.e. the empirical model error equals the actual measurement noise.
This paper is the first to solve for the training and test set sizes of any model in a way that is truly optimal.
arXiv Detail & Related papers (2021-12-11T13:18:33Z)
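The integrity criterion from the test-set-sizing paper above can be probed numerically. The sketch below is a brute-force illustration under made-up problem sizes and noise level, not the paper's closed-form random-matrix solution.

```python
# Scan train/test splits of a simple linear regression and compare the
# empirical test error with the true noise variance (the integrity idea).
import numpy as np

rng = np.random.default_rng(3)
n, d, sigma2 = 1000, 20, 0.5          # samples, features, noise variance
X = rng.normal(size=(n, d))
w = rng.normal(size=d)
y = X @ w + rng.normal(scale=np.sqrt(sigma2), size=n)

for frac in [0.5, 0.7, 0.9]:
    m = int(frac * n)                  # train on the first m rows
    w_hat, *_ = np.linalg.lstsq(X[:m], y[:m], rcond=None)
    test_mse = np.mean((X[m:] @ w_hat - y[m:])**2)
    print(f"train frac {frac:.1f}: test MSE = {test_mse:.3f} "
          f"(noise variance = {sigma2})")
```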
- Spectral clustering under degree heterogeneity: a case for the random walk Laplacian [83.79286663107845]
This paper shows that graph spectral embedding using the random walk Laplacian produces vector representations which are completely corrected for node degree.
In the special case of a degree-corrected block model, the embedding concentrates about K distinct points, representing communities.
arXiv Detail & Related papers (2021-05-03T16:36:27Z)
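Here is a minimal sketch of the embedding summarized above, using the top eigenvectors of the random-walk matrix $D^{-1}A$ on a hypothetical two-community graph with heterogeneous degrees; dense linear algebra is used for brevity, whereas real applications would use sparse solvers.

```python
# Spectral embedding with the random-walk Laplacian L_rw = I - D^-1 A on a
# toy degree-heterogeneous two-block graph (all parameters are made up).
import numpy as np

rng = np.random.default_rng(4)
n, k = 100, 2
labels = np.repeat([0, 1], n // 2)
p_in, p_out = 0.2, 0.02                        # within/between-block rates
weights = rng.uniform(0.2, 1.0, size=n)        # per-node degree heterogeneity
prob = np.where(labels[:, None] == labels[None, :], p_in, p_out)
prob = prob * weights[:, None] * weights[None, :]
A = (rng.uniform(size=(n, n)) < prob).astype(float)
A = np.triu(A, 1); A = A + A.T                 # symmetric, no self-loops

deg = A.sum(axis=1)
deg[deg == 0] = 1                              # guard against isolated nodes
P = A / deg[:, None]                           # random-walk matrix D^-1 A
evals, evecs = np.linalg.eig(P)
order = np.argsort(-evals.real)
embedding = evecs[:, order[:k]].real           # top-K eigenvectors of D^-1 A
# Rows from the same community should cluster, regardless of node degree:
print(embedding[:3], embedding[-3:], sep="\n")
```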
- Riemannian Gaussian distributions, random matrix ensembles and diffusion kernels [0.0]
We show how to compute marginals of probability density functions on random-matrix-type symmetric spaces.
We also show how the probability density functions are a particular case of diffusion kernels of the Karlin-McGregor type, describing non-intersecting processes in the Weyl chamber of Lie groups.
arXiv Detail & Related papers (2020-11-27T11:41:29Z)
- Graph Gamma Process Generalized Linear Dynamical Systems [60.467040479276704]
We introduce graph gamma process (GGP) linear dynamical systems to model real-valued multivariate time series.
For temporal pattern discovery, the latent representation under the model is used to decompose the time series into a parsimonious set of multivariate sub-sequences.
We use the generated random graph, whose number of nonzero-degree nodes is finite, to define both the sparsity pattern and dimension of the latent state transition matrix.
arXiv Detail & Related papers (2020-07-25T04:16:34Z)
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
- On Random Matrices Arising in Deep Neural Networks. Gaussian Case [1.6244541005112747]
The paper deals with the distribution of singular values of products of random matrices arising in the analysis of deep neural networks.
The problem has been considered in recent work by using the techniques of free probability theory.
arXiv Detail & Related papers (2020-01-17T08:30:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.