Krylov complexity of many-body localization: Operator localization in
Krylov basis
- URL: http://arxiv.org/abs/2112.04722v2
- Date: Thu, 16 Jun 2022 16:21:17 GMT
- Title: Krylov complexity of many-body localization: Operator localization in
Krylov basis
- Authors: Fabian Ballar Trigueros, Cheng-Ju Lin
- Abstract summary: We study the operator growth problem and its complexity in the many-body localization (MBL) system from the Lanczos perspective.
Using the Krylov basis, the operator growth problem can be viewed as a single-particle hopping problem on a semi-infinite chain.
Our numerical results suggest that the emergent single-particle hopping problem in the MBL system is localized when initialized on the first site.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the operator growth problem and its complexity in the many-body
localization (MBL) system from the Lanczos algorithm perspective. Using the
Krylov basis, the operator growth problem can be viewed as a single-particle
hopping problem on a semi-infinite chain with the hopping amplitudes given by
the Lanczos coefficients. We find that, in MBL systems, the Lanczos
coefficients scale as $\sim n/\ln(n)$ asymptotically, the same as in ergodic
systems, but with an additional even-odd alternation and an effective
randomness. We use a simple linear extrapolation scheme in an attempt to
extrapolate the Lanczos coefficients to the thermodynamic limit. With the
original and extrapolated Lanczos coefficients, we study the properties of the
emergent single-particle hopping problem via its spectral function, integrals
of motion, Krylov complexity, wavefunction profile and return probability. Our
numerical results of the above quantities suggest that the emergent
single-particle hopping problem in the MBL system is localized when initialized
on the first site. We also study the operator growth in the MBL
phenomenological model, whose Lanczos coefficients also show an even-odd
alternation but approach constants asymptotically. The Krylov complexity grows
linearly in time in this case.
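As a concrete illustration of the construction described in the abstract, the sketch below (not the authors' code) runs the Lanczos algorithm on the Liouvillian of a small random-field Heisenberg chain, a common MBL testbed, with the infinite-temperature operator inner product. The resulting Lanczos coefficients are then used as hopping amplitudes of a tridiagonal "chain" matrix, from which the Krylov wavefunction, Krylov complexity and return probability follow. The model, system size, disorder strength, seed operator and truncation depth are illustrative assumptions, not parameters taken from the paper.

```python
# Hedged sketch, not the authors' code: model, parameters and seed operator
# are illustrative assumptions.
import numpy as np
from numpy.linalg import eigh

rng = np.random.default_rng(0)

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site_op(op, i, L):
    """Embed a single-site operator at site i of an L-site chain."""
    out = np.array([[1.0 + 0j]])
    for j in range(L):
        out = np.kron(out, op if j == i else I2)
    return out

# Random-field Heisenberg chain (a standard MBL testbed; choice assumed here)
L, W = 6, 8.0                                   # sites, disorder strength
h = rng.uniform(-W, W, size=L)
H = np.zeros((2**L, 2**L), dtype=complex)
for i in range(L - 1):
    for s in (sx, sy, sz):
        H += 0.25 * site_op(s, i, L) @ site_op(s, i + 1, L)
for i in range(L):
    H += 0.5 * h[i] * site_op(sz, i, L)

def inner(A, B):
    """Infinite-temperature operator inner product (A|B) = Tr(A^dag B)/dim."""
    return np.trace(A.conj().T @ B) / A.shape[0]

def lanczos_coefficients(H, O, nmax):
    """Lanczos on the Liouvillian ad_H = [H, .], with full reorthogonalization."""
    Os = [O / np.sqrt(inner(O, O).real)]
    bs = []
    for _ in range(nmax):
        A = H @ Os[-1] - Os[-1] @ H             # Liouvillian applied to last Krylov operator
        for Q in Os:                            # reorthogonalize against all previous ones
            A = A - inner(Q, A) * Q
        b = np.sqrt(inner(A, A).real)
        if b < 1e-10:                           # Krylov space exhausted
            break
        bs.append(b)
        Os.append(A / b)
    return np.array(bs)

O0 = site_op(sz, L // 2, L)                     # seed operator (assumed choice)
b = lanczos_coefficients(H, O0, nmax=40)

# The Liouvillian is tridiagonal in the Krylov basis: a single-particle hopping
# problem on a semi-infinite chain with hopping amplitudes b_n.  The chain is
# truncated at nmax sites, so long-time results feel the cutoff.
T = np.diag(b, 1) + np.diag(b, -1)
w, V = eigh(T)

def krylov_profile(t):
    """|phi_n(t)|^2: probability on Krylov site n after evolving from site 0."""
    phi = V @ (np.exp(1j * w * t) * V[0, :])    # column 0 of e^{i T t}
    return np.abs(phi) ** 2

for t in (0.0, 1.0, 5.0, 20.0):
    p = krylov_profile(t)
    K = np.sum(np.arange(len(p)) * p)           # Krylov complexity K(t)
    print(f"t={t:5.1f}  K(t)={K:7.3f}  return probability={p[0]:.3f}")
```

In this picture, localization of the hopping problem on the first Krylov site shows up as a return probability that stays of order one and a Krylov complexity that saturates rather than growing linearly.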
Related papers
- Krylov complexity of fermion chain in double-scaled SYK and power spectrum perspective [0.0]
We investigate the Krylov complexity of the fermion-chain operator, which consists of multiple Majorana fermions, in the double-scaled SYK (DSSYK) model at finite temperature.
Using the fact that Krylov complexity is computable from two-point functions, the analysis is performed in a limit where the two-point function becomes simple (a moment-method sketch illustrating this relation appears after this list).
We confirm the exponential growth of Krylov complexity in the very low temperature regime.
arXiv Detail & Related papers (2024-07-18T08:47:05Z) - KPZ scaling from the Krylov space [83.88591755871734]
Recently, superdiffusion exhibiting Kardar-Parisi-Zhang (KPZ) scaling in late-time correlators and autocorrelators has been reported.
Inspired by these results, we explore the KPZ scaling in correlation functions using their realization in the Krylov operator basis.
arXiv Detail & Related papers (2024-06-04T20:57:59Z) - Operator dynamics in Lindbladian SYK: a Krylov complexity perspective [0.0]
We analytically establish the linear growth of the two sets of Lanczos coefficients for generic jump operators.
We find that the Krylov complexity saturates inversely with the dissipation strength, while the dissipative timescale grows logarithmically.
arXiv Detail & Related papers (2023-11-01T18:00:06Z) - The operator growth hypothesis in open quantum systems [0.0]
The operator growth hypothesis (OGH) is a conjecture about the behaviour of operators under the repeated action of a Liouvillian.
Here we investigate the generalisation of OGH to open quantum systems, where the Liouvillian is replaced by a Lindbladian.
arXiv Detail & Related papers (2023-10-23T21:20:19Z) - Operator growth and Krylov Complexity in Bose-Hubbard Model [0.25602836891933073]
We study the Krylov complexity of a one-dimensional bosonic system, the celebrated Bose-Hubbard model.
We use the Lanczos algorithm to find the Lanczos coefficients and the Krylov basis.
Our results capture the chaotic and integrable nature of the system.
arXiv Detail & Related papers (2023-06-08T20:24:03Z) - Krylov Complexity in Calabi-Yau Quantum Mechanics [0.0]
We study Krylov complexity in quantum mechanical systems derived from some well-known local toric Calabi-Yau geometries.
We find that for the Calabi-Yau models, the Lanczos coefficients grow more slowly than linearly for small $n$, consistent with the behavior of integrable models.
arXiv Detail & Related papers (2022-12-06T12:32:04Z) - Nonconvex Stochastic Scaled-Gradient Descent and Generalized Eigenvector
Problems [98.34292831923335]
Motivated by the problem of online correlation analysis, we propose the Stochastic Scaled-Gradient Descent (SSD) algorithm.
We bring these ideas together in an application to online correlation analysis, deriving for the first time an optimal one-time-scale algorithm with an explicit rate of local convergence to normality.
arXiv Detail & Related papers (2021-12-29T18:46:52Z) - Partial Counterfactual Identification from Observational and
Experimental Data [83.798237968683]
We develop effective Monte Carlo algorithms to approximate the optimal bounds from an arbitrary combination of observational and experimental data.
Our algorithms are validated extensively on synthetic and real-world datasets.
arXiv Detail & Related papers (2021-10-12T02:21:30Z) - Determination of the critical exponents in dissipative phase
transitions: Coherent anomaly approach [51.819912248960804]
We propose a generalization of the coherent anomaly method to extract the critical exponents of a phase transition occurring in the steady-state of an open quantum many-body system.
arXiv Detail & Related papers (2021-03-12T13:16:18Z) - Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm that our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z) - Measuring Model Complexity of Neural Networks with Curve Activation
Functions [100.98319505253797]
We propose the linear approximation neural network (LANN) to approximate a given deep model with curve activation functions.
We experimentally explore the training process of neural networks and detect overfitting.
We find that the $L_1$ and $L_2$ regularizations suppress the increase of model complexity.
arXiv Detail & Related papers (2020-06-16T07:38:06Z)