Analytical derivation and extension of the anti-Kibble-Zurek scaling in the transverse field Ising model
- URL: http://arxiv.org/abs/2404.17247v3
- Date: Mon, 23 Sep 2024 06:32:44 GMT
- Title: Analytical derivation and extension of the anti-Kibble-Zurek scaling in the transverse field Ising model
- Authors: Kaito Iwamura, Takayuki Suzuki
- Abstract summary: The defect density, which quantifies the deviation from the spin ground state, characterizes non-equilibrium dynamics during phase transitions.
The widely recognized Kibble-Zurek scaling predicts how the defect density evolves during phase transitions.
However, this scaling can be perturbed by noise, leading to the anti-Kibble-Zurek scaling.
- Score: 0.29465623430708904
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The defect density, which quantifies the deviation from the spin ground state, characterizes non-equilibrium dynamics during phase transitions. The widely recognized Kibble-Zurek scaling predicts how the defect density evolves during phase transitions. However, this scaling can be perturbed by noise, leading to the anti-Kibble-Zurek scaling. In this research, we analytically investigate the effect of Gaussian white noise on the transition probabilities of the Landau-Zener model. We apply this analysis to the one-dimensional transverse field Ising model and obtain an analytical approximate solution for the defect density. Our analysis reveals that when the introduced noise is small, the model follows the previously known anti-Kibble-Zurek scaling. Conversely, when the noise increases, the scaling can be obtained by using the adiabatic approximation. This result indicates that deriving the anti-Kibble-Zurek scaling does not require solving differential equations; instead, it can be achieved simply by applying the adiabatic approximation. Furthermore, we identify the parameter that minimizes the defect density based on the new scaling, which allows us to verify how effective the previously known scaling of the optimized parameter is.
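As a rough numerical companion to the abstract, the sketch below evaluates the standard noiseless Landau-Zener excitation probability and the anti-Kibble-Zurek ansatz n(tau) ≈ A tau^(-1/2) + B W^2 tau commonly cited in the anti-KZ literature, then locates the quench time that minimizes the defect density. The ansatz, the prefactors A and B, and the noise strength W are illustrative assumptions for this sketch, not values or results taken from the paper.

```python
import numpy as np

def landau_zener_probability(gap, sweep_rate, hbar=1.0):
    """Noiseless Landau-Zener excitation probability: exp(-2*pi*gap^2 / (hbar*v))."""
    return np.exp(-2.0 * np.pi * gap**2 / (hbar * sweep_rate))

def defect_density(tau, noise_strength, A=1.0, B=1.0):
    """Anti-Kibble-Zurek ansatz: a KZ term ~ tau^(-1/2) (1D transverse field
    Ising value) plus a noise-induced heating term that grows linearly with
    the quench time tau. A and B are illustrative prefactors."""
    return A * tau ** (-0.5) + B * noise_strength**2 * tau

if __name__ == "__main__":
    # A slow sweep through a small gap is nearly adiabatic (small excitation probability).
    print("P_LZ =", landau_zener_probability(gap=0.1, sweep_rate=0.01))

    # Scan quench times and locate the minimum of the defect density;
    # for this ansatz the optimum scales as tau_opt ~ W^(-4/3).
    W = 0.05
    taus = np.logspace(0, 4, 4000)
    n = defect_density(taus, W)
    tau_opt = taus[np.argmin(n)]
    print(f"tau_opt ~ {tau_opt:.1f}  (analytic: (1/(2*W**2))**(2/3) = {(1/(2*W**2))**(2/3):.1f})")
```

Minimizing the ansatz analytically gives tau_opt proportional to W^(-4/3), which the numerical scan reproduces; whether the paper's extended scaling modifies this optimum is exactly the kind of question its analysis addresses.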
Related papers
- Anomaly Detection with Variance Stabilized Density Estimation [49.46356430493534]
We present a variance-stabilized density estimation problem for maximizing the likelihood of the observed samples.
To obtain a reliable anomaly detector, we introduce a spectral ensemble of autoregressive models for learning the variance-stabilized distribution.
We have conducted an extensive benchmark with 52 datasets, demonstrating that our method leads to state-of-the-art results.
arXiv Detail & Related papers (2023-06-01T11:52:58Z) - Hotelling Deflation on Large Symmetric Spiked Tensors [10.706763980556445]
We provide a precise characterization of the large-dimensional performance of deflation in terms of the alignments of the vectors obtained by successive rank-1 approximation.
Our analysis allows an understanding of the deflation mechanism in the presence of noise and can be exploited for designing more efficient signal estimation methods.
arXiv Detail & Related papers (2023-04-20T12:16:05Z) - Doubly Stochastic Models: Learning with Unbiased Label Noises and Inference Stability [85.1044381834036]
We investigate the implicit regularization effects of label noises under mini-batch sampling settings of gradient descent.
We find that such an implicit regularizer favors convergence points that stabilize model outputs against perturbations of the parameters.
Our work does not assume SGD behaves as an Ornstein-Uhlenbeck-like process and achieves a more general result, with convergence of the approximation proved.
arXiv Detail & Related papers (2023-04-01T14:09:07Z) - Frequency estimation under non-Markovian spatially correlated quantum noise: Restoring superclassical precision scaling [0.0]
We study the Ramsey estimation precision attainable by entanglement-enhanced interferometry in the presence of correlated non-classical noise.
In a paradigmatic case of spin-boson dephasing noise from a thermal environment, we find that it is possible to suppress, on average, the effect of correlations by randomizing the location of probes.
arXiv Detail & Related papers (2022-04-22T16:25:16Z) - Quantifying Model Predictive Uncertainty with Perturbation Theory [21.591460685054546]
We propose a framework for predictive uncertainty quantification of a neural network.
We use perturbation theory from quantum physics to formulate a moment decomposition problem.
Our approach provides fast model predictive uncertainty estimates with much greater precision and calibration.
arXiv Detail & Related papers (2021-09-22T17:55:09Z) - Revisiting the Characteristics of Stochastic Gradient Noise and Dynamics [25.95229631113089]
We show that the gradient noise possesses finite variance, and therefore the Central Limit Theorem (CLT) applies.
We then demonstrate the existence of the steady-state distribution of gradient descent and approximate the distribution at a small learning rate.
arXiv Detail & Related papers (2021-09-20T20:39:14Z) - Differentiable Annealed Importance Sampling and the Perils of Gradient Noise [68.44523807580438]
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation.
Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective.
We propose a differentiable algorithm by abandoning Metropolis-Hastings steps, which further unlocks mini-batch computation.
arXiv Detail & Related papers (2021-07-21T17:10:14Z) - Interpolation and Learning with Scale Dependent Kernels [91.41836461193488]
We study the learning properties of nonparametric ridgeless least squares.
We consider the common case of estimators defined by scale dependent kernels.
arXiv Detail & Related papers (2020-06-17T16:43:37Z) - Shape Matters: Understanding the Implicit Bias of the Noise Covariance [76.54300276636982]
Noise in gradient descent provides a crucial implicit regularization effect for training over-parameterized models.
We show that parameter-dependent noise -- induced by mini-batches or label perturbation -- is far more effective than Gaussian noise.
Our analysis reveals that parameter-dependent noise introduces a bias towards local minima with smaller noise variance, whereas spherical Gaussian noise does not.
arXiv Detail & Related papers (2020-06-15T18:31:02Z) - Path Sample-Analytic Gradient Estimators for Stochastic Binary Networks [78.76880041670904]
In neural networks with binary activations and/or binary weights, training by gradient descent is complicated.
We propose a new method for this estimation problem combining sampling and analytic approximation steps.
We experimentally show higher accuracy in gradient estimation and demonstrate a more stable and better performing training in deep convolutional models.
arXiv Detail & Related papers (2020-06-04T21:51:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.