Analytical derivation and extension of the anti-Kibble-Zurek scaling in the transverse field Ising model
- URL: http://arxiv.org/abs/2404.17247v2
- Date: Mon, 27 May 2024 11:28:55 GMT
- Title: Analytical derivation and extension of the anti-Kibble-Zurek scaling in the transverse field Ising model
- Authors: Kaito Iwamura, Takayuki Suzuki
- Abstract summary: We analytically investigate the effect of white noise on the transition probabilities of the Landau-Zener model.
Our analysis reveals that when the introduced noise is small, the model follows the previously known anti-Kibble-Zurek scaling.
On the other hand, as the noise increases, a new scaling behavior emerges.
- Score: 0.29465623430708904
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: A defect density, which quantifies the deviation from the spin ground state, characterizes non-equilibrium dynamics during phase transitions. The widely recognized Kibble-Zurek scaling predicts how the defect density evolves during phase transitions. However, it can be perturbed by noise, leading to the anti-Kibble-Zurek scaling. In this research, we analytically investigate the effect of Gaussian white noise on the transition probabilities of the Landau-Zener model. We apply this analysis to the one-dimensional transverse field Ising model and obtain an analytical approximate solution of the defect density. Our analysis reveals that when the introduced noise is small, the model follows the previously known anti-Kibble-Zurek scaling. On the other hand, as the noise increases, a new scaling behavior emerges. Furthermore, we identify the parameter that minimizes the defect density based on the new scaling, which allows us to assess how effective the previously known scaling of the optimized parameter is.
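To make the trade-off behind the anti-Kibble-Zurek scaling concrete, here is a minimal sketch of the small-noise defect-density form reported in prior anti-KZ work, for a ramp of duration $\tau_Q$ and noise strength $W$; the constants $a$ and $b$ are model-dependent placeholders (assumptions, not values from the paper), and the crossover to the new large-noise scaling is precisely what the paper derives:

```latex
% Hedged sketch: small-noise anti-KZ trade-off for the 1D transverse field Ising model.
% a, b are model-dependent constants (assumed here for illustration).
\begin{align}
  n(\tau_Q) &\approx a\,\tau_Q^{-1/2} + b\,W^{2}\,\tau_Q
    && \text{(KZ ramp term + noise-induced excitation)} \\
  \frac{\partial n}{\partial \tau_Q} = 0
    \;&\Rightarrow\;
    \tau_Q^{\mathrm{opt}} = \left(\frac{a}{2\,b\,W^{2}}\right)^{2/3}
    \propto W^{-4/3}, \\
  n\!\left(\tau_Q^{\mathrm{opt}}\right) &\propto W^{2/3}.
\end{align}
```

At small $W$ the first term dominates and the familiar anti-KZ behavior is recovered; the paper's contribution is an analytical Landau-Zener treatment that also covers the regime where the noise term is no longer a small correction, which yields the new scaling and a direct check of how well $\tau_Q^{\mathrm{opt}} \propto W^{-4/3}$ survives there.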
Related papers
- Non-interferometric rotational test of the Continuous Spontaneous Localisation model: enhancement of the collapse noise through shape optimisation [0.0]
We derive an upper bound on the parameters of the Continuous Spontaneous Localisation model by applying it to the rotational noise measured in a recent short-distance gravity experiment.
We find that, despite being a table-top experiment, the bound is only one order of magnitude weaker than that from LIGO for the relevant values of the collapse parameter.
arXiv Detail & Related papers (2024-02-20T14:52:00Z) - Dynamic Addition of Noise in a Diffusion Model for Anomaly Detection [2.209921757303168]
Diffusion models have found valuable applications in anomaly detection by capturing the nominal data distribution and identifying anomalies via reconstruction.
Despite their merits, they struggle to localize anomalies of varying scales, especially larger anomalies such as entire missing components.
We present a novel framework that enhances the capability of diffusion models by extending the implicit conditioning approach previously introduced by Meng et al. (2022) in three significant ways.
arXiv Detail & Related papers (2024-01-09T09:57:38Z) - Anomaly Detection with Variance Stabilized Density Estimation [49.46356430493534]
We present a variance-stabilized density estimation problem for maximizing the likelihood of the observed samples.
To obtain a reliable anomaly detector, we introduce a spectral ensemble of autoregressive models for learning the variance-stabilized distribution.
We have conducted an extensive benchmark with 52 datasets, demonstrating that our method leads to state-of-the-art results.
arXiv Detail & Related papers (2023-06-01T11:52:58Z) - Doubly Stochastic Models: Learning with Unbiased Label Noises and
Inference Stability [85.1044381834036]
We investigate the implicit regularization effects of label noises under mini-batch sampling settings of gradient descent.
We find that such an implicit regularizer favors convergence points that can stabilize model outputs against perturbations of the parameters.
Our work does not assume SGD to be an Ornstein-Uhlenbeck-like process and proves convergence of the approximation, yielding a more general result.
arXiv Detail & Related papers (2023-04-01T14:09:07Z) - High-Order Qubit Dephasing at Sweet Spots by Non-Gaussian Fluctuators:
Symmetry Breaking and Floquet Protection [55.41644538483948]
We study the qubit dephasing caused by the non-Gaussian fluctuators.
We predict a symmetry-breaking effect that is unique to the non-Gaussian noise.
arXiv Detail & Related papers (2022-06-06T18:02:38Z) - Quantifying Model Predictive Uncertainty with Perturbation Theory [21.591460685054546]
We propose a framework for predictive uncertainty quantification of a neural network.
We use perturbation theory from quantum physics to formulate a moment decomposition problem.
Our approach provides fast model predictive uncertainty estimates with much greater precision and calibration.
arXiv Detail & Related papers (2021-09-22T17:55:09Z) - Revisiting the Characteristics of Stochastic Gradient Noise and Dynamics [25.95229631113089]
We show that the gradient noise possesses finite variance, and therefore the Central Limit Theorem (CLT) applies.
We then demonstrate the existence of the steady-state distribution of gradient descent and approximate the distribution at a small learning rate.
arXiv Detail & Related papers (2021-09-20T20:39:14Z) - Differentiable Annealed Importance Sampling and the Perils of Gradient
Noise [68.44523807580438]
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation.
Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective.
We propose a differentiable algorithm by abandoning Metropolis-Hastings steps, which further unlocks mini-batch computation.
arXiv Detail & Related papers (2021-07-21T17:10:14Z) - Asymmetric Heavy Tails and Implicit Bias in Gaussian Noise Injections [73.95786440318369]
We focus on the so-called 'implicit effect' of GNIs, which is the effect of the injected noise on the dynamics of stochastic gradient descent (SGD).
We show that this effect induces an asymmetric heavy-tailed noise on gradient updates.
We then formally prove that GNIs induce an 'implicit bias', which varies depending on the heaviness of the tails and the level of asymmetry.
arXiv Detail & Related papers (2021-02-13T21:28:09Z) - Shape Matters: Understanding the Implicit Bias of the Noise Covariance [76.54300276636982]
Noise in gradient descent provides a crucial implicit regularization effect for training over-parameterized models.
We show that parameter-dependent noise -- induced by mini-batches or label perturbation -- is far more effective than Gaussian noise.
Our analysis reveals that parameter-dependent noise introduces a bias towards local minima with smaller noise variance, whereas spherical Gaussian noise does not.
arXiv Detail & Related papers (2020-06-15T18:31:02Z)