On the renormalization group fixed point of the two-dimensional Ising model at criticality
- URL: http://arxiv.org/abs/2304.03224v1
- Date: Thu, 6 Apr 2023 16:57:28 GMT
- Title: On the renormalization group fixed point of the two-dimensional Ising model at criticality
- Authors: Tobias J. Osborne and Alexander Stottmeister
- Abstract summary: We show that a simple, explicit analytic description of a fixed point using operator-algebraic renormalization (OAR) is possible.
Specifically, the fixed point is characterized in terms of spin-spin correlation functions.
- Score: 77.34726150561087
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We analyze the renormalization group fixed point of the two-dimensional Ising
model at criticality. In contrast with expectations from tensor network
renormalization (TNR), we show that a simple, explicit analytic description of
this fixed point using operator-algebraic renormalization (OAR) is possible.
Specifically, the fixed point is characterized in terms of spin-spin
correlation functions. Explicit error bounds for the approximation of continuum
correlation functions are given.
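For orientation, the long-distance behaviour that the approximated spin-spin correlation functions must reproduce is the standard algebraic decay at the critical point of the square-lattice Ising model; the LaTeX below states this textbook result (the constant A and the lattice normalization are not taken from the paper).

```latex
% Critical point of the square-lattice Ising model (Kramers-Wannier/Onsager):
%   sinh(2J / k_B T_c) = 1,  i.e.  k_B T_c = 2J / ln(1 + sqrt(2)).
% At T = T_c the spin-spin correlation decays algebraically with exponent eta = 1/4:
\[
  \langle \sigma_{0}\, \sigma_{\mathbf{r}} \rangle_{T_c}
  \;\sim\; \frac{A}{|\mathbf{r}|^{1/4}},
  \qquad |\mathbf{r}| \to \infty,
\]
% consistent with the scaling dimension \Delta_\sigma = 1/8 of the spin field
% in the c = 1/2 Ising conformal field theory.
```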
Related papers
- High-Dimensional Kernel Methods under Covariate Shift: Data-Dependent Implicit Regularization [83.06112052443233]
This paper studies kernel ridge regression in high dimensions under covariate shift.
Via a bias-variance decomposition, we theoretically show that the re-weighting strategy reduces the variance.
For the bias, we analyze regularization at an arbitrary or well-chosen scale, showing that the bias can behave very differently under different regularization scales.
arXiv Detail & Related papers (2024-06-05T12:03:27Z)
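For the covariate-shift entry above, the following is a minimal sketch of the re-weighting idea in kernel ridge regression, assuming an RBF kernel and a given density-ratio weight per training point; it is illustrative only and not the paper's exact estimator.

```python
# Importance-weighted kernel ridge regression: source-distribution training points
# are re-weighted by (an estimate of) p_target(x_i) / p_source(x_i) before solving
# the regularized least-squares problem in the RKHS.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian/RBF kernel matrix K[i, j] = exp(-gamma * ||A_i - B_j||^2).
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def weighted_krr_fit(X, y, weights, lam=1e-2, gamma=1.0):
    # Solve (W K + n * lam * I) alpha = W y with W = diag(weights);
    # the scaling of lam is a modeling choice.
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    W = np.diag(weights)
    return np.linalg.solve(W @ K + n * lam * np.eye(n), W @ y)

def weighted_krr_predict(X_train, alpha, X_test, gamma=1.0):
    return rbf_kernel(X_test, X_train, gamma) @ alpha

# Toy usage with a synthetic, purely illustrative density-ratio weight.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # source samples
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
weights = np.exp(0.5 * X[:, 0])          # stand-in for p_target / p_source
weights /= weights.mean()
alpha = weighted_krr_fit(X, y, weights)
y_pred = weighted_krr_predict(X, alpha, X[:5])
```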
- Weakly Convex Regularisers for Inverse Problems: Convergence of Critical Points and Primal-Dual Optimisation [12.455342327482223]
We present a generalised formulation of convergent regularisation in terms of critical points.
We show that this is achieved by a class of weakly convex regularisers.
Applying this theory to learned regularisation, we prove universal approximation for input weakly convex neural networks.
arXiv Detail & Related papers (2024-02-01T22:54:45Z)
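For the weakly convex regularisation entry above, the LaTeX below recalls the standard definition of rho-weak convexity and the generic variational problem it is used in; the paper's precise formulation may differ.

```latex
% rho-weak convexity (standard definition): R is rho-weakly convex if
\[
  x \;\mapsto\; R(x) + \tfrac{\rho}{2}\,\|x\|_2^2 \quad \text{is convex.}
\]
% Learned variational regularisation of an inverse problem with forward operator A
% and data y then solves
\[
  \min_{x}\; \tfrac{1}{2}\,\|A x - y\|_2^2 \;+\; \alpha\, R_{\theta}(x),
\]
% where R_theta is, e.g., a weakly convex neural network acting on the input x.
```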
- Renormalization Group Analysis of the Anderson Model on Random Regular Graphs [0.0]
We present a renormalization group analysis of the problem of Anderson localization on a Random Regular Graph (RRG).
We show that the one-parameter scaling hypothesis is recovered for sufficiently large system sizes for both eigenstate and spectral observables.
We also explain the non-monotonic behavior of dynamical and spectral quantities as a function of the system size for values of disorder close to the transition.
arXiv Detail & Related papers (2023-06-26T18:00:13Z)
- Nonconvex Stochastic Scaled-Gradient Descent and Generalized Eigenvector Problems [98.34292831923335]
Motivated by the problem of online correlation analysis, we propose the Stochastic Scaled-Gradient Descent (SSD) algorithm.
We bring these ideas together in an application to online correlation analysis, deriving for the first time an optimal one-time-scale algorithm with an explicit rate of local convergence to normality.
arXiv Detail & Related papers (2021-12-29T18:46:52Z)
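For the generalized eigenvector entry above, the sketch below shows generic stochastic ascent on the generalized Rayleigh quotient, which is the problem class SSD targets; the update rule, batch size, and data model here are illustrative assumptions, not the paper's SSD algorithm.

```python
# Stochastic ascent on the generalized Rayleigh quotient
#   rho(w) = (w' A w) / (w' B w),
# whose maximizer is the top generalized eigenvector A w = lambda B w.
import numpy as np

def stochastic_rayleigh_ascent(sample_AB, dim, steps=5000, eta=0.01, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.normal(size=dim)
    w /= np.linalg.norm(w)
    for _ in range(steps):
        A_hat, B_hat = sample_AB(rng)          # minibatch estimates of A and B
        Bw = B_hat @ w
        denom = max(w @ Bw, 1e-8)
        rho = (w @ (A_hat @ w)) / denom
        # Plug-in stochastic estimate of grad rho(w) = 2 (A w - rho B w) / (w' B w).
        w = w + eta * 2.0 * (A_hat @ w - rho * Bw) / denom
        w /= np.linalg.norm(w)                 # rho is scale-invariant; stay on the sphere
    return w

# Toy usage: recover the top generalized eigenvector of fixed (A, B) from samples.
d = 5
M = np.random.default_rng(1).normal(size=(d, d))
A_true = M @ M.T + np.eye(d)
B_true = np.eye(d)

def sample_AB(rng, batch=8):
    X = rng.multivariate_normal(np.zeros(d), A_true, size=batch)
    Z = rng.multivariate_normal(np.zeros(d), B_true, size=batch)
    return X.T @ X / batch, Z.T @ Z / batch

w_est = stochastic_rayleigh_ascent(sample_AB, d)
```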
- Local versions of sum-of-norms clustering [77.34726150561087]
We show that our method can separate arbitrarily close balls in the ball model.
We prove a quantitative bound on the error incurred in the clustering of disjoint connected sets.
arXiv Detail & Related papers (2021-09-20T14:45:29Z)
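For the sum-of-norms clustering entry above, the LaTeX below recalls the standard (global) sum-of-norms objective; the local versions studied in that paper modify the pairwise penalty, see the paper for details.

```latex
% Sum-of-norms (convex) clustering of data points x_1, ..., x_n in R^d:
\[
  \min_{u_1,\dots,u_n \in \mathbb{R}^d}\;
  \frac{1}{2}\sum_{i=1}^{n} \|u_i - x_i\|_2^2
  \;+\; \lambda \sum_{i<j} \|u_i - u_j\|_2 ,
\]
% points i and j are placed in the same cluster exactly when the optimal
% centroids coincide, u_i^* = u_j^*.
```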
- Benign Overfitting of Constant-Stepsize SGD for Linear Regression [122.70478935214128]
Inductive biases are empirically central to preventing overfitting.
This work considers this issue in arguably the most basic setting: constant-stepsize SGD for linear regression.
We reflect on a number of notable differences between the algorithmic regularization afforded by (unregularized) SGD and that afforded by ordinary least squares.
arXiv Detail & Related papers (2021-03-23T17:15:53Z)
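For the benign-overfitting entry above, the following is a minimal sketch of single-pass, constant-stepsize SGD for linear regression with tail averaging; the step size, averaging window, and Gaussian data model are illustrative choices, not the paper's assumptions.

```python
# Single-pass, constant-stepsize SGD for linear regression, returning the
# tail-averaged iterate (one standard choice in this literature).
import numpy as np

def constant_stepsize_sgd(X, y, gamma=0.01, tail_frac=0.5):
    n, d = X.shape
    w = np.zeros(d)
    iterates = []
    for i in range(n):                       # one pass over the data stream
        resid = X[i] @ w - y[i]
        w = w - gamma * resid * X[i]         # constant step size gamma
        iterates.append(w.copy())
    tail = iterates[int((1 - tail_frac) * n):]
    return np.mean(tail, axis=0)             # tail-averaged iterate

# Toy over-parameterized problem: d > n, noisy labels from a ground-truth w_star.
rng = np.random.default_rng(0)
n, d = 200, 500
X = rng.normal(size=(n, d)) / np.sqrt(d)
w_star = rng.normal(size=d)
y = X @ w_star + 0.1 * rng.normal(size=n)
w_hat = constant_stepsize_sgd(X, y)
```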
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
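For the single index model entry above, the LaTeX below recalls the standard definition of the model; the paper's high-dimensional, over-parameterized setting adds assumptions not spelled out here.

```latex
% The single index model referenced above:
\[
  y \;=\; f\!\left(\langle x, \beta^{*} \rangle\right) + \varepsilon,
  \qquad x \in \mathbb{R}^{d},
\]
% with unknown link function f and unknown index vector beta^*; the goal is to
% recover the direction of beta^* (up to scale), here with d allowed to be large.
```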
- Asymptotics of Ridge(less) Regression under General Source Condition [26.618200633139256]
We consider the role played by the structure of the true regression parameter.
We show that the ridgeless limit (no regularisation) can be optimal even with bounded signal-to-noise ratio (SNR).
This contrasts with previous work considering ridge regression with an isotropic prior, in which case vanishing regularisation is only optimal in the limit of infinite SNR.
arXiv Detail & Related papers (2020-06-11T13:00:21Z)
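For the ridge(less) regression entry above, the LaTeX below recalls the ridge estimator and its ridgeless limit; penalty normalization conventions vary between papers.

```latex
% Ridge regression on data (X, y) and its "ridgeless" limit:
\[
  \hat{\beta}_{\lambda} \;=\; \bigl(X^{\top}X + \lambda I\bigr)^{-1} X^{\top} y,
  \qquad
  \hat{\beta}_{0} \;=\; \lim_{\lambda \to 0^{+}} \hat{\beta}_{\lambda} \;=\; X^{+} y,
\]
% where X^+ is the Moore-Penrose pseudoinverse; beta_0 is the minimum-l2-norm
% least-squares solution and interpolates the data in the over-parameterized regime
% (when X has full row rank).
```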