Machine-learning physics from unphysics: Finding deconfinement
temperature in lattice Yang-Mills theories from outside the scaling window
- URL: http://arxiv.org/abs/2009.10971v2
- Date: Sat, 24 Oct 2020 07:53:49 GMT
- Title: Machine-learning physics from unphysics: Finding deconfinement
temperature in lattice Yang-Mills theories from outside the scaling window
- Authors: D.L. Boyda, M.N. Chernodub, N.V. Gerasimeniuk, V.A. Goy, S.D.
Liubimov, A.V. Molochkov
- Abstract summary: We study machine learning techniques applied to the lattice gauge theory's critical behavior.
We find that a neural network, trained on lattice gauge-field configurations generated at an unphysical value of the lattice parameters, builds up a gauge-invariant function.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study machine learning techniques applied to the lattice gauge
theory's critical behavior, particularly to the confinement/deconfinement phase
transition in the SU(2) and SU(3) gauge theories. We find that a neural
network, trained on lattice configurations of gauge fields generated at an
unphysical value of the lattice parameters, builds up a gauge-invariant
function and finds correlations with the target observable that remain valid in
the physical region of the parameter space. In particular, if the algorithm is
trained to predict the Polyakov loop as the deconfining order parameter, it
builds a trace of the gauge-group matrices along a closed loop in the time
direction. As a result, a neural network trained at a single unphysical value
of the lattice coupling $\beta$ predicts the order parameter over the whole
range of $\beta$ values with good precision. We thus demonstrate that machine
learning techniques may be used as a numerical analog of analytical
continuation from easily accessible but physically uninteresting regions of the
coupling space to the interesting but potentially inaccessible ones.
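The observable the network effectively reconstructs is the Polyakov loop, $L = \frac{1}{V} \sum_{\vec{x}} \frac{1}{N} \,\mathrm{Tr} \prod_{t=0}^{N_t-1} U_0(t, \vec{x})$: the trace of the ordered product of temporal links around the periodic time direction, averaged over spatial sites. The following is a minimal NumPy sketch of this observable only, not the authors' code; the link-field layout (Nt, Nx, Ny, Nz, 4, 2, 2) and the random_su2 helper are illustrative assumptions.

import numpy as np

def random_su2(shape, rng):
    # Uniform random SU(2) matrices via the unit-quaternion parametrization
    # U = a0*I + i*(a1*sigma1 + a2*sigma2 + a3*sigma3), with |a| = 1.
    a = rng.normal(size=shape + (4,))
    a /= np.linalg.norm(a, axis=-1, keepdims=True)
    row0 = np.stack([a[..., 0] + 1j * a[..., 3], a[..., 2] + 1j * a[..., 1]], axis=-1)
    row1 = np.stack([-a[..., 2] + 1j * a[..., 1], a[..., 0] - 1j * a[..., 3]], axis=-1)
    return np.stack([row0, row1], axis=-2)

def polyakov_loop(U):
    # Volume-averaged Polyakov loop: multiply temporal links (direction
    # index 0) around the periodic time direction at each spatial site,
    # then take the gauge-invariant trace.
    Nt, Nx, Ny, Nz, _, N, _ = U.shape
    P = np.broadcast_to(np.eye(N, dtype=U.dtype), (Nx, Ny, Nz, N, N)).copy()
    for t in range(Nt):
        P = P @ U[t, :, :, :, 0]
    return np.trace(P, axis1=-2, axis2=-1).real.mean() / N

# Example on a random configuration (not from a Monte Carlo ensemble):
# the loop average is close to zero, as in the confined phase.
rng = np.random.default_rng(0)
U = random_su2((4, 8, 8, 8, 4), rng)
print(polyakov_loop(U))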
Related papers
- KPZ scaling from the Krylov space [83.88591755871734]
Recently, superdiffusion exhibiting Kardar-Parisi-Zhang (KPZ) scaling in late-time correlators and autocorrelators has been reported.
Inspired by these results, we explore the KPZ scaling in correlation functions using their realization in the Krylov operator basis.
arXiv Detail & Related papers (2024-06-04T20:57:59Z)
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens towards practical utilization of machine learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z)
- Machine learning a fixed point action for SU(3) gauge theory with a gauge equivariant convolutional neural network [0.0]
Fixed point lattice actions are designed to have continuum classical properties unaffected by discretization effects and reduced lattice artifacts at the quantum level.
Here we use machine learning methods to revisit the question of how to parametrize fixed point actions.
arXiv Detail & Related papers (2024-01-12T10:03:00Z)
- Machine learning in and out of equilibrium [58.88325379746631]
Our study uses a Fokker-Planck approach, adapted from statistical physics, to explore these parallels.
We focus in particular on the stationary state of the system in the long-time limit, which in conventional SGD is out of equilibrium.
We propose a new variation of stochastic gradient Langevin dynamics (SGLD) that harnesses without-replacement minibatching (a generic sketch of this update appears after this list).
arXiv Detail & Related papers (2023-06-06T09:12:49Z)
- Deep Quantum Neural Networks are Gaussian Process [0.0]
We present a framework to examine the impact of finite width on the closed-form relationship using a $1/d$ expansion.
We elucidate the relationship between the GP and its parameter-space equivalent, characterized by the Quantum Neural Tangent Kernel (QNTK).
arXiv Detail & Related papers (2023-05-22T03:07:43Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
arXiv Detail & Related papers (2023-02-01T03:18:07Z)
- Adding machine learning within Hamiltonians: Renormalization group transformations, symmetry breaking and restoration [0.0]
We include the predictive function of a neural network, designed for phase classification, as a conjugate variable coupled to an external field within the Hamiltonian of a system.
Results show that the field can induce an order-disorder phase transition by breaking or restoring the symmetry.
We conclude by discussing how the method provides an essential step toward bridging machine learning and physics.
arXiv Detail & Related papers (2020-09-30T18:44:18Z)
- Topological defects and confinement with machine learning: the case of monopoles in compact electrodynamics [0.0]
We train a neural network with a set of monopole configurations to distinguish between confinement and deconfinement phases.
We show that the model can determine the transition temperature to an accuracy that depends on the criteria implemented in the algorithm.
arXiv Detail & Related papers (2020-06-16T12:41:19Z)
- Extending machine learning classification capabilities with histogram reweighting [0.0]
We propose the use of Monte Carlo histogram reweighting to extrapolate predictions of machine learning methods.
We treat the output from a convolutional neural network as an observable in a statistical system, enabling its extrapolation over continuous ranges in parameter space (a minimal sketch of the reweighting step appears after this list).
arXiv Detail & Related papers (2020-04-29T17:20:16Z)
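Sketches referenced above. First, for "Machine learning in and out of equilibrium": a generic implementation of stochastic gradient Langevin dynamics with without-replacement minibatching, where each epoch visits every example exactly once and injects Gaussian noise of variance $2\eta T$ per update. The names (sgld_epoch, grad_loss) are illustrative assumptions, and this is not the specific variant proposed in that paper.

import numpy as np

def sgld_epoch(theta, X, y, grad_loss, lr=1e-3, temperature=1.0, batch=32, rng=None):
    # One SGLD epoch: a without-replacement pass over the data, with
    # Langevin noise scaled by sqrt(2 * lr * temperature) in each step.
    rng = np.random.default_rng() if rng is None else rng
    order = rng.permutation(len(X))           # each example visited exactly once
    for start in range(0, len(X), batch):
        idx = order[start:start + batch]
        g = grad_loss(theta, X[idx], y[idx])  # minibatch gradient estimate
        noise = rng.normal(size=theta.shape)  # injected Gaussian noise
        theta = theta - lr * g + np.sqrt(2.0 * lr * temperature) * noise
    return theta

# Example: linear regression with the gradient of the mean squared loss.
rng = np.random.default_rng(1)
X = rng.normal(size=(256, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=256)
grad = lambda w, Xb, yb: 2.0 * Xb.T @ (Xb @ w - yb) / len(Xb)
w = np.zeros(3)
for _ in range(200):
    w = sgld_epoch(w, X, y, grad, lr=1e-3, temperature=1e-4, rng=rng)
print(w)  # approaches true_w up to the injected noise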
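Second, for "Extending machine learning classification capabilities with histogram reweighting": a minimal single-histogram reweighting sketch. Assuming configurations sampled at coupling $\beta_0$ with Boltzmann weight $e^{\beta_0 S_i}$ (Wilson-action convention, $S_i$ the plaquette sum of configuration $i$), a network output $O_i$ measured per configuration can be extrapolated to a nearby $\beta$ via $\langle O \rangle_\beta = \sum_i O_i\, e^{(\beta - \beta_0) S_i} / \sum_i e^{(\beta - \beta_0) S_i}$. The code below is an illustrative assumption, not the paper's implementation.

import numpy as np

def reweight(O, S, beta0, beta):
    # Single-histogram reweighting of per-configuration observables O,
    # with S the action term that multiplies beta in the Boltzmann weight.
    logw = (beta - beta0) * S
    w = np.exp(logw - logw.max())  # subtract the max exponent for stability
    return np.sum(w * O) / np.sum(w)

# Example with synthetic numbers: a network output measured at beta0 = 2.20,
# extrapolated to nearby couplings (all values illustrative).
rng = np.random.default_rng(2)
S = rng.normal(loc=1000.0, scale=10.0, size=500)  # plaquette sums
O = 1.0 / (1.0 + np.exp(-(S - 1000.0) / 10.0))    # a monotone NN output
for beta in (2.15, 2.20, 2.25):
    print(beta, reweight(O, S, 2.20, beta))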