Machine learning a fixed point action for SU(3) gauge theory with a gauge equivariant convolutional neural network
- URL: http://arxiv.org/abs/2401.06481v2
- Date: Wed, 02 Oct 2024 22:51:02 GMT
- Title: Machine learning a fixed point action for SU(3) gauge theory with a gauge equivariant convolutional neural network
- Authors: Kieran Holland, Andreas Ipp, David I. Müller, Urs Wenger
- Abstract summary: Fixed point lattice actions are designed to have continuum classical properties unaffected by discretization effects and reduced lattice artifacts at the quantum level.
Here we use machine learning methods to revisit the question of how to parametrize fixed point actions.
- Abstract: Fixed point lattice actions are designed to have continuum classical properties unaffected by discretization effects and reduced lattice artifacts at the quantum level. They provide a possible way to extract continuum physics with coarser lattices, thereby allowing one to circumvent problems with critical slowing down and topological freezing toward the continuum limit. A crucial ingredient for practical applications is to find an accurate and compact parametrization of a fixed point action, since many of its properties are only implicitly defined. Here we use machine learning methods to revisit the question of how to parametrize fixed point actions. In particular, we obtain a fixed point action for four-dimensional SU(3) gauge theory using convolutional neural networks with exact gauge invariance. The large operator space allows us to find superior parametrizations compared to previous studies, a necessary first step for future Monte Carlo simulations and scaling studies.
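The exact gauge invariance mentioned in the abstract rests on the standard lattice construction: traces of closed loops of link variables are unchanged under local gauge transformations. As a minimal illustration of that building block (a NumPy sketch with hypothetical function names, not the paper's network or its fixed point action), the following code puts random SU(3) links on a small 2D periodic lattice and checks that the Wilson plaquette action is exactly gauge invariant:

```python
import numpy as np

def random_su3(rng):
    """Draw an (approximately Haar-)random SU(3) matrix via QR decomposition."""
    z = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    q = q * (d / np.abs(d))                 # fix column phases: now in U(3)
    return q / np.linalg.det(q) ** (1 / 3)  # rescale so det = 1: now in SU(3)

def plaquette_action(U):
    """Wilson plaquette action S = sum_x (1 - Re tr P(x) / 3) on an L x L
    periodic lattice; U has shape (L, L, 2, 3, 3), one link per site/direction."""
    L = U.shape[0]
    S = 0.0
    for x in range(L):
        for y in range(L):
            xp, yp = (x + 1) % L, (y + 1) % L
            P = (U[x, y, 0] @ U[xp, y, 1]
                 @ U[x, yp, 0].conj().T @ U[x, y, 1].conj().T)
            S += 1.0 - np.real(np.trace(P)) / 3.0
    return S

def gauge_transform(U, Omega):
    """Local gauge transformation U_mu(x) -> Omega(x) U_mu(x) Omega(x+mu)^dag."""
    L = U.shape[0]
    V = np.empty_like(U)
    for x in range(L):
        for y in range(L):
            V[x, y, 0] = Omega[x, y] @ U[x, y, 0] @ Omega[(x + 1) % L, y].conj().T
            V[x, y, 1] = Omega[x, y] @ U[x, y, 1] @ Omega[x, (y + 1) % L].conj().T
    return V

rng = np.random.default_rng(0)
L = 4
U = np.array([[[random_su3(rng) for _ in range(2)] for _ in range(L)] for _ in range(L)])
Omega = np.array([[random_su3(rng) for _ in range(L)] for _ in range(L)])

# The trace of every closed loop, and hence the action, is gauge invariant.
assert abs(plaquette_action(U) - plaquette_action(gauge_transform(U, Omega))) < 1e-9
```

A gauge equivariant CNN builds its layers out of such loop and link combinations, so invariance of the final action is preserved by construction rather than learned.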
Related papers
- CWF: Consolidating Weak Features in High-quality Mesh Simplification [50.634070540791555]
We propose a smooth functional that simultaneously considers all of these requirements.
The functional comprises a normal anisotropy term and a Centroidal Voronoi Tessellation (CVT) energy term.
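A CVT energy of this kind is typically decreased by Lloyd-style relaxation, in which each site moves to the centroid of its Voronoi cell. A one-dimensional toy sketch (hypothetical names, not the paper's implementation):

```python
import numpy as np

def lloyd_1d(points, sites, iters=60):
    """Lloyd relaxation in 1-D: each site moves to the centroid of its
    Voronoi cell, decreasing the CVT energy
    E = sum_k sum_{p in cell_k} (p - site_k)^2."""
    sites = sites.astype(float).copy()
    for _ in range(iters):
        # Assign each point to its nearest site (its Voronoi cell).
        cell = np.argmin(np.abs(points[:, None] - sites[None, :]), axis=1)
        for k in range(len(sites)):
            if np.any(cell == k):
                sites[k] = points[cell == k].mean()  # move site to centroid
    return sites

# Uniform points on [0, 1] with two sites relax toward 0.25 and 0.75.
points = np.linspace(0.0, 1.0, 1001)
sites = lloyd_1d(points, np.array([0.1, 0.2]))
```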
arXiv Detail & Related papers (2024-04-24T05:37:17Z)
- A Mean-Field Analysis of Neural Stochastic Gradient Descent-Ascent for Functional Minimax Optimization [90.87444114491116]
This paper studies minimax optimization problems defined over infinite-dimensional function classes of overparametricized two-layer neural networks.
We address (i) the convergence of the gradient descent-ascent algorithm and (ii) the representation learning of the neural networks.
Results show that the feature representation induced by the neural networks is allowed to deviate from the initial one by a magnitude of $O(\alpha^{-1})$, measured in terms of the Wasserstein distance.
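For intuition on the underlying algorithm (a toy finite-dimensional sketch, not the paper's mean-field functional setting), simultaneous gradient descent-ascent on a strongly convex-concave quadratic converges to its saddle point:

```python
import numpy as np

# Toy convex-concave objective f(x, y) = 0.5*x**2 + x*y - 0.5*y**2,
# with unique saddle point at (0, 0).
def gda(x, y, eta=0.1, steps=500):
    """Simultaneous gradient descent-ascent: descend in x, ascend in y."""
    for _ in range(steps):
        gx = x + y   # df/dx
        gy = x - y   # df/dy
        x, y = x - eta * gx, y + eta * gy
    return x, y

x, y = gda(1.0, 1.0)  # spirals in toward the saddle point (0, 0)
```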
arXiv Detail & Related papers (2024-04-18T16:46:08Z)
- Third quantization of open quantum systems: new dissipative symmetries and connections to phase-space and Keldysh field theory formulations [77.34726150561087]
We reformulate the technique of third quantization in a way that explicitly connects all three methods.
We first show that our formulation reveals a fundamental dissipative symmetry present in all quadratic bosonic or fermionic Lindbladians.
For bosons, we then show that the Wigner function and the characteristic function can be thought of as ''wavefunctions'' of the density matrix.
arXiv Detail & Related papers (2023-02-27T18:56:40Z)
- Universality of critical dynamics with finite entanglement [68.8204255655161]
We study how low-energy dynamics of quantum systems near criticality are modified by finite entanglement.
Our result establishes the precise role played by entanglement in time-dependent critical phenomena.
arXiv Detail & Related papers (2023-01-23T19:23:54Z)
- Optimizing one-axis twists for variational Bayesian quantum metrology [0.0]
In particular, qubit phase estimation, or rotation sensing, appears as a ubiquitous problem with applications to electric field sensing, magnetometry, atomic clocks, and gyroscopes.
We propose a new family of parametrized encoding and decoding protocols called arbitrary-axis twist ansatzes.
We show that it can lead to a substantial reduction in the number of one-axis twists needed to achieve a target estimation error.
arXiv Detail & Related papers (2022-12-23T16:45:15Z)
- Detecting Rotated Objects as Gaussian Distributions and Its 3-D Generalization [81.29406957201458]
Existing detection methods commonly use a parameterized bounding box (BBox) to model and detect (horizontal) objects.
We argue that such a mechanism has fundamental limitations in building an effective regression loss for rotation detection.
We propose to model the rotated objects as Gaussian distributions.
We extend our approach from 2-D to 3-D with a tailored algorithm design to handle the heading estimation.
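The Gaussian modeling step is simple to state: a rotated box maps to a Gaussian whose mean is the box center and whose covariance is the rotated diagonal of squared half-extents. A hedged sketch (function name hypothetical; the paper's exact convention may differ):

```python
import numpy as np

def rbox_to_gaussian(cx, cy, w, h, theta):
    """Map a rotated box (center, width, height, angle in radians) to a 2-D
    Gaussian: mean = center, covariance = R diag((w/2)^2, (h/2)^2) R^T."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    Sigma = R @ np.diag([(w / 2) ** 2, (h / 2) ** 2]) @ R.T
    return np.array([cx, cy]), Sigma

# A box, and the same box with width/height swapped and rotated by 90 degrees,
# describe the same region and map to the same Gaussian; this removes the
# boundary-discontinuity ambiguity of angle-based box regression.
mu1, S1 = rbox_to_gaussian(0.0, 0.0, 4.0, 2.0, 0.0)
mu2, S2 = rbox_to_gaussian(0.0, 0.0, 2.0, 4.0, np.pi / 2)
```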
arXiv Detail & Related papers (2022-09-22T07:50:48Z)
- A view of mini-batch SGD via generating functions: conditions of convergence, phase transitions, benefit from negative momenta [14.857119814202754]
Mini-batch SGD with momentum is a fundamental algorithm for learning large predictive models.
We develop a new analytic framework to analyze mini-batch SGD for linear models at different momenta and sizes of batches.
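As a concrete reference point for the algorithm being analyzed (a minimal sketch, not the paper's generating-function framework), mini-batch SGD with heavy-ball momentum on a linear model looks like:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((256, 5))
w_true = rng.standard_normal(5)
y = X @ w_true  # noiseless linear regression target

def sgd_momentum(X, y, lr=0.05, beta=0.9, batch=32, epochs=200):
    """Mini-batch SGD with heavy-ball momentum on 0.5*||X w - y||^2 / n."""
    n, d = X.shape
    w, v = np.zeros(d), np.zeros(d)
    for _ in range(epochs):
        idx = rng.permutation(n)
        for s in range(0, n, batch):
            b = idx[s:s + batch]
            g = X[b].T @ (X[b] @ w - y[b]) / len(b)  # mini-batch gradient
            v = beta * v + g                          # momentum accumulation
            w = w - lr * v
    return w

w_hat = sgd_momentum(X, y)  # recovers w_true in this noiseless setting
```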
arXiv Detail & Related papers (2022-06-22T14:15:35Z)
- GeoMol: Torsional Geometric Generation of Molecular 3D Conformer Ensembles [60.12186997181117]
Prediction of a molecule's 3D conformer ensemble from the molecular graph holds a key role in areas of cheminformatics and drug discovery.
Existing generative models have several drawbacks including lack of modeling important molecular geometry elements.
We propose GeoMol, an end-to-end, non-autoregressive and SE(3)-invariant machine learning approach to generate 3D conformers.
arXiv Detail & Related papers (2021-06-08T14:17:59Z)
- Machine Learning and Variational Algorithms for Lattice Field Theory [1.198562319289569]
In lattice quantum field theory studies, parameters defining the lattice theory must be tuned toward criticality to access continuum physics.
We introduce an approach to "deform" Monte Carlo estimators based on contour deformations applied to the domain of the path integral.
We demonstrate that flow-based MCMC can mitigate critical slowing down and observifolds can exponentially reduce variance in proof-of-principle applications.
arXiv Detail & Related papers (2021-06-03T16:37:05Z)
- Machine-learning physics from unphysics: Finding deconfinement temperature in lattice Yang-Mills theories from outside the scaling window [0.0]
We apply machine learning techniques to the critical behavior of lattice gauge theory.
We find that a neural network, trained on lattice gauge field configurations at an unphysical value of the lattice parameters, builds up a gauge-invariant function.
arXiv Detail & Related papers (2020-09-23T07:21:40Z)
- Topological defects and confinement with machine learning: the case of monopoles in compact electrodynamics [0.0]
We train a neural network with a set of monopole configurations to distinguish between confinement and deconfinement phases.
We show that the model can determine the transition temperature with an accuracy that depends on the criteria implemented in the algorithm.
arXiv Detail & Related papers (2020-06-16T12:41:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all listed content) and is not responsible for any consequences of its use.