Exploring gauge-fixing conditions with gradient-based optimization
- URL: http://arxiv.org/abs/2410.03602v1
- Date: Fri, 4 Oct 2024 17:01:41 GMT
- Title: Exploring gauge-fixing conditions with gradient-based optimization
- Authors: William Detmold, Gurtej Kanwar, Yin Lin, Phiala E. Shanahan, Michael L. Wagman
- Abstract summary: This work introduces a differentiable parameterization of gauge fixing which is broad enough to cover Landau gauge, Coulomb gauge, and maximal tree gauges.
The adjoint state method allows gradient-based optimization to select gauge-fixing schemes that minimize an arbitrary target loss function.
- Score: 1.6186320401954881
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Lattice gauge fixing is required to compute gauge-variant quantities, for example those used in RI-MOM renormalization schemes or as objects of comparison for model calculations. Recently, gauge-variant quantities have also been found to be more amenable to signal-to-noise optimization using contour deformations. These applications motivate systematic parameterization and exploration of gauge-fixing schemes. This work introduces a differentiable parameterization of gauge fixing which is broad enough to cover Landau gauge, Coulomb gauge, and maximal tree gauges. The adjoint state method allows gradient-based optimization to select gauge-fixing schemes that minimize an arbitrary target loss function.
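As a concrete (and heavily simplified) illustration of the abstract above, the sketch below gauge-fixes a one-dimensional periodic U(1) lattice toward Landau gauge by gradient descent on the gauge functional. The lattice setup, function names, and hand-coded analytic gradient are illustrative assumptions standing in for the paper's differentiable parameterization and adjoint state method; this is not the authors' code.

```python
# Toy differentiable gauge fixing, assuming a 1D periodic U(1) lattice with
# link phases theta: a gauge transformation phi shifts the phases to
# theta + phi - roll(phi, -1), and Landau gauge minimizes
# F(phi) = -sum_x cos(theta_x + phi_x - phi_{x+1}).
import numpy as np

rng = np.random.default_rng(0)
L = 16
theta = rng.uniform(-np.pi, np.pi, size=L)      # random U(1) link phases

def transformed(phi):
    return theta + phi - np.roll(phi, -1)       # gauge-transformed phases

def functional(phi):
    return -np.sum(np.cos(transformed(phi)))    # Landau gauge functional

def grad(phi):
    # Analytic derivative, standing in for the paper's adjoint state method:
    # dF/dphi_x = sin(t_x) - sin(t_{x-1}) with t = transformed(phi).
    s = np.sin(transformed(phi))
    return s - np.roll(s, 1)

phi = np.zeros(L)
for _ in range(2000):
    phi -= 0.1 * grad(phi)                      # plain gradient descent

print("F before:", functional(np.zeros(L)), "F after:", functional(phi))
```

In the paper's setting the gauge condition itself is parameterized and the adjoint state method supplies gradients of a downstream loss through the gauge-fixing procedure; the hand-coded gradient above only illustrates the gradient-based fixing step.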
Related papers
- Model-Based Reparameterization Policy Gradient Methods: Theory and Practical Algorithms [88.74308282658133]
Reparameterization (RP) Policy Gradient Methods (PGMs) have been widely adopted for continuous control tasks in robotics and computer graphics.
Recent studies have revealed that, when applied to long-term reinforcement learning problems, model-based RP PGMs may experience chaotic and non-smooth optimization landscapes.
We propose a spectral normalization method to mitigate the exploding variance issue caused by long model unrolls.
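As a rough sketch of the mitigation named above, the snippet below applies spectral normalization to the weight matrix of a one-layer linear dynamics model: power iteration estimates the largest singular value, and dividing by it bounds the layer's Lipschitz constant so gradients of long unrolls cannot explode through it. The model and estimator details are illustrative assumptions, not the paper's setup.

```python
# Minimal spectral normalization via power iteration (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))            # weights of a one-layer dynamics model

def spectral_normalize(W, n_iter=50):
    # Power iteration estimates sigma_max(W); dividing by it makes the
    # linear map 1-Lipschitz in the spectral norm.
    u = rng.normal(size=W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u; v /= np.linalg.norm(v)
        u = W @ v;  u /= np.linalg.norm(u)
    sigma = u @ W @ v
    return W / sigma

W_sn = spectral_normalize(W)
print("sigma before:", np.linalg.svd(W, compute_uv=False)[0])
print("sigma after: ", np.linalg.svd(W_sn, compute_uv=False)[0])  # ~1.0
```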
arXiv Detail & Related papers (2023-10-30T18:43:21Z)
- Conditional Karhunen-Loève regression model with Basis Adaptation for high-dimensional problems: uncertainty quantification and inverse modeling [62.997667081978825]
We propose a methodology for improving the accuracy of surrogate models of the observable response of physical systems.
We apply the proposed methodology to constructing surrogate models of the stationary hydraulic head response via the Basis Adaptation (BA) method.
arXiv Detail & Related papers (2023-07-05T18:14:38Z)
- Support Vector Machine for Determining Euler Angles in an Inertial Navigation System [55.41644538483948]
The paper discusses improving the accuracy of an inertial navigation system built on MEMS sensors using machine learning (ML) methods.
The proposed ML-based algorithm has demonstrated its ability to classify correctly in the presence of noise typical of MEMS sensors.
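For intuition only, here is a minimal SVM classifier on synthetic, noisy three-axis features; the paper's actual features, classes, and sensor model are not specified in this summary, so everything below is an assumption.

```python
# Illustrative SVM classification under MEMS-like sensor noise.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 600
X_clean = np.vstack([rng.normal(-1.0, 0.3, size=(n // 2, 3)),
                     rng.normal(+1.0, 0.3, size=(n // 2, 3))])
y = np.array([0] * (n // 2) + [1] * (n // 2))
X = X_clean + rng.normal(0.0, 0.5, size=X_clean.shape)  # MEMS-like noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```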
arXiv Detail & Related papers (2022-12-07T10:01:11Z)
- Characterization of variational quantum algorithms using free fermions [0.0]
We numerically study the interplay between these symmetries and the locality of the target state.
We find that the number of iterations to converge to the solution scales linearly with system size.
arXiv Detail & Related papers (2022-06-13T18:11:16Z)
- Weight Vector Tuning and Asymptotic Analysis of Binary Linear Classifiers [82.5915112474988]
This paper proposes weight vector tuning of a generic binary linear classifier by parameterizing a decomposition of the discriminant with a scalar.
It is also found that weight vector tuning significantly improves the performance of Linear Discriminant Analysis (LDA) under high estimation noise.
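The exact decomposition is not given in this summary, so the sketch below substitutes one plausible scalar family: a ridge-style parameterization w(alpha) = (Sigma + alpha*I)^{-1}(mu1 - mu0) of the LDA weight vector, with alpha tuned on held-out data. It illustrates scalar weight-vector tuning under estimation noise, not the paper's specific construction.

```python
# Hedged sketch: tune a scalar alpha in a one-parameter family of LDA-like
# weight vectors; this parameterization is assumed, not the paper's.
import numpy as np

rng = np.random.default_rng(2)
d, n = 20, 30                                  # few samples per class: high noise
mu0, mu1 = np.zeros(d), 0.7 * np.ones(d)
X0 = rng.normal(size=(n, d)) + mu0
X1 = rng.normal(size=(n, d)) + mu1

Sigma = 0.5 * (np.cov(X0.T) + np.cov(X1.T))    # pooled covariance estimate
diff = X1.mean(axis=0) - X0.mean(axis=0)
mid = 0.5 * (X0.mean(axis=0) + X1.mean(axis=0))

def accuracy(alpha, Xv, yv):
    # w(alpha) = (Sigma + alpha*I)^{-1} (mu1_hat - mu0_hat)
    w = np.linalg.solve(Sigma + alpha * np.eye(d), diff)
    return np.mean(((Xv - mid) @ w > 0) == yv)

Xv = np.vstack([rng.normal(size=(500, d)) + mu0, rng.normal(size=(500, d)) + mu1])
yv = np.r_[np.zeros(500), np.ones(500)].astype(bool)
for alpha in [0.0, 0.1, 1.0, 10.0]:
    print(f"alpha={alpha:5.1f}  accuracy={accuracy(alpha, Xv, yv):.3f}")
```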
arXiv Detail & Related papers (2021-10-01T17:50:46Z)
- Parameter Inference with Bifurcation Diagrams [1.0312968200748118]
We propose a gradient-based approach for inferring the parameters of differential equations that produce a user-specified bifurcation diagram.
The cost function contains a supervised error term that is minimal when the model bifurcations match the specified targets and an unsupervised bifurcation measure.
We demonstrate parameter inference with minimal models which explore the space of saddle-node and pitchfork diagrams and the genetic toggle switch from synthetic biology.
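A toy version of this inference, assuming a saddle-node normal form dx/dt = a + b*x - x^2 whose bifurcation has a closed form: the saddle-node satisfies f = 0 and df/dx = 0, giving x* = b/2 and a* = -b^2/4. The supervised error term is then minimized by finite-difference gradient descent. Both the model and the cost are stand-ins, not the paper's implementation.

```python
# Gradient-based matching of a saddle-node bifurcation to a target (toy).
import numpy as np

x_target, a_target = 1.5, -2.25    # desired bifurcation location (x*, a*)

def bifurcation(b):
    return b / 2.0, -b**2 / 4.0    # closed-form (x*, a*) for f = a + b*x - x^2

def loss(b):
    x_star, a_star = bifurcation(b)
    return (x_star - x_target)**2 + (a_star - a_target)**2

b, lr, eps = 0.1, 0.05, 1e-6
for _ in range(2000):
    g = (loss(b + eps) - loss(b - eps)) / (2 * eps)  # finite-difference gradient
    b -= lr * g

print("fitted b =", b, "bifurcation:", bifurcation(b))  # expect b ~ 3
```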
arXiv Detail & Related papers (2021-06-08T10:39:19Z)
- Optimizing embedding-related quantum annealing parameters for reducing hardware bias [0.0]
We show that parameter optimization can be done for an entire class of problems, given each instance uses a previously chosen fixed embedding.
Specifically, in the training phase, we fix an embedding E of a complete graph onto the hardware of the annealer, and then run an optimization algorithm to tune a set of embedding-related parameter values.
In the testing phase, we estimate how well the parameters computed during the training phase work on a random selection of other graphs from that class.
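Schematically, the train/test protocol looks like the loop below, where `success_rate` is a hypothetical black box standing in for runs on the annealer with the fixed embedding E; no real annealer API or parameter set is reproduced here.

```python
# Schematic train/test tuning of one annealing parameter (hypothetical).
import numpy as np

def success_rate(chain_strength, instance_seed):
    # Hypothetical surrogate for hardware runs: a noisy objective peaked
    # near an unknown best chain strength (~1.7 in this toy).
    inst = np.random.default_rng(instance_seed)
    opt = 1.7 + 0.1 * inst.normal()
    return np.exp(-(chain_strength - opt) ** 2) + 0.05 * inst.normal()

# Training phase: tune the parameter on a batch of training instances.
candidates = np.linspace(0.5, 3.0, 26)
train_scores = [np.mean([success_rate(c, s) for s in range(20)])
                for c in candidates]
best = candidates[int(np.argmax(train_scores))]

# Testing phase: estimate how well the tuned parameter generalizes.
test_score = np.mean([success_rate(best, s) for s in range(100, 130)])
print("tuned chain strength:", best, "test success rate:", test_score)
```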
arXiv Detail & Related papers (2020-11-02T04:13:51Z)
- Variable selection for Gaussian process regression through a sparse projection [0.802904964931021]
This paper presents a new variable selection approach integrated with Gaussian process (GP) regression.
The choice of tuning parameters and the accuracy of the estimation are evaluated in simulations against some chosen benchmark approaches.
arXiv Detail & Related papers (2020-08-25T01:06:10Z)
- Channel-Directed Gradients for Optimization of Convolutional Neural Networks [50.34913837546743]
We introduce optimization methods for convolutional neural networks that can be used to improve existing gradient-based optimization in terms of generalization error.
We show that defining the gradients along the output channel direction leads to a performance boost, while other directions can be detrimental.
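As a minimal reading of "gradients along the output channel direction", the snippet below normalizes a convolutional weight gradient one output channel at a time; the paper's exact construction may differ, so treat this as an assumed illustration of the channel-wise decomposition only.

```python
# Per-output-channel normalization of a conv weight gradient (illustrative).
import numpy as np

rng = np.random.default_rng(4)
grad = rng.normal(size=(64, 32, 3, 3))   # (out_ch, in_ch, kH, kW) gradient

def channel_directed(grad, eps=1e-12):
    flat = grad.reshape(grad.shape[0], -1)               # one row per output channel
    norms = np.linalg.norm(flat, axis=1, keepdims=True)  # per-channel magnitude
    return (flat / (norms + eps)).reshape(grad.shape)

g = channel_directed(grad)
print(np.linalg.norm(g.reshape(64, -1), axis=1)[:4])     # each ~1.0
```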
arXiv Detail & Related papers (2020-08-25T00:44:09Z)
- Bayesian Sparse learning with preconditioned stochastic gradient MCMC and its applications [5.660384137948734]
We show that the proposed algorithm converges to the correct distribution with a controllable bias under mild conditions, as sketched below.
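For concreteness, here is a generic preconditioned stochastic gradient Langevin update on a Gaussian target, whose discretization bias is controlled by the step size; the paper's specific preconditioner and sparse-learning prior are not reproduced.

```python
# Preconditioned Langevin sampling of p(x) ∝ exp(-x^T A x / 2) (generic sketch).
import numpy as np

rng = np.random.default_rng(5)
d = 5
A = np.diag(np.linspace(0.5, 5.0, d))   # target precision matrix
P = np.linalg.inv(A)                    # a fixed preconditioner (illustrative)

def grad_log_p(x):
    return -A @ x

x, eps, samples = np.zeros(d), 0.05, []
for t in range(20000):
    noise = rng.multivariate_normal(np.zeros(d), eps * P)
    x = x + 0.5 * eps * P @ grad_log_p(x) + noise   # preconditioned Langevin step
    if t > 2000:                                    # discard burn-in
        samples.append(x.copy())

print("sample variances:", np.var(samples, axis=0))  # ≈ diag(A^{-1}), up to O(eps) bias
```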
arXiv Detail & Related papers (2020-06-29T20:57:20Z)
- Implicit differentiation of Lasso-type models for hyperparameter optimization [82.73138686390514]
We introduce an efficient implicit differentiation algorithm, without matrix inversion, tailored for Lasso-type problems.
Our approach scales to high-dimensional data by leveraging the sparsity of the solutions.
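The mechanism can be seen from the optimality conditions restricted to the active set A: there, X_A^T (X_A b_A - y) + lam * sign(b_A) = 0, so d b_A / d lam = -(X_A^T X_A)^{-1} sign(b_A). The sketch below checks this against finite differences, using a dense solve for clarity; the paper's contribution is computing such derivatives efficiently, without explicit matrix inversion, by exploiting sparsity.

```python
# Implicit differentiation of the Lasso solution w.r.t. its regularizer,
# restricted to the active set (dense solve for clarity only).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(6)
n, p = 100, 30
X = rng.normal(size=(n, p))
beta_true = np.zeros(p); beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.normal(size=n)

def fit(alpha):
    return Lasso(alpha=alpha, fit_intercept=False,
                 tol=1e-10, max_iter=50000).fit(X, y).coef_

alpha = 0.05
b = fit(alpha)
A = np.flatnonzero(b)                   # active set of the solution
XA = X[:, A]
# sklearn minimizes (1/2n)||y - Xb||^2 + alpha*||b||_1, hence the factor n:
db = -n * np.linalg.solve(XA.T @ XA, np.sign(b[A]))

# Finite-difference check of the Jacobian d b_A / d alpha:
fd = (fit(alpha + 1e-5)[A] - fit(alpha - 1e-5)[A]) / 2e-5
print("implicit matches finite differences:",
      np.allclose(db, fd, rtol=1e-2, atol=1e-3))
```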
arXiv Detail & Related papers (2020-02-20T18:43:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.