A machine learning pipeline for autonomous numerical analytic
continuation of Dyson-Schwinger equations
- URL: http://arxiv.org/abs/2112.13011v1
- Date: Fri, 24 Dec 2021 09:56:42 GMT
- Title: A machine learning pipeline for autonomous numerical analytic
continuation of Dyson-Schwinger equations
- Authors: Andreas Windisch, Thomas Gallien, Christopher Schwarzlmueller
- Abstract summary: Dyson-Schwinger equations (DSEs) are a non-perturbative way to express n-point functions in quantum field theory.
One has to deform the integration contour of the radial component in the complex plane of the loop momentum expressed in hyper-spherical coordinates.
- Since Dyson-Schwinger equations have to be solved in a self-consistent way, one cannot analyze the analytic properties of the integrand after every step.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Dyson-Schwinger equations (DSEs) are a non-perturbative way to express
n-point functions in quantum field theory. Working in Euclidean space and in
Landau gauge, for example, one can study the quark propagator Dyson-Schwinger
equation in the real and complex domain, given that a suitable and tractable
truncation has been found. When aiming to solve these equations in the
complex domain, that is, for complex external momenta, one has to deform the
integration contour of the radial component in the complex plane of the loop
momentum expressed in hyper-spherical coordinates. This has to be done in order
to avoid poles and branch cuts in the integrand of the self-energy loop. Since
the nature of Dyson-Schwinger equations is such that they have to be solved in
a self-consistent way, one cannot analyze the analytic properties of the
integrand after every iteration step, as this would not be feasible. In these
proceedings, we suggest a machine learning pipeline based on deep learning (DL)
approaches to computer vision (CV), as well as deep reinforcement learning
(DRL), that could solve this problem autonomously by detecting poles and branch
cuts in the numerical integrand after every iteration step and by suggesting
suitable integration contour deformations that avoid these obstructions. We
sketch out a proof of principle for both of these tasks, that is, the pole and
branch cut detection, as well as the contour deformation.
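The contour-deformation step described in the abstract can be illustrated with a minimal numerical sketch (a toy example, not the paper's pipeline): a propagator-like integrand with a single complex pole is integrated along both a straight radial contour and a deformed one that steers clear of the pole. The pole location, contour shapes, and point count are illustrative assumptions.

```python
import numpy as np

def integrate_along_contour(f, contour, n=2001):
    """Trapezoidal-rule integral of f(z) dz along a contour z(t), t in [0, 1]."""
    t = np.linspace(0.0, 1.0, n)
    z = contour(t)
    dz = np.gradient(z, t)  # numerical dz/dt along the contour
    g = f(z) * dz
    return np.sum(0.5 * (g[1:] + g[:-1]) * (t[1:] - t[:-1]))

# Toy propagator-like integrand with one pole just above the integration
# region (an illustrative choice, not taken from the paper).
pole = 1.0 + 0.2j
f = lambda z: 1.0 / (z - pole)

straight = lambda t: 2.0 * t + 0.0j                      # 0 -> 2 on the real axis
deformed = lambda t: 2.0 * t - 0.8j * np.sin(np.pi * t)  # bulges away from the pole

I_straight = integrate_along_contour(f, straight)
I_deformed = integrate_along_contour(f, deformed)

# Neither contour crosses the pole, so by Cauchy's theorem both agree
# with the analytic antiderivative evaluated at the endpoints.
exact = np.log(2.0 - pole) - np.log(-pole)
```

In the pipeline sketched above, the computer-vision detector would first locate poles and branch cuts in the numerical integrand after each iteration, and the DRL agent would then propose a deformation like `deformed` that avoids these obstructions.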
Related papers
- Neural Control Variates with Automatic Integration [49.91408797261987]
This paper proposes a novel approach to construct learnable parametric control variates functions from arbitrary neural network architectures.
We use the network to approximate the anti-derivative of the integrand.
We apply our method to solve partial differential equations using the Walk-on-sphere algorithm.
arXiv Detail & Related papers (2024-09-23T06:04:28Z)
- Exact dynamics of quantum dissipative $XX$ models: Wannier-Stark localization in the fragmented operator space [49.1574468325115]
We find an exceptional point at a critical dissipation strength that separates oscillating and non-oscillating decay.
We also describe a different type of dissipation that leads to a single decay mode in the whole operator subspace.
arXiv Detail & Related papers (2024-05-27T16:11:39Z)
- Weak Collocation Regression for Inferring Stochastic Dynamics with Lévy Noise [8.15076267771005]
We propose a weak form of the Fokker-Planck (FP) equation for extracting dynamics with Lévy noise.
Our approach can simultaneously distinguish mixed noise types, even in multi-dimensional problems.
arXiv Detail & Related papers (2024-03-13T06:54:38Z)
- Transolver: A Fast Transformer Solver for PDEs on General Geometries [66.82060415622871]
We present Transolver, which learns intrinsic physical states hidden behind discretized geometries.
By calculating attention to physics-aware tokens encoded from slices, Transolver can effectively capture intricate physical correlations.
Transolver achieves consistent state-of-the-art with 22% relative gain across six standard benchmarks and also excels in large-scale industrial simulations.
arXiv Detail & Related papers (2024-02-04T06:37:38Z)
- Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
- Gaussian Process regression over discrete probability measures: on the non-stationarity relation between Euclidean and Wasserstein Squared Exponential Kernels [0.19116784879310028]
A non-stationarity relationship between the Wasserstein-based squared exponential kernel and its Euclidean-based counterpart is studied.
A transformation of the input space is used to turn a Euclidean-based Gaussian Process model into a non-stationary, Wasserstein-based one.
arXiv Detail & Related papers (2022-12-02T17:09:52Z)
- Large-Scale Wasserstein Gradient Flows [84.73670288608025]
We introduce a scalable scheme to approximate Wasserstein gradient flows.
Our approach relies on input convex neural networks (ICNNs) to discretize the JKO steps.
As a result, we can sample from the measure at each step of the gradient diffusion and compute its density.
arXiv Detail & Related papers (2021-06-01T19:21:48Z)
- Exact imposition of boundary conditions with distance functions in physics-informed deep neural networks [0.5804039129951741]
We introduce geometry-aware trial functions in artificial neural networks to improve the training in deep learning for partial differential equations.
To exactly impose homogeneous Dirichlet boundary conditions, the trial function is taken as a distance function $\phi$ multiplied by the PINN approximation.
We present numerical solutions for linear and nonlinear boundary-value problems over domains with affine and curved boundaries.
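The exact boundary-condition imposition this entry describes can be sketched in a few lines. Below is a minimal illustration on the interval [0, 1], with a toy random-feature model standing in for the trained PINN; the model, seed, and domain are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=16)
b = rng.normal(size=16)
a = rng.normal(size=16)

def net(x):
    # toy random-feature stand-in for the PINN approximation
    return np.tanh(np.outer(x, W) + b) @ a

def phi(x):
    # distance-like function on [0, 1]: vanishes on the boundary {0, 1}
    return x * (1.0 - x)

def trial(x):
    # u = phi * net: homogeneous Dirichlet conditions u(0) = u(1) = 0
    # hold exactly by construction, whatever the network outputs
    return phi(x) * net(x)

x = np.array([0.0, 0.5, 1.0])
u = trial(x)  # boundary values are exactly zero, interior is unconstrained
```

Because the boundary condition is built into the trial function rather than penalized in the loss, training only has to fit the PDE residual in the interior.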
arXiv Detail & Related papers (2021-04-17T03:02:52Z)
- The Connection between Discrete- and Continuous-Time Descriptions of Gaussian Continuous Processes [60.35125735474386]
We show that discretizations yielding consistent estimators have the property of invariance under coarse-graining.
This result explains why combining differencing schemes for derivative reconstruction with local-in-time inference approaches does not work for time series analysis of second- or higher-order differential equations.
arXiv Detail & Related papers (2021-01-16T17:11:02Z)
- Deep reinforcement learning for complex evaluation of one-loop diagrams in quantum field theory [0.0]
We present a technique that allows for numerical analytic continuation of integrals encountered in one-loop diagrams in quantum field theory.
We train a reinforcement learning agent to perform the required contour deformations.
Our study shows great promise for an agent to be deployed in iterative numerical approaches used to compute non-perturbative 2-point functions.
arXiv Detail & Related papers (2019-12-27T19:45:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.