Learning of Statistical Field Theories
- URL: http://arxiv.org/abs/2511.09859v1
- Date: Fri, 14 Nov 2025 01:13:21 GMT
- Title: Learning of Statistical Field Theories
- Authors: Shreya Shukla, Abhijith Jayakumar, Andrey Y. Lokhov
- Abstract summary: We propose an approach for the inverse problem that uniformly accommodates systems with discrete, continuous, and hybrid variables. We show how iterating the procedure under coarse-graining reconstructs full non-perturbative renormalization-group flows. We also address a realistic setting where full gauge configurations may be unavailable, and reformulate learning algorithms for multiple field theories.
- Score: 3.160641766712591
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recovering microscopic couplings directly from data provides a route to solving the inverse problem in statistical field theories, one that complements the traditional (often computationally intractable) forward approach of predicting observables from an action or Hamiltonian. Here, we propose an approach for the inverse problem that uniformly accommodates systems with discrete, continuous, and hybrid variables. We demonstrate accurate parameter recovery in several benchmark systems (including Wegner's Ising gauge theory, $\phi^4$ theory, the Schwinger and sine-Gordon models, and mixed spin-gauge systems), and show how iterating the procedure under coarse-graining reconstructs full non-perturbative renormalization-group flows. This gives direct access to phase boundaries, fixed points, and emergent interactions without relying on perturbation theory. We also address a realistic setting where full gauge configurations may be unavailable, and reformulate learning algorithms for multiple field theories so that they are recovered directly using observables such as correlations from scattering data or quantum simulators. We anticipate that our methodology will find widespread use in practical learning of field theories in strongly coupled regimes where analytical tools might fail.
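To make the inverse direction concrete, here is a minimal sketch (not the authors' algorithm) of recovering pairwise couplings of a small Ising chain by pseudo-likelihood maximization from sampled configurations; the lattice size, coupling values, and optimizer settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ising(J, n_samples, n_sweeps=50):
    """Gibbs-sample spin configurations s in {-1,+1}^N under couplings J."""
    N = J.shape[0]
    S = np.empty((n_samples, N))
    s = rng.choice([-1.0, 1.0], size=N)
    for k in range(n_samples):
        for _ in range(n_sweeps):
            for i in range(N):
                h = J[i] @ s  # local field at site i (J[i, i] = 0)
                s[i] = 1.0 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * h)) else -1.0
        S[k] = s
    return S

def fit_pseudolikelihood(S, n_iter=1500, lr=0.1):
    """Recover symmetric couplings J by gradient ascent on the log-pseudo-likelihood."""
    n, N = S.shape
    J = np.zeros((N, N))
    for _ in range(n_iter):
        H = S @ J.T                                 # H[k, i] = sum_j J[i, j] * s[k, j]
        R = 2.0 * S / (1.0 + np.exp(2.0 * S * H))   # d log sigma(2 s_i H_i) / d H_i
        G = R.T @ S / n                             # gradient with respect to J[i, j]
        np.fill_diagonal(G, 0.0)
        J += lr * 0.5 * (G + G.T)                   # symmetrized ascent step
    return J

# Ground truth: nearest-neighbour chain with coupling 0.3 (an illustrative choice)
N = 8
J_true = np.zeros((N, N))
for i in range(N - 1):
    J_true[i, i + 1] = J_true[i + 1, i] = 0.3
J_hat = fit_pseudolikelihood(sample_ising(J_true, n_samples=2000))
print("max coupling error:", np.abs(J_hat - J_true).max())  # should be small
```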
Related papers
- Riemannian AmbientFlow: Towards Simultaneous Manifold Learning and Generative Modeling from Corrupted Data [4.681760167323748]
We introduce a framework for learning a probabilistic generative model and the underlying, nonlinear data manifold directly from corrupted observations. We establish theoretical guarantees showing that, under appropriate geometric regularization and measurement conditions, the learned model recovers the underlying data distribution up to a controllable error and yields a smooth, bi-Lipschitz manifold parametrization.
arXiv Detail & Related papers (2026-01-26T17:51:52Z) - Spatially-informed transformers: Injecting geostatistical covariance biases into self-attention for spatio-temporal forecasting [0.0]
We propose a hybrid architecture that injects a geostatistical inductive bias directly into the self-attention mechanism via a learnable covariance kernel. We demonstrate the phenomenon of "Deep Variography", where the network successfully recovers the true spatial parameters of the underlying process end-to-end via backpropagation.
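One plausible reading of that mechanism, sketched below under our own assumptions (the paper's exact parametrization may differ), is to add a learnable exponential-covariance term over pairwise distances to the attention logits:

```python
import numpy as np

def spatial_attention(X, W_q, W_k, W_v, coords, length_scale, sill):
    """Self-attention with an additive geostatistical bias on the logits.

    X:      (n, d) token features, one token per spatial location
    coords: (n, 2) spatial coordinates of the tokens
    length_scale, sill: learnable kernel parameters (scalars here)
    """
    d_k = W_q.shape[1]
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    logits = Q @ K.T / np.sqrt(d_k)
    # Pairwise distances -> exponential covariance bias added to the logits
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    logits = logits + sill * np.exp(-dist / length_scale)
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V

# Toy usage with random features on a 4x4 grid of locations
rng = np.random.default_rng(1)
n, d = 16, 8
X = rng.normal(size=(n, d))
coords = np.array([[i, j] for i in range(4) for j in range(4)], float)
W = [rng.normal(size=(d, d)) * 0.1 for _ in range(3)]
out = spatial_attention(X, *W, coords, length_scale=2.0, sill=1.0)
print(out.shape)  # (16, 8)
```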
arXiv Detail & Related papers (2025-12-19T15:32:24Z) - VIKING: Deep variational inference with stochastic projections [48.946143517489496]
Variational mean field approximations tend to struggle with contemporary overparametrized deep neural networks. We propose a simple variational family that considers two independent linear subspaces of the parameter space. This allows us to build a fully-correlated approximate posterior reflecting the overparametrization.
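A guess at the construction (the subspaces, dimensions, and covariance parametrization here are placeholders, not the paper's): sample weights as a pretrained point plus a full-covariance Gaussian over coordinates in two low-dimensional subspaces, so correlations cost O(k^2) rather than O(D^2).

```python
import numpy as np

rng = np.random.default_rng(4)
D, k = 10_000, 8  # parameter count, subspace dimension per block

theta0 = rng.normal(size=D) * 0.01            # pretrained weights (flattened)
U = np.linalg.qr(rng.normal(size=(D, k)))[0]  # first linear subspace
V = np.linalg.qr(rng.normal(size=(D, k)))[0]  # second linear subspace

# Full-covariance Gaussian over the 2k subspace coordinates: correlations
# between all retained directions cost O(k^2), not O(D^2).
L = np.tril(rng.normal(size=(2 * k, 2 * k)) * 0.1)  # Cholesky factor
z = L @ rng.normal(size=2 * k)
theta_sample = theta0 + U @ z[:k] + V @ z[k:]
print(theta_sample.shape)  # (10000,)
```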
arXiv Detail & Related papers (2025-10-27T15:38:35Z) - Minimum-Excess-Work Guidance [17.15668604906196]
We propose a regularization framework for guiding pre-trained probability flow generative models. Our approach enables efficient guidance in sparse-data regimes common to scientific applications. We demonstrate the framework's versatility on a coarse-grained protein model.
arXiv Detail & Related papers (2025-05-19T17:19:43Z) - Overcoming Dimensional Factorization Limits in Discrete Diffusion Models through Quantum Joint Distribution Learning [79.65014491424151]
We propose a quantum Discrete Denoising Diffusion Probabilistic Model (QD3PM). It enables joint probability learning through diffusion and denoising in exponentially large Hilbert spaces. This paper establishes a new theoretical paradigm in generative models by leveraging the quantum advantage in joint distribution learning.
arXiv Detail & Related papers (2025-05-08T11:48:21Z) - Applications of flow models to the generation of correlated lattice QCD ensembles [69.18453821764075]
Machine-learned normalizing flows can be used in the context of lattice quantum field theory to generate statistically correlated ensembles of lattice gauge fields at different action parameters.
This work demonstrates how these correlations can be exploited for variance reduction in the computation of observables.
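A schematic numerical illustration of why correlation helps (synthetic numbers, not lattice data): when two ensembles share fluctuations, the difference of an observable between them is far less noisy than the difference of two independent estimates.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

# Common latent fluctuations stand in for the correlation a flow induces
# between ensembles generated at two nearby action parameters.
common = rng.normal(size=n)
obs_a = 1.00 + 0.50 * common + 0.05 * rng.normal(size=n)
obs_b = 1.10 + 0.50 * common + 0.05 * rng.normal(size=n)

# Estimating <O_b - O_a> from correlated pairs beats subtracting two
# independent estimates: the common fluctuations cancel in the difference.
print("correlated  std of mean:", (obs_b - obs_a).std() / np.sqrt(n))
print("independent std of mean:",
      np.sqrt(obs_a.var() + obs_b.var()) / np.sqrt(n))
```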
arXiv Detail & Related papers (2024-01-19T18:33:52Z) - Understanding Pathologies of Deep Heteroskedastic Regression [25.509884677111344]
Heteroskedastic models predict both mean and residual noise for each data point.
At one extreme, these models fit all training data perfectly, eliminating residual noise entirely.
At the other, they overfit the residual noise while predicting a constant, uninformative mean.
We observe a lack of middle ground, suggesting a phase transition dependent on model regularization strength.
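Both extremes fall out of the standard heteroskedastic Gaussian negative log-likelihood, the generic objective such models minimize (the numbers below are invented to show the two degenerate fits):

```python
import numpy as np

def heteroskedastic_nll(y, mean, log_var):
    """Gaussian NLL where the model predicts both a mean and a per-point variance."""
    return 0.5 * np.mean(log_var + (y - mean) ** 2 / np.exp(log_var))

y = np.array([0.9, 1.1, 2.0])
# Extreme 1: mean matches the data exactly, predicted noise collapses to zero
print(heteroskedastic_nll(y, y, np.full(3, -10.0)))  # about -5.0
# Extreme 2: constant, uninformative mean; the noise term absorbs all residuals
print(heteroskedastic_nll(y, np.full(3, y.mean()),
                          np.log(np.full(3, y.var()))))  # about -0.24
```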
arXiv Detail & Related papers (2023-06-29T06:31:27Z) - End-To-End Latent Variational Diffusion Models for Inverse Problems in
High Energy Physics [61.44793171735013]
We introduce a novel unified architecture, termed latent variational diffusion models, which combines the latent learning of cutting-edge generative approaches with an end-to-end variational framework.
Our unified approach achieves a distribution-free distance to the truth over 20 times smaller than that of the non-latent state-of-the-art baseline.
arXiv Detail & Related papers (2023-05-17T17:43:10Z) - Distributed Bayesian Learning of Dynamic States [65.7870637855531]
The proposed algorithm solves a distributed Bayesian filtering task for finite-state hidden Markov models.
It can be used for sequential state estimation, as well as for modeling opinion formation over social networks under dynamic environments.
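The single-agent building block of such a filter is the standard predict-correct recursion below (the distributed fusion across agents is omitted, and the two-state chain is invented for illustration):

```python
import numpy as np

def hmm_filter_step(belief, A, likelihood):
    """One Bayesian filtering step for a finite-state hidden Markov model.

    belief:     current posterior over states, shape (S,)
    A:          transition matrix, A[i, j] = P(next=j | current=i)
    likelihood: P(observation | state), shape (S,)
    """
    predicted = belief @ A              # predict through the dynamics
    posterior = predicted * likelihood  # correct with the observation
    return posterior / posterior.sum()

# Two-state toy chain with noisy binary observations
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],   # B[state, obs] = P(obs | state)
              [0.3, 0.7]])
belief = np.array([0.5, 0.5])
for obs in [0, 0, 1, 1, 1]:
    belief = hmm_filter_step(belief, A, B[:, obs])
print(belief)
```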
arXiv Detail & Related papers (2022-12-05T19:40:17Z) - Inferring Manifolds From Noisy Data Using Gaussian Processes [17.166283428199634]
Most existing manifold learning algorithms replace the original data with lower dimensional coordinates.
This article proposes a new methodology for addressing these problems, allowing interpolation of the estimated manifold between fitted data points.
arXiv Detail & Related papers (2021-10-14T15:50:38Z) - Machine Learning and Variational Algorithms for Lattice Field Theory [1.198562319289569]
In lattice quantum field theory studies, parameters defining the lattice theory must be tuned toward criticality to access continuum physics.
We introduce an approach to "deform" Monte Carlo estimators based on contour deformations applied to the domain of the path integral.
We demonstrate that flow-based MCMC can mitigate critical slowing down and observifolds can exponentially reduce variance in proof-of-principle applications.
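A one-dimensional caricature of the contour-deformation idea (a Gaussian toy integral, not a path integral): shifting the integration contour of an oscillatory observable can remove its phase fluctuations entirely.

```python
import numpy as np

rng = np.random.default_rng(5)
lam = 2.0
x = rng.normal(size=50_000)  # samples from exp(-x^2 / 2)

# Naive estimator of <exp(i*lam*x)> = exp(-lam^2/2): a noisy phase average
naive = np.exp(1j * lam * x)

# Deform the contour x -> x + i*lam: observable times the reweighting factor
# exp(-(x+i*lam)^2/2 + x^2/2); the product is exactly constant, so variance
# drops to zero for this toy integral.
deformed = np.exp(1j * lam * (x + 1j * lam)) * np.exp(-1j * lam * x + lam**2 / 2)

print("exact            :", np.exp(-lam**2 / 2))
print("naive    mean/std:", naive.mean().real, naive.std())
print("deformed mean/std:", deformed.mean().real, deformed.std())
```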
arXiv Detail & Related papers (2021-06-03T16:37:05Z) - MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood
Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
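The ratio trick itself fits in a few lines; below is a toy version under assumptions of ours (a Gaussian simulator and a hand-rolled logistic regression on quadratic features), where the classifier's logit on a (theta, x) pair approximates the log likelihood-to-evidence ratio:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 20_000
theta = rng.normal(size=n)            # prior: theta ~ N(0, 1)
x_joint = theta + rng.normal(size=n)  # simulator: x | theta ~ N(theta, 1)
x_marg = rng.permutation(x_joint)     # break the pairing -> marginal samples

def features(t, x):
    """Quadratic features; the optimal logit is quadratic for this toy model."""
    return np.stack([np.ones_like(t), t, x, t * x, t**2, x**2], axis=1)

# Logistic regression: class 1 = joint pairs, class 0 = marginal pairs
F = np.vstack([features(theta, x_joint), features(theta, x_marg)])
y = np.concatenate([np.ones(n), np.zeros(n)])
w = np.zeros(F.shape[1])
for _ in range(2000):                 # plain full-batch gradient descent
    p = 1.0 / (1.0 + np.exp(-F @ w))
    w -= 0.2 * F.T @ (p - y) / len(y)

# The trained logit estimates log r(x | theta) = log p(x|theta) - log p(x)
t0, x0 = 1.0, 1.2
estimate = (features(np.array([t0]), np.array([x0])) @ w)[0]
exact = t0 * x0 - t0**2 / 2 - x0**2 / 4 + 0.5 * np.log(2.0)
print("estimated log-ratio:", estimate, " exact:", exact)
```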
arXiv Detail & Related papers (2021-06-03T12:59:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.