The GINN framework: a stochastic QED correspondence for stability and chaos in deep neural networks
- URL: http://arxiv.org/abs/2508.18948v1
- Date: Tue, 26 Aug 2025 11:41:11 GMT
- Title: The GINN framework: a stochastic QED correspondence for stability and chaos in deep neural networks
- Authors: Rodrigo Carmo Terin
- Abstract summary: We develop a Euclidean field-theoretic approach that maps deep neural networks (DNNs) to quantum electrodynamics (QED). Neural activations and weights are represented by fermionic matter and gauge fields. We validate the theoretical predictions through numerical simulations of standard multilayer perceptrons.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop a Euclidean stochastic field-theoretic approach that maps deep neural networks (DNNs) to quantum electrodynamics (QED) with local U(1) symmetry. Neural activations and weights are represented by fermionic matter and gauge fields, with a fictitious Langevin time enabling covariant gauge fixing. This mapping identifies the gauge parameter with kernel design choices in wide DNNs, relating stability thresholds to gauge-dependent amplification factors. Finite-width fluctuations correspond to loop corrections in QED. As a proof of concept, we validate the theoretical predictions through numerical simulations of standard multilayer perceptrons and, in parallel, propose a gauge-invariant neural network (GINN) implementation using a magnitude-phase parameterization of the weights. Finally, a double-copy replica approach is shown to unify the computation of the largest Lyapunov exponent in stochastic QED and wide DNNs.
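As a rough illustration of the magnitude-phase idea in the abstract, the sketch below parameterizes a dense layer's complex weights as rho * exp(i * phi) and checks that a modulus readout is unchanged under a global phase shift. The layer shape, readout, and test are illustrative assumptions, not the authors' GINN implementation; in particular, the paper's local U(1) symmetry would require per-neuron phase rotations, which this toy does not implement.

```python
import numpy as np

rng = np.random.default_rng(0)

def magnitude_phase_layer(x, rho, phi):
    """Dense layer with complex weights W = rho * exp(i * phi).

    A global U(1) shift phi -> phi + alpha multiplies the
    pre-activation by exp(i * alpha), so its modulus is invariant.
    """
    z = x @ (rho * np.exp(1j * phi))   # complex pre-activation
    return np.abs(z)                   # phase-invariant readout

d_in, d_out = 8, 4
x = rng.normal(size=(1, d_in))
rho = rng.uniform(0.5, 1.5, size=(d_in, d_out))        # magnitudes
phi = rng.uniform(0.0, 2 * np.pi, size=(d_in, d_out))  # phases

out = magnitude_phase_layer(x, rho, phi)
out_shifted = magnitude_phase_layer(x, rho, phi + 0.7)  # global shift
assert np.allclose(out, out_shifted)  # invariance check passes
```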
Related papers
- Graph neural network force fields for adiabatic dynamics of lattice Hamiltonians [0.0]
We develop a graph neural network (GNN)-based force-field framework for the adiabatic dynamics of lattice Hamiltonians. Trained on exact-diagonalization data, the GNN achieves high force accuracy, strict linear scaling with system size, and transferability to large lattices. These results establish GNNs as an elegant and efficient architecture for symmetry-aware, large-scale dynamical simulations of correlated lattice systems.
arXiv Detail & Related papers (2026-03-02T16:23:25Z) - Performance Guarantees for Quantum Neural Estimation of Entropies [31.955071410400947]
Quantum neural estimators (QNEs) combine classical neural networks with parametrized quantum circuits. We study formal guarantees for QNEs of measured relative entropies in the form of non-asymptotic error risk bounds. Our theory aims to facilitate principled implementation of QNEs for measured relative entropies.
arXiv Detail & Related papers (2025-11-24T16:36:06Z) - Schrodinger Neural Network and Uncertainty Quantification: Quantum Machine [0.0]
We introduce the Schrodinger Neural Network (SNN), a principled architecture for conditional density estimation and uncertainty quantification. The SNN maps each input to a normalized wave function on the output domain and computes predictive probabilities via the Born rule.
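The Born-rule readout described above is easy to make concrete: square the modulus of the amplitudes and normalize on a discretized output grid. The amplitude below is a placeholder for whatever the network would emit; grid, input, and wave function are all assumptions for illustration.

```python
import numpy as np

def born_rule_density(psi, dy):
    """Turn complex amplitudes on a grid into a normalized density."""
    p = np.abs(psi) ** 2          # Born rule: probability ~ |psi|^2
    return p / (p.sum() * dy)     # normalize so sum(p) * dy = 1

# Stand-in for the network: any map from input x to amplitudes psi(y; x).
y = np.linspace(-3.0, 3.0, 601)
dy = y[1] - y[0]
x = 0.5
psi = np.exp(-0.5 * (y - x) ** 2) * np.exp(1j * np.sin(y))  # placeholder

p = born_rule_density(psi, dy)
predictive_mean = (y * p * dy).sum()  # moment of the conditional density
print(round(predictive_mean, 3))      # ~0.5 for this placeholder
```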
arXiv Detail & Related papers (2025-10-27T15:52:47Z) - Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural-network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way towards practical use of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z) - Machine learning in and out of equilibrium [58.88325379746631]
Our study uses a Fokker-Planck approach, adapted from statistical physics, to explore these parallels.
We focus in particular on the stationary state of the system in the long-time limit, which in conventional SGD is out of equilibrium.
We propose a new variation of stochastic gradient Langevin dynamics (SGLD) that harnesses without-replacement minibatching.
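A generic SGLD loop with without-replacement minibatching reshuffles the data once per epoch and visits each sample exactly once, as sketched below on a toy least-squares problem; the step size, noise temperature, and loss are illustrative, and the paper's specific variant may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy least-squares problem: L(theta) = mean_i (x_i . theta - y_i)^2
X = rng.normal(size=(256, 5))
theta_true = rng.normal(size=5)
Y = X @ theta_true + 0.1 * rng.normal(size=256)

def minibatch_grad(theta, idx):
    """Minibatch gradient of the mean-squared error."""
    r = X[idx] @ theta - Y[idx]
    return 2.0 * X[idx].T @ r / len(idx)

theta = np.zeros(5)
eta, temperature, batch = 1e-2, 1e-4, 32
for epoch in range(50):
    order = rng.permutation(len(X))        # without replacement:
    for start in range(0, len(X), batch):  # every sample once per epoch
        idx = order[start:start + batch]
        noise = np.sqrt(2.0 * eta * temperature) * rng.normal(size=5)
        theta = theta - eta * minibatch_grad(theta, idx) + noise

print(np.mean((X @ theta - Y) ** 2))  # near the 0.01 noise floor
```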
arXiv Detail & Related papers (2023-06-06T09:12:49Z) - Deep Quantum Neural Networks are Gaussian Process [0.0]
We present a framework to examine the impact of finite width on the closed-form relationship, using a $1/d$ expansion.
We elucidate the relationship between the GP and its parameter-space equivalent, characterized by the Quantum Neural Tangent Kernel (QNTK).
arXiv Detail & Related papers (2023-05-22T03:07:43Z) - Information Bottleneck Analysis of Deep Neural Networks via Lossy Compression [37.69303106863453]
The Information Bottleneck (IB) principle offers an information-theoretic framework for analyzing the training process of deep neural networks (DNNs).
In this paper, we introduce a framework for conducting IB analysis of general NNs.
We also perform IB analysis on a close-to-real-scale network, which reveals new features of the mutual information (MI) dynamics.
arXiv Detail & Related papers (2023-05-13T21:44:32Z) - Message-Passing Neural Quantum States for the Homogeneous Electron Gas [41.94295877935867]
We introduce a message-passing-neural-network-based wave function Ansatz to simulate extended, strongly interacting fermions in continuous space.
We demonstrate its accuracy by simulating the ground state of the homogeneous electron gas in three spatial dimensions.
arXiv Detail & Related papers (2023-05-12T04:12:04Z) - Analyzing Convergence in Quantum Neural Networks: Deviations from Neural Tangent Kernels [20.53302002578558]
A quantum neural network (QNN) is a parameterized mapping efficiently implementable on near-term Noisy Intermediate-Scale Quantum (NISQ) computers.
Despite the existing empirical and theoretical investigations, the convergence of QNN training is not fully understood.
arXiv Detail & Related papers (2023-03-26T22:58:06Z) - Towards Neural Variational Monte Carlo That Scales Linearly with System Size [67.09349921751341]
Quantum many-body problems are central to demystifying some exotic quantum phenomena, e.g., high-temperature superconductivity.
The combination of neural networks (NNs) for representing quantum states with the variational Monte Carlo (VMC) algorithm has been shown to be a promising method for solving such problems.
We propose an NN architecture called Vector-Quantized Neural Quantum States (VQ-NQS) that utilizes vector-quantization techniques to leverage redundancies in the local-energy calculations of the VMC algorithm.
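The blurb does not spell out how VQ-NQS reuses local-energy terms, but the core vector-quantization trick can be sketched: map each local configuration to its nearest codebook entry and cache one evaluation per code. Everything below (codebook size, the stand-in computation) is a hypothetical illustration of that caching idea only.

```python
import numpy as np

rng = np.random.default_rng(2)

codebook = rng.normal(size=(16, 4))  # 16 code vectors of dimension 4

def quantize(v):
    """Index of the nearest codebook vector in L2 distance."""
    return int(np.argmin(np.linalg.norm(codebook - v, axis=1)))

cache = {}

def local_term(v):
    """Cache an expensive per-configuration term by code index, so
    configurations that quantize identically share one evaluation."""
    k = quantize(v)
    if k not in cache:
        cache[k] = np.tanh(codebook[k]).sum()  # stand-in computation
    return cache[k]

configs = rng.normal(size=(1000, 4))
values = [local_term(v) for v in configs]
print(len(cache), "evaluations for", len(configs), "configurations")
```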
arXiv Detail & Related papers (2022-12-21T19:00:04Z) - Interrelation of equivariant Gaussian processes and convolutional neural networks [77.34726150561087]
Currently there exists a rather promising new trend in machine learning (ML) based on the relationship between neural networks (NNs) and Gaussian processes (GPs).
In this work we establish a relationship between the many-channel limit for CNNs equivariant with respect to the two-dimensional Euclidean group, with vector-valued neuron activations, and the corresponding independently introduced equivariant Gaussian processes (GPs).
arXiv Detail & Related papers (2022-09-17T17:02:35Z) - Symmetric Pruning in Quantum Neural Networks [111.438286016951]
Quantum neural networks (QNNs) harness the power of modern quantum machines.
QNNs with handcrafted symmetric ansatzes generally experience better trainability than those with asymmetric ansatzes.
We propose the effective quantum neural tangent kernel (EQNTK) to quantify the convergence of QNNs towards the global optima.
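For orientation, the classical neural tangent kernel that the EQNTK name points back to is the parameter-space gradient inner product below; how the "effective" version adapts it to symmetric quantum ansatzes is specified in the paper, not here.

```latex
% Standard neural tangent kernel of a model f_theta.
K_{\mathrm{NTK}}(x, x') \;=\;
  \big\langle \nabla_{\theta} f_{\theta}(x),\,
              \nabla_{\theta} f_{\theta}(x') \big\rangle
```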
arXiv Detail & Related papers (2022-05-05T16:20:37Z) - REMuS-GNN: A Rotation-Equivariant Model for Simulating Continuum Dynamics [0.0]
We introduce REMuS-GNN, a rotation-equivariant multi-scale model for simulating continuum dynamical systems.
We demonstrate and evaluate this method on the incompressible flow around elliptical cylinders.
arXiv Detail & Related papers (2022-05-05T16:20:37Z) - The edge of chaos: quantum field theory and deep neural networks [0.0]
We explicitly construct the quantum field theory corresponding to a general class of deep neural networks.
We compute the loop corrections to the correlation function in a perturbative expansion in the ratio of depth $T$ to width $N$.
Our analysis provides a first-principles approach to the rapidly emerging NN-QFT correspondence, and opens several interesting avenues to the study of criticality in deep neural networks.
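Schematically, the expansion referred to above organizes the output two-point function in powers of the depth-to-width ratio; the notation below is generic, with the concrete kernels defined in the paper.

```latex
% Free (infinite-width) piece plus the leading loop correction.
G(x_1, x_2) \;=\; G^{(0)}(x_1, x_2)
  \;+\; \frac{T}{N}\, G^{(1)}(x_1, x_2)
  \;+\; \mathcal{O}\!\big((T/N)^2\big)
```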
arXiv Detail & Related papers (2021-09-27T18:00:00Z) - Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs, with provable convergence and without the need for sample splitting.
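A min-max formulation of this kind can be optimized by alternating gradient steps on the two players. The toy below replaces both neural networks with linear players and the operator equation with a linear system, so it only illustrates the game structure, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(3)

# Placeholder saddle problem: min_u max_v  v.(A u - b) - 0.5 |v|^2.
# At the saddle point the moment condition A u = b holds exactly.
A = np.eye(6) + 0.1 * rng.normal(size=(6, 6))
b = rng.normal(size=6)

u = np.zeros(6)  # solver player (stands in for the model network)
v = np.zeros(6)  # critic player (stands in for the adversary network)
eta = 0.1
for _ in range(3000):
    residual = A @ u - b
    v += eta * (residual - v)  # gradient ascent on the critic
    u -= eta * (A.T @ v)       # gradient descent on the solver

print(np.linalg.norm(A @ u - b))  # ~0: the operator equation is met
```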
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - Efficient Uncertainty Quantification for Dynamic Subsurface Flow with Surrogate by Theory-guided Neural Network [0.0]
We propose a methodology for efficient uncertainty quantification for dynamic subsurface flow, with a surrogate constructed by the Theory-guided Neural Network (TgNN).
Parameters, time, and location comprise the input of the neural network, while the quantity of interest is the output.
The trained neural network can predict solutions of subsurface flow problems with new parameters.
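The surrogate's interface, as described, is just a map from (parameters, time, location) to the quantity of interest; the minimal forward pass below shows that interface with a small MLP. The sizes and activations are assumptions, and the theory-guided loss terms that distinguish TgNN from a plain network are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

def init_mlp(sizes):
    """Random weights and biases for a fully connected network."""
    return [(rng.normal(size=(m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """tanh hidden layers, linear output layer."""
    h = x
    for W, b in params[:-1]:
        h = np.tanh(h @ W + b)
    W, b = params[-1]
    return h @ W + b

params = init_mlp([3, 32, 32, 1])  # inputs: parameter, time, location
batch = rng.uniform(size=(16, 3))  # stand-in collocation points
u = forward(params, batch)         # predicted quantity of interest
print(u.shape)                     # (16, 1)
```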
arXiv Detail & Related papers (2020-04-25T12:41:57Z)