Learning S-Matrix Phases with Neural Operators
- URL: http://arxiv.org/abs/2404.14551v2
- Date: Fri, 30 Aug 2024 18:14:16 GMT
- Title: Learning S-Matrix Phases with Neural Operators
- Authors: V. Niarchos, C. Papageorgakis
- Abstract summary: We study the relation between the modulus and phase of amplitudes in $2\to 2$ elastic scattering at fixed energies.
We do not employ the integral relation imposed by unitarity, but instead train FNOs to discover it from many samples of amplitudes with finite partial wave expansions.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We use Fourier Neural Operators (FNOs) to study the relation between the modulus and phase of amplitudes in $2\to 2$ elastic scattering at fixed energies. Unlike previous approaches, we do not employ the integral relation imposed by unitarity, but instead train FNOs to discover it from many samples of amplitudes with finite partial wave expansions. When trained only on true samples, the FNO correctly predicts (unique or ambiguous) phases of amplitudes with infinite partial wave expansions. When also trained on false samples, it can rate the quality of its prediction by producing a true/false classifying index. We observe that the value of this index is strongly correlated with the violation of the unitarity constraint for the predicted phase, and present examples where it delineates the boundary between allowed and disallowed profiles of the modulus. Our application of FNOs is unconventional: it involves a simultaneous regression-classification task and emphasizes the role of statistics in ensembles of NOs. We comment on the merits and limitations of the approach and its potential as a new methodology in Theoretical Physics.
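The setup described in the abstract can be made concrete with a short sketch. The snippet below is an illustration only, not the authors' code: the sample count, truncation order `L_MAX`, grid size, and the dropped overall kinematic normalization are arbitrary assumptions. It draws random real phase shifts $\delta_\ell$, builds amplitudes with finite partial wave expansions $f(x) = \sum_{\ell=0}^{L} (2\ell+1)\, e^{i\delta_\ell}\sin\delta_\ell\, P_\ell(x)$, $x=\cos\theta$, which satisfy elastic partial-wave unitarity by construction, and records the modulus/phase pairs on a grid that a neural operator could be trained on.
```python
# Minimal sketch (not the authors' pipeline): toy 2->2 elastic amplitudes with finite
# partial-wave expansions, f(x) = sum_{l=0}^{L} (2l+1) a_l P_l(x), x = cos(theta),
# where elastic unitarity fixes a_l = exp(i*delta_l) * sin(delta_l) for real delta_l
# (overall kinematic normalization dropped). A neural operator would then be trained
# to map the modulus |f(x)| to the phase arg f(x).
import numpy as np
from scipy.special import eval_legendre

rng = np.random.default_rng(0)
L_MAX, N_GRID, N_SAMPLES = 6, 128, 1000   # arbitrary illustrative choices
x = np.linspace(-1.0, 1.0, N_GRID)
legendre = np.array([eval_legendre(l, x) for l in range(L_MAX + 1)])  # (L_MAX+1, N_GRID)

def sample_amplitude() -> np.ndarray:
    """One random amplitude with a finite, elastically unitary partial-wave expansion."""
    delta = rng.uniform(-0.5 * np.pi, 0.5 * np.pi, size=L_MAX + 1)  # real phase shifts
    a_l = np.exp(1j * delta) * np.sin(delta)                        # unitary partial waves
    return ((2 * np.arange(L_MAX + 1) + 1) * a_l) @ legendre        # f(x) on the grid

amps = np.array([sample_amplitude() for _ in range(N_SAMPLES)])
moduli = np.abs(amps)                            # network inputs
phases = np.unwrap(np.angle(amps), axis=-1)      # regression targets
print(moduli.shape, phases.shape)                # (1000, 128) (1000, 128)
```
For the operator itself, the sketch below shows the generic 1D Fourier layer that FNOs are built from (FFT, truncation to low modes, learned complex multipliers, inverse FFT). This is the standard construction of Li et al., not the specific regression-classification architecture used in the paper.
```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """One Fourier layer of a 1D FNO: keep the lowest `modes` Fourier modes,
    multiply them by learned complex weights, and transform back to real space.
    Assumes modes <= n // 2 + 1 for input length n."""
    def __init__(self, in_channels: int, out_channels: int, modes: int):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (in_channels * out_channels)
        self.weight = nn.Parameter(
            scale * torch.randn(in_channels, out_channels, modes, dtype=torch.cfloat)
        )

    def forward(self, u: torch.Tensor) -> torch.Tensor:    # u: (batch, in_channels, n)
        u_hat = torch.fft.rfft(u, dim=-1)                   # (batch, in_channels, n//2+1)
        out_hat = torch.zeros(u.size(0), self.weight.size(1), u_hat.size(-1),
                              dtype=torch.cfloat, device=u.device)
        out_hat[..., :self.modes] = torch.einsum(
            "bim,iom->bom", u_hat[..., :self.modes], self.weight
        )
        return torch.fft.irfft(out_hat, n=u.size(-1), dim=-1)
```
A full FNO stacks several such layers with pointwise lifts, projections, and nonlinearities. How the paper attaches the true/false classifying index to this backbone, and how the ensembles of NOs are aggregated, is not specified in the abstract, so any such head in a reimplementation would be a design choice rather than a reproduction of the authors' model.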