Reconstructing $S$-matrix Phases with Machine Learning
- URL: http://arxiv.org/abs/2308.09451v1
- Date: Fri, 18 Aug 2023 10:29:26 GMT
- Title: Reconstructing $S$-matrix Phases with Machine Learning
- Authors: Aurélien Dersy, Matthew D. Schwartz, Alexander Zhiboedov
- Abstract summary: We apply modern machine learning techniques to studying the unitarity constraint.
We find a new phase-ambiguous solution which pushes the known limit on such solutions significantly beyond the previous bound.
- Score: 49.1574468325115
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An important element of the $S$-matrix bootstrap program is the relationship
between the modulus of an $S$-matrix element and its phase. Unitarity relates
them by an integral equation. Even in the simplest case of elastic scattering,
this integral equation cannot be solved analytically and numerical approaches
are required. We apply modern machine learning techniques to studying the
unitarity constraint. We find that for a given modulus, when a phase exists it
can generally be reconstructed to good accuracy with machine learning.
Moreover, the loss of the reconstruction algorithm provides a good proxy for
whether a given modulus can be consistent with unitarity at all. In addition,
we study the question of whether multiple phases can be consistent with a
single modulus, finding novel phase-ambiguous solutions. In particular, we find
a new phase-ambiguous solution which pushes the known limit on such solutions
significantly beyond the previous bound.
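As a toy illustration of the loss-based reconstruction idea, consider a single elastic partial wave $a = m e^{i\delta}$, for which elastic unitarity $\mathrm{Im}\, a = |a|^2$ reduces to $m \sin\delta = m^2$. The sketch below (an assumption for illustration: the paper treats the full angular integral equation with neural networks, not this one-mode reduction) recovers $\delta$ by gradient descent on the squared unitarity residual; a residual that refuses to vanish signals a modulus inconsistent with unitarity ($m > 1$):

```python
import numpy as np

def reconstruct_phase(m, steps=5000, lr=0.5, delta0=0.3):
    """Search for a phase delta with m*sin(delta) = m**2, the elastic
    unitarity condition Im a = |a|**2 for a single partial wave
    a = m * exp(1j*delta).  Returns the phase and the final loss."""
    delta = delta0
    for _ in range(steps):
        r = m * np.sin(delta) - m**2               # unitarity residual
        delta -= lr * 2.0 * r * m * np.cos(delta)  # gradient of r**2
    return delta, (m * np.sin(delta) - m**2) ** 2

delta, loss = reconstruct_phase(0.5)   # a phase exists: loss goes to zero
_, bad_loss = reconstruct_phase(1.3)   # no phase exists: loss stays large
```

Starting from `delta0` near $\pi$ instead lands on the second branch $\delta = \pi - \arcsin m$, the simplest instance of the phase ambiguity the paper generalizes, and the residual loss plays exactly the role of the feasibility proxy described in the abstract.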
Related papers
- An approximation of the $S$ matrix for solving the Marchenko equation [0.0]
I present a new approximation of the $S$-matrix dependence on momentum $q$, formulated as a sum of a rational function and a truncated Sinc series.
This approach enables pointwise determination of the $S$ matrix with specified resolution, capturing essential features such as resonance behavior with high accuracy.
arXiv Detail & Related papers (2024-10-27T11:06:28Z) - A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we present a robust phase retrieval problem where the task is to recover an unknown signal.
Our proposed method avoids the need for a computationally expensive spectral initialization, using a simple gradient step together with outlier rejection.
arXiv Detail & Related papers (2024-09-07T06:37:23Z) - Projection by Convolution: Optimal Sample Complexity for Reinforcement Learning in Continuous-Space MDPs [56.237917407785545]
We consider the problem of learning an $\varepsilon$-optimal policy in a general class of continuous-space Markov decision processes (MDPs) having smooth Bellman operators.
Key to our solution is a novel projection technique based on ideas from harmonic analysis.
Our result bridges the gap between two popular but conflicting perspectives on continuous-space MDPs.
arXiv Detail & Related papers (2024-05-10T09:58:47Z) - Learning-based Multi-continuum Model for Multiscale Flow Problems [24.93423649301792]
We propose a learning-based multi-continuum model to enrich the homogenized equation and improve the accuracy of the single model for multiscale problems.
Our proposed learning-based multi-continuum model can resolve multiple interacted media within each coarse grid block and describe the mass transfer among them.
arXiv Detail & Related papers (2024-03-21T02:30:56Z) - Learning High-Dimensional Nonparametric Differential Equations via
Multivariate Occupation Kernel Functions [0.31317409221921133]
Learning a nonparametric system of ordinary differential equations requires learning $d$ functions of $d$ variables.
Explicit formulations scale quadratically in $d$ unless additional knowledge about system properties, such as sparsity and symmetries, is available.
We propose a linear approach to learning using the implicit formulation provided by vector-valued Reproducing Kernel Hilbert Spaces.
arXiv Detail & Related papers (2023-06-16T21:49:36Z) - Statistical-Computational Tradeoffs in Mixed Sparse Linear Regression [20.00109111254507]
We show that the problem suffers from a $\frac{k}{\mathrm{SNR}^2}$-to-$\frac{k^2}{\mathrm{SNR}^2}$ statistical-to-computational gap.
We also analyze a simple thresholding algorithm which, outside of the narrow regime where the problem is hard, solves the associated mixed regression detection problem.
arXiv Detail & Related papers (2023-03-03T18:03:49Z) - Stochastic Inexact Augmented Lagrangian Method for Nonconvex Expectation
Constrained Optimization [88.0031283949404]
Many real-world problems have complicated nonconvex functional constraints and use a large number of data points.
Our proposed method outperforms an existing method with the previously best-known result.
arXiv Detail & Related papers (2022-12-19T14:48:54Z) - Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z) - Resolving mean-field solutions of dissipative phase transitions using
permutational symmetry [0.0]
Phase transitions in dissipative quantum systems have been investigated using various analytical approaches, particularly in the mean-field (MF) limit.
These two solutions cannot be reconciled because the MF solutions above $d_c$ should be identical.
Numerical studies on large systems may not be feasible because of the exponential increase in computational complexity.
arXiv Detail & Related papers (2021-10-18T16:07:09Z) - Tightening the Dependence on Horizon in the Sample Complexity of
Q-Learning [59.71676469100807]
This work sharpens the sample complexity of synchronous Q-learning to an order of $\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^4\varepsilon^2}$ for any $0 < \varepsilon < 1$.
Our finding unveils the effectiveness of vanilla Q-learning, which matches that of speedy Q-learning without requiring extra computation and storage.
arXiv Detail & Related papers (2021-02-12T14:22:05Z)
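The rate above concerns synchronous Q-learning, in which every state-action pair is updated from an independent fresh sample at each iteration. A minimal sketch on a hypothetical two-state MDP (the transition probabilities and rewards are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state, 2-action MDP for illustration.
P = np.array([[[0.9, 0.1],   # P[s, a, s']: transition probabilities
               [0.2, 0.8]],
              [[0.5, 0.5],
               [0.1, 0.9]]])
R = np.array([[1.0, 0.0],    # R[s, a]: deterministic rewards
              [0.0, 2.0]])
gamma = 0.9

Q = np.zeros((2, 2))
for t in range(20000):
    eta = 500.0 / (500.0 + t)  # decaying (Robbins-Monro) step size
    for s in range(2):
        for a in range(2):
            # synchronous: a fresh next-state sample for every (s, a)
            s_next = rng.choice(2, p=P[s, a])
            target = R[s, a] + gamma * Q[s_next].max()
            Q[s, a] += eta * (target - Q[s, a])
```

After enough iterations `Q` approaches the fixed point of the Bellman optimality operator; the paper's contribution is a tight bound on how many such samples are needed as a function of $1-\gamma$ and $\varepsilon$.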
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated summaries (including all information) and is not responsible for any consequences of their use.