An approximation of the $S$ matrix for solving the Marchenko equation
- URL: http://arxiv.org/abs/2410.20409v2
- Date: Tue, 29 Oct 2024 17:15:20 GMT
- Title: An approximation of the $S$ matrix for solving the Marchenko equation
- Authors: N. A. Khokhlov
- Abstract summary: I present a new approximation of the $S$-matrix dependence on momentum $q$, formulated as a sum of a rational function and a truncated Sinc series.
This approach enables pointwise determination of the $S$ matrix with specified resolution, capturing essential features such as resonance behavior with high accuracy.
- Abstract: I present a new approximation of the $S$-matrix dependence on momentum $q$, formulated as a sum of a rational function and a truncated Sinc series. This approach enables pointwise determination of the $S$ matrix with specified resolution, capturing essential features such as resonance behavior with high accuracy. The resulting approximation provides a separable kernel for the Marchenko equation (fixed-$l$ inversion), reducing it to a system of linear equations for the expansion coefficients of the output kernel. Numerical results demonstrate good convergence of this method, applicable to both unitary and non-unitary $S$ matrices. Convergence is further validated through comparisons with an exactly solvable square-well potential model. The method is applied to analyze $S_{31}$ $\pi N$ scattering data.
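As a toy sketch of the approximation's structure (not the paper's actual $S$-matrix parametrization; the pole location, step size, and test function below are invented for illustration), one can capture a resonance pole with a rational term and interpolate the smooth remainder with a truncated Sinc series:

```python
import numpy as np

def sinc_interp(samples, grid, h, q):
    # Truncated Whittaker cardinal (Sinc) series; np.sinc(x) = sin(pi x)/(pi x)
    return np.sum(samples * np.sinc((q - grid) / h))

# Toy "S-matrix-like" function: a smooth background plus a narrow resonance.
pole = 1.0 - 0.2j                         # hypothetical resonance pole
def s_toy(q):
    return np.exp(-q**2) + 0.5 / (q - pole)

def rational_part(q):                     # captures the pole term exactly
    return 0.5 / (q - pole)

h, n = 0.5, 40
grid = h * np.arange(-n, n + 1)           # Sinc sampling points q_k = k*h
remainder = s_toy(grid) - rational_part(grid)   # smooth, Sinc-friendly part

q = 0.3
approx = rational_part(q) + sinc_interp(remainder, grid, h, q)
err = abs(approx - s_toy(q))
```

The split matters: the Sinc series alone converges slowly near a pole, while the rational term removes the singular behavior so the remainder is well resolved on a grid of fixed step $h$.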
Related papers
- Fine-grained Analysis and Faster Algorithms for Iteratively Solving Linear Systems [9.30306458153248]
We consider the spectral tail condition number, $\kappa_\ell$, defined as the ratio between the $\ell$-th largest and the smallest singular value of the matrix representing the system.
Some of the implications of our result, and of the use of $\kappa_\ell$, include a direct improvement over a fine-grained analysis of the Conjugate Gradient method.
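A minimal sketch of the quantity in question, on hypothetical data: $\kappa_\ell$ divides the $\ell$-th largest singular value by the smallest, so it ignores the top $\ell-1$ outlying singular values that dominate the ordinary condition number.

```python
import numpy as np

def kappa_ell(a, ell):
    # Spectral tail condition number: sigma_ell / sigma_min.
    s = np.linalg.svd(a, compute_uv=False)   # singular values, descending
    return s[ell - 1] / s[-1]

a = np.diag([10.0, 4.0, 2.0, 1.0])   # illustrative matrix with one outlier
k1 = kappa_ell(a, 1)                 # ordinary condition number: 10.0
k2 = kappa_ell(a, 2)                 # tail condition number: 4.0
```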
arXiv Detail & Related papers (2024-05-09T14:56:49Z) - Reconstructing $S$-matrix Phases with Machine Learning [49.1574468325115]
We apply modern machine learning techniques to studying the unitarity constraint.
We find a new phase-ambiguous solution which pushes the known limit on such solutions significantly beyond the previous bound.
arXiv Detail & Related papers (2023-08-18T10:29:26Z) - A Newton-CG based barrier-augmented Lagrangian method for general nonconvex conic optimization [53.044526424637866]
In this paper we consider finding an approximate second-order stationary point (SOSP) of general nonconvex conic optimization, which minimizes a twice differentiable function subject to conic constraints.
In particular, we propose a Newton-CG based barrier-augmented Lagrangian method for finding an approximate SOSP.
arXiv Detail & Related papers (2023-01-10T20:43:29Z) - Energy-independent complex single $P$-waves $NN$ potential from Marchenko equation [0.0]
We apply an isosceles triangular-pulse function set for the Marchenko equation input kernel expansion in a separable form.
We show that in the general case of a single partial wave, a linear expression of the input kernel is obtained.
We show that energy-independent complex partial potentials describe these data for single $P$-waves.
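An illustrative sketch of the basis used here (not the paper's kernel expansion itself; the test function and grid are invented): expanding a smooth function in a triangular-pulse (hat) basis on a grid of step $h$ is equivalent to piecewise-linear interpolation, so the error shrinks with $h$.

```python
import numpy as np

def hat_expand(f, h, xmax, x):
    # Expansion in triangular (hat) pulses centered on a step-h grid is
    # exactly piecewise-linear interpolation between the nodal values.
    nodes = np.arange(0.0, xmax + h / 2, h)
    return np.interp(x, nodes, f(nodes))

x = np.linspace(0.0, 5.0, 1001)
err_coarse = np.max(np.abs(hat_expand(np.cos, 0.1, 5.0, x) - np.cos(x)))
err_fine = np.max(np.abs(hat_expand(np.cos, 0.05, 5.0, x) - np.cos(x)))
```

Halving the step roughly quarters the maximum error, the usual second-order behavior of linear interpolation.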
arXiv Detail & Related papers (2022-04-02T22:13:44Z) - Hybrid Model-based / Data-driven Graph Transform for Image Coding [54.31406300524195]
We present a hybrid model-based / data-driven approach to encode an intra-prediction residual block.
The first $K$ eigenvectors of a transform matrix are derived from a statistical model, e.g., the asymmetric discrete sine transform (ADST) for stability.
Using WebP as a baseline image codec, experimental results show that our hybrid graph transform achieves better energy compaction than the default discrete cosine transform (DCT) and better stability than the KLT.
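A hedged sketch of the hybrid idea (sizes, data, and the sine basis standing in for the ADST are all illustrative assumptions): the first $K$ basis vectors come from a fixed statistical model, and the remaining ones are learned from data inside the orthogonal complement of the model span, so the combined transform stays orthogonal.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3

# Model-based part: a DST-like sine basis (stand-in for the ADST).
i, j = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1), indexing="ij")
model, _ = np.linalg.qr(np.sin(np.pi * i * j / (n + 1)))
u_model = model[:, :k]

# Data-driven part: top eigenvectors of a sample covariance, restricted
# to the orthogonal complement of the model subspace.
x = rng.standard_normal((n, 200))              # stand-in residual blocks
p = np.eye(n) - u_model @ u_model.T            # projector onto complement
w, v = np.linalg.eigh(p @ (x @ x.T / 200) @ p)
u_data = v[:, ::-1][:, : n - k]                # top n-k data eigenvectors

t = np.hstack([u_model, u_data])               # hybrid orthogonal transform
```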
arXiv Detail & Related papers (2022-03-02T15:36:44Z) - An algebraic form of the Marchenko inversion. Partial waves with orbital momentum $l\ge 0$ [0.0]
We expand the Marchenko equation kernel in a separable form using a triangular wave set.
The linear expression is valid for any orbital angular momentum $l$.
The kernel allows one to find the potential function of the radial Schrödinger equation with $h$-step accuracy.
arXiv Detail & Related papers (2021-12-29T00:48:13Z) - Mean-Square Analysis with An Application to Optimal Dimension Dependence of Langevin Monte Carlo [60.785586069299356]
This work provides a general framework for the non-asymptotic analysis of sampling error in 2-Wasserstein distance.
Our theoretical analysis is further validated by numerical experiments.
arXiv Detail & Related papers (2021-09-08T18:00:05Z) - Analysis of One-Hidden-Layer Neural Networks via the Resolvent Method [0.0]
Motivated by random neural networks, we consider the random matrix $M = YY^\ast$ with $Y = f(WX)$.
We prove that the Stieltjes transform of the limiting spectral distribution satisfies a quartic self-consistent equation up to some error terms.
In addition, we extend the previous results to the case of additive bias $Y=f(WX+B)$ with $B$ being an independent rank-one Gaussian random matrix.
arXiv Detail & Related papers (2021-05-11T15:17:39Z) - High-Dimensional Gaussian Process Inference with Derivatives [90.8033626920884]
We show that in the low-data regime $N < D$, the Gram matrix can be decomposed in a manner that reduces the cost of inference to $\mathcal{O}(N^2D + (N^2)^3)$.
We demonstrate this potential in a variety of tasks relevant for machine learning, such as optimization and Hamiltonian Monte Carlo with predictive gradients.
arXiv Detail & Related papers (2021-02-15T13:24:41Z) - Linear Time Sinkhorn Divergences using Positive Features [51.50788603386766]
Solving optimal transport with an entropic regularization requires computing an $n\times n$ kernel matrix that is repeatedly applied to a vector.
We propose to use instead ground costs of the form $c(x,y)=-\log\langle\varphi(x),\varphi(y)\rangle$ where $\varphi$ is a map from the ground space onto the positive orthant $\mathbb{R}^r_+$, with $r\ll n$.
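A sketch of why this cost gives linear-time kernel products (regularization strength $\varepsilon = 1$ assumed for simplicity, and the feature maps below are hypothetical random stand-ins): with $c(x,y)=-\log\langle\varphi(x),\varphi(y)\rangle$, the Gibbs kernel $e^{-c}$ is exactly the rank-$r$ matrix of feature inner products, so each kernel-vector product in Sinkhorn's iterations costs $O(nr)$ instead of $O(n^2)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 500, 8                       # r << n
phi_x = rng.random((n, r)) + 0.1    # positive features (hypothetical maps)
phi_y = rng.random((n, r)) + 0.1
v = rng.random(n)

# exp(-c(x_i, y_j)) = <phi(x_i), phi(y_j)>, i.e. the kernel is phi_x @ phi_y.T.
fast = phi_x @ (phi_y.T @ v)        # O(n r): never forms the n x n kernel
slow = (phi_x @ phi_y.T) @ v        # O(n^2) reference, for checking only
```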
arXiv Detail & Related papers (2020-06-12T10:21:40Z) - Solving the Robust Matrix Completion Problem via a System of Nonlinear Equations [28.83358353043287]
We consider the problem of robust matrix completion, which aims to recover a low-rank matrix $L_*$ and a sparse matrix $S_*$ from incomplete observations of their sum $M=L_*+S_*\in\mathbb{R}^{m\times n}$.
The algorithm is highly parallelizable and suitable for large scale problems.
Numerical simulations show that the simple method works as expected and is comparable with state-of-the-art methods.
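A toy sketch of the underlying decomposition $M = L_* + S_*$ on a fully observed matrix (alternating hard-thresholding and truncated SVD; this is NOT the paper's nonlinear-equation solver, and the sizes and outlier values are invented):

```python
import numpy as np

u = np.arange(1, 6) / 5.0
l_true = np.outer(u, np.ones(5))          # rank-1 low-rank part
s_true = np.zeros((5, 5))
s_true[0, 0], s_true[2, 3] = 10.0, -10.0  # sparse outliers
m = l_true + s_true

l = np.zeros_like(m)
for _ in range(10):
    r = m - l
    s = np.where(np.abs(r) > 2.0, r, 0.0)      # hard-threshold: sparse estimate
    uu, sv, vt = np.linalg.svd(m - s, full_matrices=False)
    l = sv[0] * np.outer(uu[:, 0], vt[0])      # best rank-1 fit: low-rank estimate

rel_err = np.linalg.norm(l - l_true) / np.linalg.norm(l_true)
```

On this easy instance the outliers are far above the threshold, so the alternation converges quickly to the true decomposition.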
arXiv Detail & Related papers (2020-03-24T17:28:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.