Numerical Derivatives, Projection Coefficients, and Truncation Errors in Analytic Hilbert Space With Gaussian Measure
- URL: http://arxiv.org/abs/2504.16246v2
- Date: Sat, 17 May 2025 18:56:38 GMT
- Title: Numerical Derivatives, Projection Coefficients, and Truncation Errors in Analytic Hilbert Space With Gaussian Measure
- Authors: M. W. AlMasri
- Abstract summary: We introduce the projection coefficients algorithm, a novel method for determining the leading terms of the Taylor series expansion of a given holomorphic function from a graph perspective. The accuracy of the computed derivative values depends on the precision and reliability of the numerical routines used to evaluate these inner products.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We introduce the projection coefficients algorithm, a novel method for determining the leading terms of the Taylor series expansion of a given holomorphic function from a graph perspective, while also analyzing the associated truncation errors. Let $ f(z) $ be a holomorphic function, and let $\langle \cdot, \cdot \rangle$ denote the inner product defined over an analytic Hilbert space equipped with a Gaussian measure. The derivatives $ f^{(n)}(z) $ at a point $ z_0 $ can be computed theoretically by evaluating an inner product of the form $ f^{(n)}(z_0) = \frac{\langle z^n, f(z) \rangle}{C}, $ where $ C $ is a normalization constant. Specifically, in the Bargmann space (the analytic Hilbert space with a Gaussian weight and orthogonal monomials), this constant is $ \pi $. This result assumes that $ f(z) $ is a holomorphic function of a single complex variable. The accuracy of the computed derivative values depends on the precision and reliability of the numerical routines used to evaluate these inner products. The projection coefficients offer valuable insights into certain properties of analytic functions, such as whether they are odd or even, and whether the $ n $-th derivatives exist at a given point $ z_0 $. Due to its relevance to quantum theory, our approach establishes a correspondence between quantum circuits derived from quantum systems and the theory of analytic functions. This study lays the groundwork for further applications in numerical analysis and approximation theory within Hilbert spaces equipped with Gaussian measures. Additionally, it holds potential for advancing fields such as quantum computing, reproducing kernel Hilbert space (RKHS) methods -- which are widely used in support vector machines (SVM) and other areas of machine learning -- and probabilistic numerics.
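The derivative formula above can be illustrated numerically. The sketch below is a minimal, assumption-laden implementation: it takes $z_0 = 0$, uses the standard unnormalized Bargmann inner product $\langle g, f \rangle = \int \overline{g(z)}\, f(z)\, e^{-|z|^2}\, dA(z)$ over the complex plane (so that $C = \pi$, as in the abstract), and evaluates the integral on a simple polar quadrature grid whose parameters (`r_max`, `nr`, `ntheta`) are illustrative choices, not part of the paper's method.

```python
import numpy as np

def bargmann_derivative(f, n, r_max=8.0, nr=400, ntheta=256):
    """Approximate f^(n)(0) as <z^n, f(z)> / pi, where
    <g, f> = integral of conj(g(z)) f(z) exp(-|z|^2) dA(z)
    over the complex plane, evaluated on a polar grid."""
    r = np.linspace(0.0, r_max, nr)            # exp(-r^2) is negligible beyond r_max
    theta = 2 * np.pi * np.arange(ntheta) / ntheta
    R, T = np.meshgrid(r, theta, indexing="ij")
    Z = R * np.exp(1j * T)
    # integrand of <z^n, f>: conj(z)^n f(z) e^{-|z|^2}, with dA = r dr dtheta
    vals = np.conj(Z) ** n * f(Z) * np.exp(-R ** 2) * R
    ang = vals.mean(axis=1) * 2 * np.pi        # uniform (periodic) rule in theta
    dr = r[1] - r[0]                           # trapezoidal rule in r
    integral = dr * (0.5 * ang[0] + ang[1:-1].sum() + 0.5 * ang[-1])
    return integral / np.pi                    # normalization constant C = pi

# Sanity check with f(z) = exp(z), whose derivatives at 0 all equal 1
for n in range(4):
    print(n, bargmann_derivative(np.exp, n).real)
```

The same routine also exposes the parity property mentioned in the abstract: for an even function such as $\cos z$, the odd projection coefficients vanish (up to quadrature error), since $\langle z^{2k+1}, \cos z \rangle = 0$.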
Related papers
- $p$-Adic Polynomial Regression as Alternative to Neural Network for Approximating $p$-Adic Functions of Many Variables [55.2480439325792]
A regression model is constructed that allows approximating continuous functions with any degree of accuracy. The proposed model can be considered as a simple alternative to possible $p$-adic models based on neural network architecture.
arXiv Detail & Related papers (2025-03-30T15:42:08Z) - Second quantization for classical nonlinear dynamics [0.0]
We propose a framework for representing the evolution of observables of measure-preserving ergodic flows through infinite-dimensional rotation systems on tori. We show that their Banach algebra spectra, $\sigma(F_w(\mathcal{H}_\tau))$, decompose into a family of tori of potentially infinite dimension. Our scheme also employs a procedure for representing observables of the original system by reproducing functions on finite-dimensional tori in $\sigma(F_w(\mathcal{H}_\tau))$ of arbitrarily large degree.
arXiv Detail & Related papers (2025-01-13T15:36:53Z) - Upper Bounds for Learning in Reproducing Kernel Hilbert Spaces for Non IID Samples [1.1510009152620668]
We study a Markov chain-based gradient algorithm in general Hilbert spaces, aiming to approximate the optimal solution of a quadratic loss function. We extend these results to an online regularized learning algorithm in reproducing kernel Hilbert spaces.
arXiv Detail & Related papers (2024-10-10T20:34:22Z) - Gaussian kernel expansion with basis functions uniformly bounded in $\mathcal{L}_{\infty}$ [0.6138671548064355]
Kernel expansions are a topic of considerable interest in machine learning.
Recent work in the literature has derived some of these results by assuming uniformly bounded basis functions in $\mathcal{L}_\infty$.
Our main result is the construction on $\mathbb{R}^2$ of a Gaussian kernel expansion with weights in $\ell_p$ for any $p>1$.
arXiv Detail & Related papers (2024-10-02T10:10:30Z) - Tensor network approximation of Koopman operators [0.0]
We propose a framework for approximating the evolution of observables of measure-preserving ergodic systems.
Our approach is based on a spectrally-convergent approximation of the skew-adjoint Koopman generator.
A key feature of this quantum-inspired approximation is that it captures information from a tensor product space of dimension $(2d+1)^n$.
arXiv Detail & Related papers (2024-07-09T21:40:14Z) - Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z) - One-dimensional pseudoharmonic oscillator: classical remarks and quantum-information theory [0.0]
Motion in a potential that is a combination of positive quadratic and inverse quadratic functions of the position is considered.
The dependence on the particle energy and the factor $\mathfrak{a}$ describing the relative strength of its constituents is described.
arXiv Detail & Related papers (2023-04-13T11:50:51Z) - Theory of free fermions under random projective measurements [43.04146484262759]
We develop an analytical approach to the study of one-dimensional free fermions subject to random projective measurements of local site occupation numbers.
We derive a non-linear sigma model (NLSM) as an effective field theory of the problem.
arXiv Detail & Related papers (2023-04-06T15:19:33Z) - Efficient displacement convex optimization with particle gradient descent [57.88860627977882]
Particle gradient descent is widely used to optimize functions of probability measures.
This paper considers particle gradient descent with a finite number of particles and establishes its theoretical guarantees to optimize functions that are displacement convex in measures.
arXiv Detail & Related papers (2023-02-09T16:35:59Z) - Random matrices in service of ML footprint: ternary random features with no performance loss [55.30329197651178]
We show that the eigenspectrum of $\mathbf{K}$ is independent of the distribution of the i.i.d. entries of $\mathbf{w}$.
We propose a novel random features technique, called Ternary Random Feature (TRF).
The computation of the proposed random features requires no multiplication and a factor of $b$ less bits for storage compared to classical random features.
arXiv Detail & Related papers (2021-10-05T09:33:49Z) - Stochastic behavior of outcome of Schur-Weyl duality measurement [45.41082277680607]
We focus on the measurement defined by the decomposition based on Schur-Weyl duality on $n$ qubits.
We derive various types of distributions, including a kind of central limit theorem as $n$ goes to infinity.
arXiv Detail & Related papers (2021-04-26T15:03:08Z) - High-Dimensional Gaussian Process Inference with Derivatives [90.8033626920884]
We show that in the low-data regime $N<D$, the Gram matrix can be decomposed in a manner that reduces the cost of inference to $\mathcal{O}(N^2D + (N^2)^3)$.
We demonstrate this potential in a variety of tasks relevant for machine learning, such as optimization and Hamiltonian Monte Carlo with predictive gradients.
arXiv Detail & Related papers (2021-02-15T13:24:41Z) - Nonparametric approximation of conditional expectation operators [0.3655021726150368]
We investigate the approximation of the $L^2$-operator defined by $[Pf](x) := \mathbb{E}[\, f(Y) \mid X = x \,]$ under minimal assumptions.
We prove that $P$ can be arbitrarily well approximated in operator norm by Hilbert-Schmidt operators acting on a reproducing kernel space.
arXiv Detail & Related papers (2020-12-23T19:06:12Z) - Linear Time Sinkhorn Divergences using Positive Features [51.50788603386766]
Solving optimal transport with an entropic regularization requires computing a $n\times n$ kernel matrix that is repeatedly applied to a vector.
We propose to use instead ground costs of the form $c(x,y)=-\log\langle\varphi(x),\varphi(y)\rangle$ where $\varphi$ is a map from the ground space onto the positive orthant $\mathbb{R}^r_+$, with $r\ll n$.
arXiv Detail & Related papers (2020-06-12T10:21:40Z) - Spectral density estimation with the Gaussian Integral Transform [91.3755431537592]
The spectral density operator $\hat\rho(\omega)=\delta(\omega-\hat H)$ plays a central role in linear response theory.
We describe a near optimal quantum algorithm providing an approximation to the spectral density.
arXiv Detail & Related papers (2020-04-10T03:14:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.