Approximation theory for Green's functions via the Lanczos algorithm
- URL: http://arxiv.org/abs/2505.00089v1
- Date: Wed, 30 Apr 2025 18:00:43 GMT
- Title: Approximation theory for Green's functions via the Lanczos algorithm
- Authors: Gabriele Pinna, Oliver Lunt, Curt von Keyserlingk
- Abstract summary: It is known that Green's functions can be expressed as continued fractions. We present a theory concerning errors in approximating Green's functions using continued fractions.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is known that Green's functions can be expressed as continued fractions; the content at the $n$-th level of the fraction is encoded in a coefficient $b_n$, which can be recursively obtained using the Lanczos algorithm. We present a theory concerning errors in approximating Green's functions using continued fractions when only the first $N$ coefficients are known exactly. Our focus lies on the stitching approximation (also known as the recursion method), wherein truncated continued fractions are completed with a sequence of coefficients for which exact solutions are available. We assume a now standard conjecture about the growth of the Lanczos coefficients in chaotic many-body systems, and that the stitching approximation converges to the correct answer. Given these assumptions, we show that the rate of convergence of the stitching approximation to a Green's function depends strongly on the decay of staggered subleading terms in the Lanczos coefficients. Typically, the decay of the error term ranges from $1/\mathrm{poly}(N)$ in the best case to $1/\mathrm{poly}(\log N)$ in the worst case, depending on the differentiability of the spectral function at the origin. We present different variants of this error estimate for different asymptotic behaviors of the $b_n$, and we also conjecture a relationship between the asymptotic behavior of the $b_n$'s and the smoothness of the Green's function. Lastly, with the above assumptions, we prove a formula linking the spectral function's value at the origin to a product of continued fraction coefficients, which we then apply to estimate the diffusion constant in the mixed field Ising model.
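The two ingredients mentioned in the abstract, the Lanczos recursion that generates the coefficients and the bottom-up evaluation of the resulting continued fraction, can be sketched as follows. This is a generic finite-dimensional illustration, not code from the paper: a small random Hermitian matrix stands in for the many-body operator, and the full-depth continued fraction is checked against the exact resolvent matrix element.

```python
import numpy as np

def lanczos(H, v0, N):
    """Run N steps of the Lanczos recursion on Hermitian H from start vector v0.

    Returns the tridiagonal coefficients (a, b); the b_n are the
    continued-fraction coefficients discussed in the abstract.
    """
    v = v0 / np.linalg.norm(v0)
    v_prev = np.zeros_like(v)
    beta = 0.0
    a, b = [], []
    for _ in range(N):
        w = H @ v
        alpha = np.vdot(v, w).real
        a.append(alpha)
        w = w - alpha * v - beta * v_prev
        beta = np.linalg.norm(w)
        if beta < 1e-12:       # Krylov space exhausted
            break
        b.append(beta)
        v_prev, v = v, w / beta
    return np.array(a), np.array(b)

def green_cf(z, a, b):
    """Evaluate the truncated continued fraction for G(z) bottom-up:
    G(z) = 1 / (z - a_0 - b_1^2 / (z - a_1 - b_2^2 / (...)))."""
    g = 1.0 / (z - a[-1])
    for n in range(len(a) - 2, -1, -1):
        g = 1.0 / (z - a[n] - b[n] ** 2 * g)
    return g

# Demo: at full Krylov depth the continued fraction reproduces the exact
# resolvent element <v|(z - H)^{-1}|v> of a small Hermitian matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
H = (M + M.T) / 2
v0 = rng.standard_normal(6)
a, b = lanczos(H, v0, 6)

z = 2.0 + 0.1j  # a point off the real axis
v = v0 / np.linalg.norm(v0)
G_exact = np.vdot(v, np.linalg.solve(z * np.eye(6) - H, v))
err = abs(G_exact - green_cf(z, a, b))
```

The paper's stitching approximation would instead complete the truncated $b_n$ sequence with an exactly solvable tail rather than terminating it, but the bottom-up evaluation above is the same.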
Related papers
- Nonasymptotic Analysis of Stochastic Gradient Descent with the Richardson-Romberg Extrapolation [22.652143194356864]
We address the problem of solving strongly convex and smooth problems using stochastic gradient descent (SGD) with a constant step size. We provide an expansion of the mean-squared error of the resulting estimator with respect to the number of iterations $n$. Our analysis relies on the properties of the SGD iterates viewed as a time-homogeneous Markov chain.
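The Richardson-Romberg idea in the title can be illustrated with a minimal sketch, assuming the standard construction rather than the paper's exact setup: run constant-step SGD at step sizes $\gamma$ and $\gamma/2$, then combine the Polyak-Ruppert averages as $2\bar\theta_{\gamma/2} - \bar\theta_\gamma$ so the leading $O(\gamma)$ bias terms cancel. The toy objective $f(\theta) = e^\theta - \theta$ and the function `sgd_average` are illustrative choices, not from the paper.

```python
import numpy as np

def sgd_average(gamma, n_iters, seed=0):
    """Polyak-Ruppert average of constant-step SGD on f(theta) = exp(theta) - theta.

    The gradient exp(theta) - 1 is observed with additive Gaussian noise;
    with a constant step the averaged iterate carries an O(gamma) bias.
    """
    rng = np.random.default_rng(seed)
    theta, total = 0.0, 0.0
    for _ in range(n_iters):
        noisy_grad = np.exp(theta) - 1.0 + rng.standard_normal()
        theta -= gamma * noisy_grad
        total += theta
    return total / n_iters

# Two runs at step sizes gamma and gamma/2, combined so the O(gamma)
# bias terms cancel (Richardson-Romberg extrapolation).
avg_coarse = sgd_average(gamma=0.1, n_iters=50_000)
avg_fine = sgd_average(gamma=0.05, n_iters=50_000)
rr_estimate = 2.0 * avg_fine - avg_coarse  # residual bias is O(gamma^2)
```

A non-quadratic objective is needed here: for a quadratic loss with additive noise the constant-step average is already unbiased, so there would be no $O(\gamma)$ term to cancel.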
arXiv Detail & Related papers (2024-10-07T15:02:48Z) - Convergence Rates for Stochastic Approximation: Biased Noise with Unbounded Variance, and Applications [2.0584253077707477]
We study the convergence properties of the Stochastic Gradient Descent (SGD) method for finding a stationary point of an objective function $J(\cdot)$.
Our results apply to a class of "invex" functions, which have the property that every stationary point is also a global minimizer.
arXiv Detail & Related papers (2023-12-05T15:22:39Z) - Kernel-based off-policy estimation without overlap: Instance optimality beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z) - Optimal and instance-dependent guarantees for Markovian linear stochastic approximation [47.912511426974376]
We show a non-asymptotic bound of the order $t_{\mathrm{mix}} \tfrac{d}{n}$ on the squared error of the last iterate of a standard scheme.
We derive corollaries of these results for policy evaluation with Markov noise.
arXiv Detail & Related papers (2021-12-23T18:47:50Z) - Optimal policy evaluation using kernel-based temporal difference methods [78.83926562536791]
We use reproducing kernel Hilbert spaces for estimating the value function of an infinite-horizon discounted Markov reward process.
We derive a non-asymptotic upper bound on the error with explicit dependence on the eigenvalues of the associated kernel operator.
We prove minimax lower bounds over sub-classes of MRPs.
arXiv Detail & Related papers (2021-09-24T14:48:20Z) - Finding Global Minima via Kernel Approximations [90.42048080064849]
We consider the global minimization of smooth functions based solely on function evaluations.
In this paper, we consider an approach that jointly models the function to approximate and finds a global minimum.
arXiv Detail & Related papers (2020-12-22T12:59:30Z) - Approximation of BV functions by neural networks: A regularity theory approach [0.0]
We are concerned with the approximation of functions by single hidden layer neural networks with ReLU activation functions on the unit circle.
We first study the convergence to equilibrium of the gradient flow associated with the cost function with a penalization.
As our penalization biases the weights to be bounded, this leads us to study how well a network with bounded weights can approximate a given function of bounded variation.
arXiv Detail & Related papers (2020-12-15T13:58:44Z) - Tight Nonparametric Convergence Rates for Stochastic Gradient Descent
under the Noiseless Linear Model [0.0]
We analyze the convergence of single-pass, fixed step-size stochastic gradient descent on the least-square risk under this model.
As a special case, we analyze an online algorithm for estimating a real function on the unit interval from the noiseless observation of its value at randomly sampled points.
arXiv Detail & Related papers (2020-06-15T08:25:50Z) - The Convergence Indicator: Improved and completely characterized parameter bounds for actual convergence of Particle Swarm Optimization [68.8204255655161]
We introduce a new convergence indicator that can be used to calculate whether the particles will finally converge to a single point or diverge.
Using this convergence indicator we provide the actual bounds completely characterizing parameter regions that lead to a converging swarm.
arXiv Detail & Related papers (2020-06-06T19:08:05Z) - On Linear Stochastic Approximation: Fine-grained Polyak-Ruppert and Non-Asymptotic Concentration [115.1954841020189]
We study the asymptotic and non-asymptotic properties of stochastic approximation procedures with Polyak-Ruppert averaging.
We prove a central limit theorem (CLT) for the averaged iterates with fixed step size and number of iterations going to infinity.
arXiv Detail & Related papers (2020-04-09T17:54:18Z) - Complexity of Finding Stationary Points of Nonsmooth Nonconvex Functions [84.49087114959872]
We provide the first non-asymptotic analysis for finding stationary points of nonsmooth, nonconvex functions.
In particular, we study Hadamard semi-differentiable functions, perhaps the largest class of nonsmooth functions.
arXiv Detail & Related papers (2020-02-10T23:23:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.