Constructive Approximation under Carleman's Condition, with Applications to Smoothed Analysis
- URL: http://arxiv.org/abs/2512.04371v1
- Date: Thu, 04 Dec 2025 01:40:05 GMT
- Title: Constructive Approximation under Carleman's Condition, with Applications to Smoothed Analysis
- Authors: Frederic Koehler, Beining Wu
- Abstract summary: We develop a fairly tight quantitative analogue of the underlying Denjoy-Carleman theorem via complex analysis. We establish $L^2$ approximation-theoretic results for functions over general classes of distributions. As another application, we show that the Paley--Wiener class of functions bandlimited to $[-Ω,Ω]$ admits superexponential rates of approximation over all strictly sub-exponential distributions.
- Score: 13.02728413691724
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A classical result of Carleman, based on the theory of quasianalytic functions, shows that polynomials are dense in $L^2(μ)$ for any $μ$ such that the moments $\int x^k dμ$ do not grow too rapidly as $k \to \infty$. In this work, we develop a fairly tight quantitative analogue of the underlying Denjoy-Carleman theorem via complex analysis, and show that this allows for nonasymptotic control of the rate of approximation by polynomials for any smooth function with polynomial growth at infinity. In many cases, this allows us to establish $L^2$ approximation-theoretic results for functions over general classes of distributions (e.g., multivariate sub-Gaussian or sub-exponential distributions) which were previously known only in special cases. As one application, we show that the Paley--Wiener class of functions bandlimited to $[-Ω,Ω]$ admits superexponential rates of approximation over all strictly sub-exponential distributions, which leads to a new characterization of the class. As another application, we solve an open problem recently posed by Chandrasekaran, Klivans, Kontonis, Meka and Stavropoulos on the smoothed analysis of learning, and also obtain quantitative improvements to their main results and applications.
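The abstract's two central objects, Carleman's moment condition and fast polynomial approximation of bandlimited functions, can be illustrated numerically. The sketch below is not the paper's construction: the target $f(x) = \sin(2x)$, the standard Gaussian base measure, and the degrees shown are illustrative choices. It projects $f$ onto probabilists' Hermite polynomials, which are orthogonal under the Gaussian, and prints the $L^2$ error as the degree grows.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Carleman's condition: polynomials are dense in L^2(mu) when the moments
# M_k = int x^k dmu grow slowly enough that sum_k M_{2k}^{-1/(2k)} diverges.
# For the standard Gaussian, M_{2k} = (2k-1)!!, and that sum diverges.

# Illustration (not the paper's construction): approximate the bandlimited
# function f(x) = sin(2x) in L^2 of the standard Gaussian by projecting onto
# probabilists' Hermite polynomials He_n, orthogonal under that measure.
f = lambda x: np.sin(2.0 * x)

nodes, weights = hermegauss(120)           # Gauss quadrature for weight e^{-x^2/2}
weights = weights / np.sqrt(2.0 * np.pi)   # renormalize to the Gaussian measure

def l2_error(degree):
    """L^2(Gaussian) error of the degree-`degree` Hermite projection of f."""
    coeffs = [np.sum(weights * f(nodes) * hermeval(nodes, [0.0] * n + [1.0]))
              / math.factorial(n)           # since E[He_n(X)^2] = n!
              for n in range(degree + 1)]
    residual = f(nodes) - hermeval(nodes, coeffs)
    return float(np.sqrt(np.sum(weights * residual ** 2)))

for d in (2, 6, 10, 14):
    print(d, l2_error(d))   # the error falls off very rapidly in d
```

The fast decay visible here is the kind of superexponential rate the abstract attributes to bandlimited functions under strictly sub-exponential distributions.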
Related papers
- Sandwiching Polynomials for Geometric Concepts with Low Intrinsic Dimension [23.43080600040766]
We give a new method for constructing low-degree sandwichings that yield greatly improved degree bounds for several fundamental function classes and marginal distributions. Our proof is relatively simple and directly uses the smoothness of the target function's boundary to construct sandwiching Lipschitz functions.
arXiv Detail & Related papers (2026-02-27T16:59:18Z) - Quantum Advantage via Solving Multivariate Polynomials [21.099298465042583]
We show that degree three functions are enough to instantiate the random oracle to obtain non-relativized quantum advantage. We show that there is an expected-time quantum algorithm that provably solves $\{p_i(x_1, \ldots, x_n) = y_i\}_{i \in [m]}$ for $m \geq n$ over $\mathbb{F}$.
arXiv Detail & Related papers (2025-09-08T23:19:20Z) - On Uniform Weighted Deep Polynomial approximation [0.0]
We introduce and analyze a class of weighted deep approximants tailored for functions with asymmetric behavior: growing on one side and decaying on the other. We show numerically that this framework outperforms Taylor, Chebyshev, and standard deep approximants, even when all use the same number of parameters.
arXiv Detail & Related papers (2025-06-26T14:25:32Z) - Compressed and distributed least-squares regression: convergence rates with applications to Federated Learning [11.870656106069447]
We investigate the impact of compression on gradient algorithms for machine learning. We highlight differences in terms of convergence rates between several unbiased compression operators. We extend our results to the case of federated learning.
arXiv Detail & Related papers (2023-08-02T18:02:00Z) - Stochastic Submodular Maximization via Polynomial Estimators [13.498923494159312]
We focus on maximizing submodular functions that are defined as expectations over a class of submodular functions with an unknown distribution.
We show that for monotone functions of this form, the continuous greedy algorithm attains an approximation ratio (in expectation) arbitrarily close to $(1-1/e) \approx 63\%$ using a polynomial estimator.
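For intuition on the $(1-1/e)$ guarantee mentioned above, here is a minimal sketch of plain greedy maximization of a monotone submodular (coverage) function under a cardinality constraint. The toy instance is made up for illustration, and the paper's polynomial estimators for the stochastic setting are not implemented here.

```python
import itertools
import math

# Toy monotone submodular function: unweighted coverage over a small universe.
# The sets below are an arbitrary illustrative instance.
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}, 3: {1, 6}}

def coverage(chosen):
    """Number of universe elements covered by the chosen sets."""
    return len(set().union(*(sets[i] for i in chosen))) if chosen else 0

def greedy(k):
    """Pick k sets, each time taking the one with the largest marginal gain."""
    chosen = []
    for _ in range(k):
        best = max((i for i in sets if i not in chosen),
                   key=lambda i: coverage(chosen + [i]))
        chosen.append(best)
    return chosen

k = 2
opt = max(coverage(list(c)) for c in itertools.combinations(sets, k))
sol = coverage(greedy(k))
print(sol, opt)  # → 6 6
```

For monotone submodular objectives, greedy's value is guaranteed to be at least $(1-1/e) \cdot \mathrm{OPT}$; on this tiny instance it happens to match the optimum exactly.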
arXiv Detail & Related papers (2023-03-17T13:32:33Z) - Mean-Square Analysis with An Application to Optimal Dimension Dependence
of Langevin Monte Carlo [60.785586069299356]
This work provides a general framework for the non-asymptotic analysis of sampling error in 2-Wasserstein distance.
Our theoretical analysis is further validated by numerical experiments.
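As a concrete reference point for the sampler being analyzed, here is a generic unadjusted Langevin sketch, $x_{k+1} = x_k - h\,\nabla U(x_k) + \sqrt{2h}\,\xi_k$, run on an illustrative standard-Gaussian target ($U(x) = x^2/2$); the step size and chain counts are arbitrary choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def langevin_chains(grad_U, n_chains, step, n_steps):
    """Unadjusted Langevin update: x <- x - step*grad_U(x) + sqrt(2*step)*noise."""
    x = np.zeros(n_chains)
    for _ in range(n_steps):
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * rng.standard_normal(n_chains)
    return x

# Target density proportional to exp(-U) with U(x) = x^2/2 (standard Gaussian).
samples = langevin_chains(lambda x: x, n_chains=2000, step=0.05, n_steps=400)
print(samples.mean(), samples.var())  # roughly 0 and 1, with O(step) bias
```

The small systematic inflation of the variance is the discretization bias whose dimension dependence mean-square analyses of this kind quantify.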
arXiv Detail & Related papers (2021-09-08T18:00:05Z) - Finding Global Minima via Kernel Approximations [90.42048080064849]
We consider the global minimization of smooth functions based solely on function evaluations.
In this paper, we consider an approach that jointly models the function to approximate and finds a global minimum.
arXiv Detail & Related papers (2020-12-22T12:59:30Z) - On Function Approximation in Reinforcement Learning: Optimism in the
Face of Large State Spaces [208.67848059021915]
We study the exploration-exploitation tradeoff at the core of reinforcement learning.
In particular, we prove that the complexity of the function class $\mathcal{F}$ characterizes the complexity of the function.
Our regret bounds are independent of the number of episodes.
arXiv Detail & Related papers (2020-11-09T18:32:22Z) - On Linear Stochastic Approximation: Fine-grained Polyak-Ruppert and
Non-Asymptotic Concentration [115.1954841020189]
We study the asymptotic and non-asymptotic properties of stochastic approximation procedures with Polyak-Ruppert averaging.
We prove a central limit theorem (CLT) for the averaged iterates with fixed step size and number of iterations going to infinity.
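A minimal sketch of the averaging scheme described above: a 1-d linear stochastic approximation with a fixed step size, where the constants are illustrative rather than from the paper. The Polyak-Ruppert average concentrates around the root even though the raw iterate keeps fluctuating.

```python
import numpy as np

rng = np.random.default_rng(1)

# Solve a*x = b from noisy observations of the residual a*x - b + noise.
a, b = 2.0, 4.0           # root x* = b / a = 2
eta = 0.05                # fixed step size
x, running_avg = 0.0, 0.0
n_steps = 20000
for k in range(n_steps):
    grad = a * x - b + rng.standard_normal()
    x = x - eta * grad                          # raw iterate: noise scale ~ sqrt(eta)
    running_avg += (x - running_avg) / (k + 1)  # Polyak-Ruppert average

print(x, running_avg)  # the average sits much closer to 2 than the raw iterate's noise level
```

The CLT mentioned in the summary concerns exactly this averaged iterate: with the step size held fixed, its fluctuations around the root shrink at the $1/\sqrt{n}$ rate.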
arXiv Detail & Related papers (2020-04-09T17:54:18Z) - Complexity of Finding Stationary Points of Nonsmooth Nonconvex Functions [84.49087114959872]
We provide the first non-asymptotic analysis for finding stationary points of nonsmooth, nonconvex functions.
In particular, we study Hadamard semi-differentiable functions, perhaps the largest class of nonsmooth functions.
arXiv Detail & Related papers (2020-02-10T23:23:04Z) - Better Theory for SGD in the Nonconvex World [2.6397379133308214]
Large-scale nonconvex optimization problems are ubiquitous in modern machine learning.
We perform experiments on the effects of a wide array of minibatch sizes on Stochastic Gradient Descent (SGD).
arXiv Detail & Related papers (2020-02-09T09:56:06Z) - A refinement of Reznick's Positivstellensatz with applications to
quantum information theory [72.8349503901712]
In his solution to Hilbert's 17th problem, Artin showed that any positive definite polynomial in several variables can be written as the quotient of two sums of squares.
Reznick showed that the denominator in Artin's result can always be chosen as an $N$-th power of the squared norm of the variables.
arXiv Detail & Related papers (2019-09-04T11:46:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.