On the Convergence of Irregular Sampling in Reproducing Kernel Hilbert Spaces
- URL: http://arxiv.org/abs/2504.13623v1
- Date: Fri, 18 Apr 2025 10:57:16 GMT
- Title: On the Convergence of Irregular Sampling in Reproducing Kernel Hilbert Spaces
- Authors: Armin Iske
- Abstract summary: We discuss approximation properties of kernel regression under minimalistic assumptions on both the kernel and the input data. We first prove error estimates in the kernel's RKHS norm. This leads to new results concerning uniform convergence of kernel regression on compact domains.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We analyse the convergence of sampling algorithms for functions in reproducing kernel Hilbert spaces (RKHS). To this end, we discuss approximation properties of kernel regression under minimalistic assumptions on both the kernel and the input data. We first prove error estimates in the kernel's RKHS norm. This leads us to new results concerning uniform convergence of kernel regression on compact domains. For Lipschitz continuous and Hölder continuous kernels, we prove convergence rates.
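To make the setting concrete, here is a minimal Python sketch of kernel regression on irregular sample points, assuming a Gaussian kernel and a small ridge term for numerical stability; the kernel, length-scale, and data below are illustrative and not taken from the paper.

```python
import numpy as np

def gaussian_kernel(x, y, lengthscale=0.2):
    """Gaussian (RBF) kernel matrix k(x_i, y_j); any positive definite kernel would do."""
    return np.exp(-np.subtract.outer(x, y) ** 2 / (2.0 * lengthscale ** 2))

def kernel_regression(x_train, y_train, x_eval, reg=1e-8, lengthscale=0.2):
    """Kernel regressor s(x) = sum_j c_j k(x, x_j) fitted on irregular samples.

    With reg=0 this is plain kernel interpolation; a small reg stabilises the solve.
    """
    K = gaussian_kernel(x_train, x_train, lengthscale)
    coeffs = np.linalg.solve(K + reg * np.eye(len(x_train)), y_train)
    return gaussian_kernel(x_eval, x_train, lengthscale) @ coeffs

def f(x):
    return np.sin(2 * np.pi * x)

# Irregular (non-uniform) sample locations on [0, 1].
rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(0.0, 1.0, size=40))
y_train = f(x_train)

x_eval = np.linspace(0.0, 1.0, 400)
s_eval = kernel_regression(x_train, y_train, x_eval)
print("max uniform error:", np.max(np.abs(s_eval - f(x_eval))))
```

The printed uniform error is the kind of quantity whose convergence, as the samples fill the domain, the paper studies for Lipschitz and Hölder continuous kernels.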
Related papers
- Mirror Descent on Reproducing Kernel Banach Spaces [12.716091600034543]
This paper addresses a learning problem on Banach spaces endowed with a reproducing kernel.
We propose an algorithm that employs gradient steps in the dual space of the Banach space using the reproducing kernel.
To instantiate this algorithm in practice, we introduce a novel family of RKBSs with $p$-norm.
arXiv Detail & Related papers (2024-11-18T02:18:32Z) - Learning Analysis of Kernel Ridgeless Regression with Asymmetric Kernel Learning [33.34053480377887]
This paper enhances kernel ridgeless regression with Locally-Adaptive-Bandwidths (LAB) RBF kernels.
For the first time, we demonstrate that functions learned from LAB RBF kernels belong to an integral space of Reproducing Kernel Hilbert Spaces (RKHSs).
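A minimal sketch of the ingredient named here, an RBF kernel with one bandwidth per center, follows; the per-center bandwidths are set by a simple spacing heuristic rather than learned as in the paper, and all names and parameters are illustrative.

```python
import numpy as np

def lab_rbf(x_eval, centers, bandwidths):
    """Asymmetric RBF matrix with one bandwidth per center (column)."""
    d2 = np.subtract.outer(x_eval, centers) ** 2
    return np.exp(-d2 * bandwidths)  # bandwidths broadcast over columns

rng = np.random.default_rng(1)
centers = np.sort(rng.uniform(0.0, 1.0, 30))
y = np.cos(4 * np.pi * centers)

# Heuristic per-center bandwidths (the paper learns these from data instead).
spacing = np.gradient(centers)
bandwidths = 1.0 / (2.0 * spacing ** 2 + 1e-12)

K = lab_rbf(centers, centers, bandwidths)   # asymmetric kernel matrix
coeffs = np.linalg.solve(K, y)              # ridgeless: no explicit regulariser
x_eval = np.linspace(0.0, 1.0, 200)
y_hat = lab_rbf(x_eval, centers, bandwidths) @ coeffs
```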
arXiv Detail & Related papers (2024-06-03T15:28:12Z) - Characterizing Overfitting in Kernel Ridgeless Regression Through the Eigenspectrum [6.749750044497731]
We prove that tempered overfitting and catastrophic overfitting occur under the sub-Gaussian design assumption.
We also identify that the independence of the features plays an important role in guaranteeing tempered overfitting.
arXiv Detail & Related papers (2024-02-02T10:36:53Z) - Kernel Ridge Regression Inference [7.066496204344619]
We provide uniform inference and confidence bands for kernel ridge regression.
We construct sharp, uniform confidence sets for KRR, which shrink at nearly the minimax rate, for general regressors.
We use our procedure to construct a novel test for match effects in school assignment.
arXiv Detail & Related papers (2023-02-13T18:26:36Z) - Recursive Estimation of Conditional Kernel Mean Embeddings [0.0]
Kernel mean embeddings map probability distributions to elements of a reproducing kernel Hilbert space (RKHS).
We present a new algorithm to estimate the conditional kernel mean map in a Hilbert-space-valued $L^2$ space, that is, in a Bochner space.
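For context, below is a sketch of the standard batch (non-recursive) regularised estimator of a conditional kernel mean embedding; the paper's recursive, Bochner-space formulation is not reproduced here, and the kernel and data are illustrative.

```python
import numpy as np

def rbf(a, b, ls=0.3):
    return np.exp(-np.subtract.outer(a, b) ** 2 / (2 * ls ** 2))

def cme_weights(x_train, x_query, lam=1e-2):
    """Weights of the standard regularised conditional mean embedding estimate."""
    n = len(x_train)
    K = rbf(x_train, x_train)
    return np.linalg.solve(K + n * lam * np.eye(n), rbf(x_train, x_query))

rng = np.random.default_rng(2)
x = rng.uniform(-1.0, 1.0, 200)
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(200)

w = cme_weights(x, np.array([0.5]))   # weights for the query point x* = 0.5
# The embedding of P(Y | X = x*) is sum_i w_i * phi(y_i); any expectation
# E[g(Y) | X = x*] is then approximated by sum_i w_i * g(y_i).
print("estimated E[Y | X = 0.5]:", float(w[:, 0] @ y))
```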
arXiv Detail & Related papers (2023-02-12T16:55:58Z) - Optimal policy evaluation using kernel-based temporal difference methods [78.83926562536791]
We use reproducing kernel Hilbert spaces for estimating the value function of an infinite-horizon discounted Markov reward process.
We derive a non-asymptotic upper bound on the error with explicit dependence on the eigenvalues of the associated kernel operator.
We prove minimax lower bounds over sub-classes of MRPs.
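As a concrete, simplified illustration of value-function estimation for a Markov reward process, here is an LSTD(0) sketch with a small explicit RBF feature basis; this is a standard textbook estimator, not the paper's kernel-based TD method or its error bounds.

```python
import numpy as np

rng = np.random.default_rng(7)
gamma, n_states = 0.9, 5
centers = np.arange(n_states)

def phi(s, ls=1.0):
    """Gaussian RBF features centred at each state (a small explicit kernel basis)."""
    return np.exp(-(s - centers) ** 2 / (2 * ls ** 2))

def rollout(T=5000):
    """Simulate a simple MRP: random walk on {0,...,4}, reward 1 in state 4."""
    s, transitions = 2, []
    for _ in range(T):
        s_next = int(np.clip(s + rng.choice([-1, 1]), 0, n_states - 1))
        transitions.append((s, float(s == n_states - 1), s_next))
        s = s_next
    return transitions

# LSTD(0): solve A w = b with A = sum phi(s)(phi(s) - gamma*phi(s'))^T, b = sum phi(s)*r.
A = np.zeros((n_states, n_states))
b = np.zeros(n_states)
for s, r, s_next in rollout():
    A += np.outer(phi(s), phi(s) - gamma * phi(s_next))
    b += phi(s) * r
w = np.linalg.solve(A + 1e-6 * np.eye(n_states), b)
print("estimated values:", np.round([phi(s) @ w for s in range(n_states)], 2))
```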
arXiv Detail & Related papers (2021-09-24T14:48:20Z) - Uniform Function Estimators in Reproducing Kernel Hilbert Spaces [0.0]
This paper addresses the regression problem of reconstructing functions that are observed with superimposed errors at random locations.
It is demonstrated that the estimator, which is often derived by employing Gaussian random fields, converges in the mean norm of the reproducing kernel Hilbert space to the conditional expectation.
arXiv Detail & Related papers (2021-08-16T08:13:28Z) - Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z) - Faster Convergence of Stochastic Gradient Langevin Dynamics for Non-Log-Concave Sampling [110.88857917726276]
We provide a new convergence analysis of stochastic gradient Langevin dynamics (SGLD) for sampling from a class of distributions that can be non-log-concave.
At the core of our approach is a novel conductance analysis of SGLD using an auxiliary time-reversible Markov Chain.
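A minimal SGLD sketch on a deliberately non-log-concave (bimodal) posterior is shown below; it illustrates the iteration being analysed, not the paper's conductance argument, and the model, batch size, and step size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data whose posterior over theta is bimodal (non-log-concave):
# y_i ~ N(theta^2, 1) with true theta = 1, so theta = -1 fits equally well.
N, true_theta = 500, 1.0
y = true_theta ** 2 + rng.standard_normal(N)

def grad_log_prior(theta, var=10.0):
    return -theta / var

def grad_log_lik(theta, y_batch):
    # d/dtheta log N(y | theta^2, 1) = (y - theta^2) * 2 * theta
    return np.sum((y_batch - theta ** 2) * 2.0 * theta)

def sgld(n_iter=20000, batch_size=32, step=1e-4):
    theta, samples = 0.1, []
    for _ in range(n_iter):
        batch = y[rng.integers(0, N, batch_size)]
        grad = grad_log_prior(theta) + (N / batch_size) * grad_log_lik(theta, batch)
        theta += 0.5 * step * grad + np.sqrt(step) * rng.standard_normal()
        samples.append(theta)
    return np.array(samples)

samples = sgld()
print("posterior sample mean of theta^2:", np.mean(samples ** 2))
```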
arXiv Detail & Related papers (2020-10-19T15:23:18Z) - Optimal Rates for Averaged Stochastic Gradient Descent under Neural Tangent Kernel Regime [50.510421854168065]
We show that averaged stochastic gradient descent can achieve the minimax optimal convergence rate.
We show that the target function specified by the NTK of a ReLU network can be learned at the optimal convergence rate.
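For illustration, here is a sketch of Polyak-averaged SGD on a plain least-squares problem in a fixed feature space; the paper's NTK analysis concerns the analogous problem with neural tangent features of a ReLU network, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)

# Least-squares regression in a fixed feature space.
n, d = 2000, 50
X = rng.standard_normal((n, d))
w_star = rng.standard_normal(d)
y = X @ w_star + 0.1 * rng.standard_normal(n)

def averaged_sgd(X, y, step=0.01, n_epochs=5):
    n, d = X.shape
    w, w_avg, count = np.zeros(d), np.zeros(d), 0
    for _ in range(n_epochs):
        for i in rng.permutation(n):
            grad = (X[i] @ w - y[i]) * X[i]   # single-sample squared-loss gradient
            w -= step * grad
            count += 1
            w_avg += (w - w_avg) / count      # running (Polyak) average of iterates
    return w_avg

w_hat = averaged_sgd(X, y)
print("parameter error of averaged iterate:", np.linalg.norm(w_hat - w_star))
```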
arXiv Detail & Related papers (2020-06-22T14:31:37Z) - Metrizing Weak Convergence with Maximum Mean Discrepancies [88.54422104669078]
This paper characterizes the maximum mean discrepancies (MMD) that metrize the weak convergence of probability measures for a wide class of kernels.
We prove that, on a locally compact, non-compact, Hausdorff space, the MMD of a bounded continuous Borel measurable kernel k, metrizes the weak convergence of probability measures if and only if k is continuous.
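The standard unbiased estimator of the squared MMD with a Gaussian kernel, which underlies such weak-convergence results, can be sketched as follows (bandwidth and sample sizes are illustrative):

```python
import numpy as np

def rbf(a, b, ls=1.0):
    return np.exp(-np.subtract.outer(a, b) ** 2 / (2 * ls ** 2))

def mmd2_unbiased(x, y, ls=1.0):
    """Unbiased estimate of MMD^2(P, Q) from samples x ~ P and y ~ Q."""
    m, n = len(x), len(y)
    Kxx, Kyy, Kxy = rbf(x, x, ls), rbf(y, y, ls), rbf(x, y, ls)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * Kxy.mean()

rng = np.random.default_rng(4)
p = rng.standard_normal(1000)            # samples from P = N(0, 1)
q = 0.5 + rng.standard_normal(1000)      # samples from Q = N(0.5, 1)
print("MMD^2(P, P'):", mmd2_unbiased(p, rng.standard_normal(1000)))  # close to 0
print("MMD^2(P, Q): ", mmd2_unbiased(p, q))                          # clearly > 0
```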
arXiv Detail & Related papers (2020-06-16T15:49:33Z) - The Convergence Indicator: Improved and completely characterized parameter bounds for actual convergence of Particle Swarm Optimization [68.8204255655161]
We introduce a new convergence indicator that can be used to determine whether the particles will eventually converge to a single point or diverge.
Using this convergence indicator we provide the actual bounds completely characterizing parameter regions that lead to a converging swarm.
arXiv Detail & Related papers (2020-06-06T19:08:05Z) - RFN: A Random-Feature Based Newton Method for Empirical Risk Minimization in Reproducing Kernel Hilbert Spaces [14.924672048447334]
Large-scale finite-sum problems can be solved using efficient variants of the Newton method, where the Hessian is approximated via sub-samples of data.
In this paper, we observe that for this class of problems, one can naturally use kernel approximation to speed up the Newton method.
We provide a novel second-order algorithm that enjoys local superlinear convergence and global linear convergence.
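A sketch of the generic kernel-approximation idea, random Fourier features for a Gaussian kernel, is given below; the paper's actual second-order algorithm and its convergence guarantees are more involved, and the parameters here are illustrative.

```python
import numpy as np

def random_fourier_features(x, n_features=200, ls=0.5, seed=0):
    """Random Fourier features z(x) with E[z(x) . z(y)] ~= Gaussian kernel k(x, y)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(n_features) / ls          # spectral frequencies
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)     # random phases
    return np.sqrt(2.0 / n_features) * np.cos(np.outer(x, w) + b)

rng = np.random.default_rng(5)
x = rng.uniform(-1.0, 1.0, 100)
Z = random_fourier_features(x)                        # (100, 200) feature matrix
K_approx = Z @ Z.T                                    # low-rank kernel approximation
K_exact = np.exp(-np.subtract.outer(x, x) ** 2 / (2 * 0.5 ** 2))
print("max kernel approximation error:", np.abs(K_approx - K_exact).max())

# A Newton-type method can then work with the d x d matrix Z.T @ Z (d = 200)
# instead of the full n x n kernel matrix, which is the kind of speed-up such
# random-feature approximations make possible.
```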
arXiv Detail & Related papers (2020-02-12T01:14:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.