Spectral bounds of the $\varepsilon$-entropy of kernel classes
- URL: http://arxiv.org/abs/2204.04512v1
- Date: Sat, 9 Apr 2022 16:45:22 GMT
- Title: Spectral bounds of the $\varepsilon$-entropy of kernel classes
- Authors: Rustem Takhanov
- Abstract summary: We develop new bounds on the $\varepsilon$-entropy of a unit ball in a reproducing kernel Hilbert space induced by some Mercer kernel $K$.
In our approach we exploit the ellipsoidal structure of the unit ball in the RKHS and previous work on covering numbers of an ellipsoid in Euclidean space.
- Score: 6.028247638616059
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop new upper and lower bounds on the $\varepsilon$-entropy of a unit
ball in a reproducing kernel Hilbert space induced by some Mercer kernel $K$.
Our bounds are based on the behaviour of the eigenvalues of the corresponding
integral operator. In our approach we exploit the ellipsoidal structure of the
unit ball in the RKHS and earlier work of Dumer, Pinsker and Prelov on covering
numbers of ellipsoids in Euclidean space.
We present a number of applications of our main bound, such as its tightness
for the practically important case of the Gaussian kernel. Further, we develop a
series of lower bounds on the $\varepsilon$-entropy that can be established
from a connection between covering numbers of a ball in an RKHS and the
quantization of the Gaussian random field that corresponds to the kernel $K$
via the Kosambi-Karhunen-Loève transform.
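For orientation, both ingredients admit a compact statement in standard Mercer notation; the displays below are a paraphrase under the usual assumptions, not excerpts from the paper. If $K(x,y)=\sum_{k\ge 1}\lambda_k e_k(x)e_k(y)$ is the Mercer expansion, with $\lambda_1\ge\lambda_2\ge\dots$ the eigenvalues of the associated integral operator, then the unit ball of the RKHS is the infinite-dimensional ellipsoid

  $B_K = \{\, f=\sum_{k\ge 1} a_k e_k \;:\; \sum_{k\ge 1} a_k^2/\lambda_k \le 1 \,\}$,

with semi-axes $\sqrt{\lambda_k}$ in $L^2$, and the $\varepsilon$-entropy is $H_\varepsilon(B_K)=\log_2 N_\varepsilon(B_K)$, where $N_\varepsilon(B_K)$ is the minimal number of $\varepsilon$-balls needed to cover $B_K$; this is what lets covering bounds for Euclidean ellipsoids, such as those of Dumer, Pinsker and Prelov, be applied. For the lower bounds, the centred Gaussian random field with covariance $K$ has the Kosambi-Karhunen-Loève expansion

  $Z(x) = \sum_{k\ge 1} \sqrt{\lambda_k}\,\xi_k\, e_k(x)$, with $\xi_k \sim \mathcal{N}(0,1)$ i.i.d.,

so the number of codewords needed to quantize $Z$ to accuracy $\varepsilon$ can be traded against covering numbers of $B_K$.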
Related papers
- Gaussian kernel expansion with basis functions uniformly bounded in $\mathcal{L}_{\infty}$ [0.6138671548064355]
Kernel expansions are a topic of considerable interest in machine learning.
Recent work in the literature has derived some of these results by assuming uniformly bounded basis functions in $\mathcal{L}_\infty$.
Our main result is the construction on $\mathbb{R}^2$ of a Gaussian kernel expansion with weights in $\ell_p$ for any $p>1$.
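Schematically (our notation, paraphrasing the summary rather than quoting the paper), such an expansion has the form

  $K(x,y) = e^{-\|x-y\|^2/(2\sigma^2)} = \sum_{i\ge 1} w_i\,\phi_i(x)\,\phi_i(y)$, with $\sup_i \|\phi_i\|_\infty < \infty$ and $(w_i)\in\ell_p$ for any $p>1$.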
arXiv Detail & Related papers (2024-10-02T10:10:30Z)
- KPZ scaling from the Krylov space [83.88591755871734]
Recently, a superdiffusion exhibiting the Kardar-Parisi-Zhang scaling in late-time correlators and autocorrelators has been reported.
Inspired by these results, we explore the KPZ scaling in correlation functions using their realization in the Krylov operator basis.
arXiv Detail & Related papers (2024-06-04T20:57:59Z)
- Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z)
- Kernel $\epsilon$-Greedy for Contextual Bandits [4.1347433277076036]
We consider a kernelized version of the $\epsilon$-greedy strategy for contextual bandits.
We propose an online weighted kernel ridge regression estimator for the reward functions.
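As a rough illustration of the two ingredients named in this entry, here is a minimal sketch of batch kernel ridge regression for reward estimation combined with $\epsilon$-greedy selection; all names, the Gaussian kernel choice, and the batch (rather than online, weighted) formulation are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

def gaussian_kernel(A, B, bandwidth=1.0):
    # Pairwise Gaussian kernel matrix between rows of A and rows of B.
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def fit_reward_model(contexts, rewards, reg=1.0):
    # Kernel ridge regression: solve (K + reg*I) alpha = y.
    K = gaussian_kernel(contexts, contexts)
    return np.linalg.solve(K + reg * np.eye(len(contexts)), rewards)

def predict_rewards(alpha, contexts, query):
    # Representer form: f(x) = sum_i alpha_i k(x, x_i).
    return gaussian_kernel(query, contexts) @ alpha

def epsilon_greedy_arm(predicted_rewards, epsilon, rng):
    # Explore uniformly with probability epsilon, else exploit the estimate.
    if rng.random() < epsilon:
        return int(rng.integers(len(predicted_rewards)))
    return int(np.argmax(predicted_rewards))
```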
arXiv Detail & Related papers (2023-06-29T22:48:34Z)
- Gaussian random field approximation via Stein's method with applications to wide random neural networks [20.554836643156726]
We develop a novel Gaussian smoothing technique that allows us to transfer a bound in a smoother metric to the $W_1$ distance.
We obtain the first bounds on the Gaussian random field approximation of wide random neural networks.
Our bounds are explicitly expressed in terms of the widths of the network and moments of the random weights.
arXiv Detail & Related papers (2023-06-28T15:35:10Z)
- A Nearly Tight Bound for Fitting an Ellipsoid to Gaussian Random Points [50.90125395570797]
This nearly establishes, within logarithmic factors, a conjecture of Saunderson, Chandrasekaran, Parrilo and Willsky (2012).
The latter conjecture has attracted significant attention over the past decade, due to its connections to machine learning and sum-of-squares lower bounds for certain statistical problems.
arXiv Detail & Related papers (2022-12-21T17:48:01Z)
- Sobolev Spaces, Kernels and Discrepancies over Hyperspheres [4.521119623956821]
This work provides theoretical foundations for kernel methods in the hyperspherical context.
We characterise the native spaces (reproducing kernel Hilbert spaces) and the Sobolev spaces associated with kernels defined over hyperspheres.
Our results have direct consequences for kernel cubature, determining the rate of convergence of the worst case error, and expanding the applicability of cubature algorithms.
arXiv Detail & Related papers (2022-11-16T20:31:38Z)
- Continuous percolation in a Hilbert space for a large system of qubits [58.720142291102135]
The percolation transition is defined through the appearance of the infinite cluster.
We show that the exponentially increasing dimensionality of the Hilbert space makes its covering by finite-size hyperspheres inefficient.
Our approach to the percolation transition in compact metric spaces may prove useful for its rigorous treatment in other contexts.
arXiv Detail & Related papers (2022-10-15T13:53:21Z)
- High-Dimensional Gaussian Process Inference with Derivatives [90.8033626920884]
We show that in the low-data regime $N<D$, the Gram matrix can be decomposed in a manner that reduces the cost of inference to $\mathcal{O}(N^2D + (N^2)^3)$.
We demonstrate this potential in a variety of tasks relevant for machine learning, such as optimization and Hamiltonian Monte Carlo with predictive gradients.
arXiv Detail & Related papers (2021-02-15T13:24:41Z)
- Deep Neural Tangent Kernel and Laplace Kernel Have the Same RKHS [10.578438886506076]
We prove that the exponential power kernel with a smaller power (making the kernel less smooth) leads to a larger RKHS.
We also prove that the reproducing kernel Hilbert spaces (RKHS) of a deep neural tangent kernel and the Laplace kernel include the same set of functions.
arXiv Detail & Related papers (2020-09-22T16:58:26Z)
- Kernel-Based Reinforcement Learning: A Finite-Time Analysis [53.47210316424326]
We introduce Kernel-UCBVI, a model-based optimistic algorithm that leverages the smoothness of the MDP and a non-parametric kernel estimator of the rewards.
We empirically validate our approach in continuous MDPs with sparse rewards.
arXiv Detail & Related papers (2020-04-12T12:23:46Z)
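To make the last summary concrete, below is a minimal sketch of a smoothed, optimistic reward estimate of the kind it describes (a Nadaraya-Watson style kernel average plus an exploration bonus); the function name, the Gaussian weighting, and the bonus form are illustrative assumptions, not Kernel-UCBVI itself.

```python
import numpy as np

def optimistic_kernel_reward(query, states, rewards, bandwidth=0.5, bonus_scale=1.0):
    # Gaussian weights measuring similarity of visited states to the query.
    dists = np.linalg.norm(states - query, axis=1)
    weights = np.exp(-(dists / bandwidth) ** 2)
    total = weights.sum()
    # Kernel-smoothed reward estimate (0 prior when no mass is nearby).
    mean = weights @ rewards / total if total > 0 else 0.0
    # Optimism: the bonus shrinks as the effective sample count grows.
    return mean + bonus_scale / np.sqrt(1.0 + total)

# Toy usage on random two-dimensional states.
rng = np.random.default_rng(0)
states = rng.uniform(size=(50, 2))
rewards = np.sin(3.0 * states[:, 0])
print(optimistic_kernel_reward(np.array([0.3, 0.7]), states, rewards))
```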