Strong Uniform Consistency with Rates for Kernel Density Estimators with
General Kernels on Manifolds
- URL: http://arxiv.org/abs/2007.06408v2
- Date: Tue, 8 Jun 2021 16:45:55 GMT
- Title: Strong Uniform Consistency with Rates for Kernel Density Estimators with
General Kernels on Manifolds
- Authors: Hau-Tieng Wu and Nan Wu
- Abstract summary: We show how to handle kernel density estimation with intricate kernels not designed by the user.
The isotropic kernels considered in this paper differ from the Vapnik-Chervonenkis-class kernels frequently considered in the statistics community.
- Score: 11.927892660941643
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When analyzing modern machine learning algorithms, we may need to handle
kernel density estimation (KDE) with intricate kernels that are not designed by
the user and might even be irregular and asymmetric. To handle this emerging
challenge, we provide a strong uniform consistency result with the $L^\infty$
convergence rate for KDE on Riemannian manifolds with Riemann integrable
kernels (in the ambient Euclidean space). We also provide an $L^1$ consistency
result for kernel density estimation on Riemannian manifolds with Lebesgue
integrable kernels. The isotropic kernels considered in this paper are
different from the Vapnik-Chervonenkis-class kernels frequently considered in
the statistics community; we illustrate the difference when applying them to
estimate the probability density function. Moreover, we elaborate on the
delicate difference between designing the kernel on the intrinsic manifold and
in the ambient Euclidean space, both of which might be encountered in practice.
Finally, we prove a necessary and sufficient condition for an isotropic kernel
to be Riemann integrable on a submanifold of Euclidean space.
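To make the setting concrete, here is a minimal sketch of KDE with a kernel that is merely Riemann integrable: a boxcar (indicator) kernel applied to ambient Euclidean distances between points on the unit circle embedded in R^2. The bandwidth, sample size, and von Mises sampling density are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def boxcar_kde(query, samples, eps):
    """KDE on a 1-dimensional manifold (here: a curve in R^2) with a
    boxcar kernel evaluated on ambient Euclidean distances. The boxcar
    is Riemann integrable but discontinuous at the ball boundary. For a
    d-dimensional manifold, scale by eps**d and renormalize the kernel."""
    # Pairwise ambient distances, shape (n_query, n_samples).
    dists = np.linalg.norm(query[:, None, :] - samples[None, :, :], axis=-1)
    K = (dists <= eps).astype(float)       # K(u) = 1{|u| <= 1}, u = dist/eps
    return 0.5 * K.mean(axis=1) / eps      # 1/2 makes K integrate to 1 on R

# Points on the unit circle in R^2 with von Mises-distributed angles.
rng = np.random.default_rng(0)
theta = rng.vonmises(mu=0.0, kappa=2.0, size=5000)
X = np.column_stack([np.cos(theta), np.sin(theta)])

t = np.linspace(-np.pi, np.pi, 7)
Q = np.column_stack([np.cos(t), np.sin(t)])
print(boxcar_kde(Q, X, eps=0.1))           # density w.r.t. arclength on S^1
```

For small eps the chordal (ambient) distance matches the geodesic distance to first order, which is why a kernel evaluated in the ambient Euclidean space can still recover the density with respect to the arclength measure on the circle.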
Related papers
- Variable Hyperparameterized Gaussian Kernel using Displaced Squeezed Vacuum State [2.1408617023874443]
A multimode coherent state can generate a Gaussian kernel with a constant hyperparameter.
This constant hyperparameter has limited the application of the Gaussian kernel to complex learning problems.
We realize a variable-hyperparameter kernel with a multimode displaced squeezed vacuum state.
arXiv Detail & Related papers (2024-03-18T08:25:56Z)
- Geometric Learning with Positively Decomposable Kernels [6.5497574505866885]
We propose the use of reproducing kernel Krein space (RKKS) based methods, which require only kernels that admit a positive decomposition.
We show that one does not need to access this decomposition in order to learn in RKKS.
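For reference, here is a small sketch of what a positive decomposition looks like at the Gram-matrix level: a symmetric indefinite matrix splits into a difference of two positive semidefinite parts via its eigendecomposition. The paper's point is that learning in an RKKS does not require access to such a decomposition; the function name and the indefinite kernel below are illustrative assumptions.

```python
import numpy as np

def positive_decomposition(G):
    """Split a symmetric (possibly indefinite) Gram matrix G into
    G = G_plus - G_minus with both parts positive semidefinite,
    by separating positive and negative eigenvalues."""
    w, V = np.linalg.eigh(G)
    G_plus = (V * np.maximum(w, 0.0)) @ V.T
    G_minus = (V * np.maximum(-w, 0.0)) @ V.T
    return G_plus, G_minus

# Hypothetical indefinite kernel on R: a difference of two Gaussians.
x = np.linspace(-2, 2, 50)[:, None]
sq = (x - x.T) ** 2
G = np.exp(-sq / 0.5) - 0.8 * np.exp(-sq / 2.0)
Gp, Gm = positive_decomposition(G)
print(np.allclose(G, Gp - Gm))  # True
```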
arXiv Detail & Related papers (2023-10-20T21:18:04Z)
- Curvature-Independent Last-Iterate Convergence for Games on Riemannian Manifolds [77.4346324549323]
We show that a step size agnostic to the curvature of the manifold achieves a curvature-independent and linear last-iterate convergence rate.
To the best of our knowledge, the possibility of curvature-independent rates and/or last-iterate convergence has not been considered before.
arXiv Detail & Related papers (2023-06-29T01:20:44Z)
- Gaussian Processes on Distributions based on Regularized Optimal Transport [2.905751301655124]
We present a novel kernel over the space of probability measures based on the dual formulation of optimal regularized transport.
We prove that this construction yields a valid kernel by using Hilbert norms.
We provide theoretical guarantees on the behaviour of a Gaussian process based on this kernel.
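The paper's construction goes through the dual formulation and Hilbert norms; as a rough illustration of the general idea only, the sketch below builds a heuristic substitution kernel k(mu, nu) = exp(-S_eps(mu, nu) / sigma^2) on empirical measures from an entropy-regularized OT (Sinkhorn) cost. This heuristic is not the paper's kernel and is not guaranteed to be positive definite; all names and parameters are assumptions.

```python
import numpy as np
from scipy.special import logsumexp

def sinkhorn_cost(X, Y, eps=0.1, iters=200):
    """Entropy-regularized OT cost between uniform empirical measures on
    point clouds X (n,d) and Y (m,d), via log-domain Sinkhorn updates."""
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)   # squared-distance cost
    loga, logb = -np.log(len(X)), -np.log(len(Y))        # uniform log-weights
    f, g = np.zeros(len(X)), np.zeros(len(Y))            # dual potentials
    for _ in range(iters):
        f = -eps * logsumexp((g[None, :] - C) / eps + logb, axis=1)
        g = -eps * logsumexp((f[:, None] - C) / eps + loga, axis=0)
    P = np.exp((f[:, None] + g[None, :] - C) / eps + loga + logb)
    return (P * C).sum()

def ot_substitution_kernel(X, Y, eps=0.1, sigma=1.0):
    """Heuristic kernel on empirical measures from the debiased
    (Sinkhorn-divergence) cost; NOT the paper's construction."""
    s = (sinkhorn_cost(X, Y, eps)
         - 0.5 * sinkhorn_cost(X, X, eps)
         - 0.5 * sinkhorn_cost(Y, Y, eps))
    return np.exp(-s / sigma**2)

rng = np.random.default_rng(0)
mu = rng.normal(0.0, 1.0, size=(60, 2))       # sample cloud for measure mu
nu = rng.normal(0.5, 1.0, size=(80, 2))       # sample cloud for measure nu
print(ot_substitution_kernel(mu, nu))
```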
arXiv Detail & Related papers (2022-10-12T20:30:23Z)
- Meta-Learning Hypothesis Spaces for Sequential Decision-making [79.73213540203389]
We propose to meta-learn a kernel from offline data (Meta-KeL).
Under mild conditions, we guarantee that our estimated RKHS yields valid confidence sets.
We also empirically evaluate the effectiveness of our approach on a Bayesian optimization task.
arXiv Detail & Related papers (2022-02-01T17:46:51Z)
- A Note on Optimizing Distributions using Kernel Mean Embeddings [94.96262888797257]
Kernel mean embeddings represent probability measures by their infinite-dimensional mean embeddings in a reproducing kernel Hilbert space.
We show that when the kernel is characteristic, distributions with a kernel sum-of-squares density are dense.
We provide algorithms to optimize such distributions in the finite-sample setting.
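For orientation, the sketch below evaluates the two objects the summary mentions: the empirical kernel mean embedding mu_hat(y) = (1/n) sum_i k(x_i, y), and an (unnormalized) kernel sum-of-squares density p(y) = phi(y)^T A phi(y) with phi(y)_i = k(z_i, y) and A positive semidefinite. The Gaussian kernel, anchors, and PSD matrix are illustrative choices; the paper's optimization algorithms are not reproduced here.

```python
import numpy as np

def gauss_gram(x, y, s=0.5):
    """Gaussian kernel matrix between point sets x (n,d) and y (m,d)."""
    sq = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * s**2))

def empirical_mean_embedding(samples, query):
    """Evaluate mu_hat(y) = (1/n) sum_i k(x_i, y) at the query points."""
    return gauss_gram(samples, query).mean(axis=0)

def kernel_sos_density(A, anchors, query):
    """Unnormalized kernel sum-of-squares density phi(y)^T A phi(y)."""
    Phi = gauss_gram(anchors, query)            # (n_anchors, n_query)
    return np.einsum('iq,ij,jq->q', Phi, A, Phi)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))                   # samples defining the embedding
Y = np.linspace(-3, 3, 5)[:, None]              # query points
print(empirical_mean_embedding(X, Y))

Z = np.linspace(-2, 2, 10)[:, None]             # anchor points
B = rng.normal(size=(10, 10))
A = B @ B.T                                     # a PSD coefficient matrix
print(kernel_sos_density(A, Z, Y))
```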
arXiv Detail & Related papers (2021-06-18T08:33:45Z)
- Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high-fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
- Flow-based Kernel Prior with Application to Blind Super-Resolution [143.21527713002354]
Kernel estimation is generally one of the key problems in blind image super-resolution (SR).
This paper proposes a normalizing flow-based kernel prior (FKP) for kernel modeling.
Experiments on synthetic and real-world images demonstrate that the proposed FKP can significantly improve the kernel estimation accuracy.
arXiv Detail & Related papers (2021-03-29T22:37:06Z)
- Metrizing Weak Convergence with Maximum Mean Discrepancies [88.54422104669078]
This paper characterizes the maximum mean discrepancies (MMD) that metrize the weak convergence of probability measures for a wide class of kernels.
We prove that, on a locally compact, non-compact, Hausdorff space, the MMD of a bounded continuous Borel measurable kernel metrizes the weak convergence of probability measures if and only if the kernel is continuous and integrally strictly positive definite.
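As a concrete companion to this statement, here is the standard unbiased estimator of squared MMD with a Gaussian kernel, which on R^d is bounded, continuous, and integrally strictly positive definite; the sample sizes and bandwidth below are arbitrary illustrative choices.

```python
import numpy as np

def mmd2_unbiased(X, Y, s=1.0):
    """Unbiased estimate of squared MMD between samples X (n,d) and
    Y (m,d) under a Gaussian kernel (bounded, continuous, and
    integrally strictly positive definite on R^d)."""
    def k(a, b):
        sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * s**2))
    n, m = len(X), len(Y)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    # Diagonal terms are excluded from Kxx and Kyy for unbiasedness.
    return ((Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))
            + (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
            - 2.0 * Kxy.mean())

rng = np.random.default_rng(0)
print(mmd2_unbiased(rng.normal(size=(500, 1)), rng.normal(size=(500, 1))))
```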
arXiv Detail & Related papers (2020-06-16T15:49:33Z)
- Kernel interpolation with continuous volume sampling [11.172382217477129]
We introduce and analyse continuous volume sampling (VS), the continuous counterpart of a discrete distribution introduced in 2006.
We prove almost optimal bounds for translates and quadrature under VS.
We emphasize that, unlike previous randomized approaches that rely on regularized leverage scores or determinantal point processes, evaluating the pdf of VS only requires pointwise evaluations of the kernel.
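A minimal sketch of that last property: the unnormalized joint density of a point configuration under volume sampling is the determinant of its kernel Gram matrix, so evaluating it needs nothing beyond pointwise kernel evaluations. The kernel and domain below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def vs_unnormalized_density(points, kernel):
    """Unnormalized volume-sampling density of a configuration:
    the determinant of its kernel Gram matrix, computed from
    pointwise kernel evaluations only (no leverage scores,
    no spectral access to the kernel operator)."""
    G = kernel(points[:, None, :], points[None, :, :])
    return np.linalg.det(G)

# Hypothetical Gaussian kernel on [0, 1].
gauss = lambda a, b: np.exp(-((a - b) ** 2).sum(-1) / 0.02)
pts = np.random.default_rng(1).uniform(size=(5, 1))
print(vs_unnormalized_density(pts, gauss))
```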
arXiv Detail & Related papers (2020-02-22T10:34:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.