Positively Weighted Kernel Quadrature via Subsampling
- URL: http://arxiv.org/abs/2107.09597v1
- Date: Tue, 20 Jul 2021 16:18:56 GMT
- Title: Positively Weighted Kernel Quadrature via Subsampling
- Authors: Satoshi Hayakawa, Harald Oberhauser, Terry Lyons
- Abstract summary: We study kernel quadrature rules with positive weights for probability measures on general domains.
This results in effective algorithms to construct kernel quadrature rules with positive weights and small worst-case error.
- Score: 8.250374560598495
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study kernel quadrature rules with positive weights for probability
measures on general domains. Our theoretical analysis combines the spectral
properties of the kernel with random sampling of points. This results in
effective algorithms to construct kernel quadrature rules with positive weights
and small worst-case error. Besides additional robustness, our numerical
experiments indicate that this can achieve fast convergence rates that compete
with the optimal bounds in well-known examples.
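The construction can be sketched in a few lines: draw a large i.i.d. candidate pool from the target measure, subsample quadrature nodes, and fit positive weights by matching the empirical kernel mean embedding. The toy version below (Gaussian kernel, standard normal target) uses plain uniform subsampling and a nonnegative least-squares fit as stand-ins for the paper's spectrally informed construction.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

def k(x, y):
    # Gaussian kernel on R
    return np.exp(-0.5 * (x[:, None] - y[None, :]) ** 2)

# Large candidate pool drawn i.i.d. from the target measure N(0, 1)
M, n = 2000, 20
pool = rng.standard_normal(M)

# Subsample n quadrature nodes uniformly from the pool
nodes = rng.choice(pool, size=n, replace=False)

# Empirical kernel mean embedding of the pool, evaluated at the pool
z = k(pool, pool).mean(axis=1)

# Nonnegative least squares: positive weights w whose rule
# sum_j w_j k(., x_j) matches the embedding on the pool
w, _ = nnls(k(pool, nodes), z)

# Integrate f(x) = exp(-x^2/2) = k(x, 0), which lies in the RKHS;
# the exact integral against N(0, 1) is 1/sqrt(2) ≈ 0.707
est = float(w @ np.exp(-0.5 * nodes ** 2))
```

Positivity of the weights comes for free from the NNLS step; what the paper's analysis controls is the quality of the subsampled nodes, for which uniform sampling here is only the crudest choice.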
Related papers
- A Unified Theory of Stochastic Proximal Point Methods without Smoothness [52.30944052987393]
Proximal point methods have attracted considerable interest owing to their numerical stability and robustness against imperfect tuning.
This paper presents a comprehensive analysis of a broad range of variants of the stochastic proximal point method (SPPM).
arXiv Detail & Related papers (2024-05-24T21:09:19Z)
- Efficient Numerical Integration in Reproducing Kernel Hilbert Spaces via Leverage Scores Sampling [16.992480926905067]
We consider the problem of approximating integrals with respect to a target probability measure using only pointwise evaluations of the integrand.
We propose an efficient procedure which exploits a small i.i.d. random subset of $m$ of the initial $n$ observations, drawn either uniformly or using approximate leverage scores.
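As an illustration of the leverage-score route (not the paper's exact procedure, which uses approximate scores for efficiency), one can compute exact ridge leverage scores from a kernel matrix and sample landmarks proportionally:

```python
import numpy as np

rng = np.random.default_rng(0)

# n initial observations from the target measure; hypothetical 1-D setup
n, lam, m = 500, 1e-2, 20
X = rng.standard_normal(n)
K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2)  # Gaussian kernel matrix

# Ridge leverage scores: diag((K + n*lam*I)^{-1} K), each in (0, 1)
scores = np.diag(np.linalg.solve(K + n * lam * np.eye(n), K))

# Sample m landmark points with probability proportional to their scores
probs = scores / scores.sum()
idx = rng.choice(n, size=m, replace=False, p=probs)
landmarks = X[idx]
```

The sum of the scores is the effective dimension of the problem at regularization level lam, which is what makes score-proportional sampling so much more economical than uniform sampling in ill-conditioned cases.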
arXiv Detail & Related papers (2023-11-22T17:44:18Z)
- Sample Complexity for Quadratic Bandits: Hessian Dependent Bounds and Optimal Algorithms [64.10576998630981]
We show the first tight characterization of the optimal Hessian-dependent sample complexity.
A Hessian-independent algorithm universally achieves the optimal sample complexities for all Hessian instances.
The optimal sample complexities achieved by our algorithm remain valid for heavy-tailed noise distributions.
arXiv Detail & Related papers (2023-06-21T17:03:22Z)
- Kernel quadrature with randomly pivoted Cholesky [0.0]
This paper presents new quadrature rules for functions in a reproducing kernel Hilbert space using nodes drawn by a sampling algorithm known as randomly pivoted Cholesky.
Results show that randomly pivoted Cholesky is fast and achieves comparable quadrature error rates to more computationally expensive quadrature schemes.
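The sampler itself is short. Below is a minimal sketch of randomly pivoted Cholesky on a Gaussian kernel matrix built from synthetic 1-D data; each pivot is drawn with probability proportional to the diagonal of the current residual:

```python
import numpy as np

rng = np.random.default_rng(0)

def rpcholesky(K, k):
    """Rank-k factor F with K ≈ F F^T; each pivot is sampled with
    probability proportional to the diagonal of the current residual."""
    n = K.shape[0]
    F = np.zeros((n, k))
    d = np.array(np.diag(K), dtype=float)      # residual diagonal
    for i in range(k):
        p = rng.choice(n, p=d / d.sum())       # random pivot
        g = K[:, p] - F[:, :i] @ F[p, :i]      # residual column at pivot
        F[:, i] = g / np.sqrt(g[p])
        d = np.maximum(d - F[:, i] ** 2, 0.0)  # update residual diagonal
    return F

# Gaussian kernel matrix on synthetic 1-D data (illustrative setup)
X = rng.standard_normal(200)
K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2)

F = rpcholesky(K, 15)
rel_err = np.linalg.norm(K - F @ F.T) / np.linalg.norm(K)
```

Only k columns of K are ever read, so the cost is O(nk^2) rather than the O(n^3) of a full eigendecomposition.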
arXiv Detail & Related papers (2023-06-06T18:33:40Z)
- Sampling-based Nyström Approximation and Kernel Quadrature [8.658596218544774]
We analyze the Nyström approximation of a positive definite kernel associated with a probability measure.
We first prove an improved error bound for the conventional Nyström approximation with i.i.d. sampling and singular-value decomposition.
We then introduce a refined selection of subspaces in Nyström approximation with theoretical guarantees that is applicable to non-i.i.d. landmark points.
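For reference, the conventional Nyström approximation with i.i.d. (here uniform) landmark points is a one-liner; the synthetic Gaussian-kernel setup below is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaussian kernel matrix on synthetic 1-D data
N, m = 300, 25
X = rng.standard_normal(N)
K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2)

# i.i.d. landmark selection: m points chosen uniformly at random
S = rng.choice(N, size=m, replace=False)

# Nystrom approximation K ≈ K[:, S] K[S, S]^+ K[S, :]
C = K[:, S]
W = K[np.ix_(S, S)]
K_nys = C @ np.linalg.pinv(W) @ C.T

rel_err = np.linalg.norm(K - K_nys) / np.linalg.norm(K)
```

The error bounds discussed above quantify how rel_err depends on m, the sampling distribution of S, and the spectral decay of the kernel.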
arXiv Detail & Related papers (2023-01-23T16:05:56Z)
- Targeted Separation and Convergence with Kernel Discrepancies [61.973643031360254]
Kernel-based discrepancy measures are required to (i) separate a target P from other probability measures or (ii) control weak convergence to P.
In this article we derive new sufficient and necessary conditions to ensure (i) and (ii).
For MMDs on separable metric spaces, we characterize those kernels that separate Bochner embeddable measures and introduce simple conditions for separating all measures with unbounded kernels.
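A concrete instance of such a discrepancy is the MMD with a Gaussian kernel, which is characteristic and therefore separates distinct distributions. A minimal sketch of the standard V-statistic estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

def mmd2(X, Y):
    """Biased (V-statistic) estimate of MMD^2 between the samples X and Y,
    using a Gaussian kernel on R."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

# Same distribution: MMD^2 estimate should be near zero
same = mmd2(rng.standard_normal(1000), rng.standard_normal(1000))

# Shifted distribution: MMD^2 estimate should be clearly positive
diff = mmd2(rng.standard_normal(1000), 2.0 + rng.standard_normal(1000))
```

Separation (property (i) above) is exactly the statement that the population MMD is zero only when the two measures coincide, which holds for the Gaussian kernel but can fail for non-characteristic choices.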
arXiv Detail & Related papers (2022-09-26T16:41:16Z)
- A Stochastic Newton Algorithm for Distributed Convex Optimization [62.20732134991661]
We analyze a Newton algorithm for homogeneous distributed convex optimization, where each machine can calculate gradients of the same population objective.
We show that our method can reduce the number and frequency of required communication rounds compared to existing methods without hurting performance.
arXiv Detail & Related papers (2021-10-07T17:51:10Z)
- A Note on Optimizing Distributions using Kernel Mean Embeddings [94.96262888797257]
Kernel mean embeddings represent probability measures as infinite-dimensional mean elements of a reproducing kernel Hilbert space.
We show that when the kernel is characteristic, distributions with a kernel sum-of-squares density are dense.
We provide algorithms to optimize such distributions in the finite-sample setting.
arXiv Detail & Related papers (2021-06-18T08:33:45Z)
- Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
- Towards a Unified Quadrature Framework for Large-Scale Kernel Machines [35.32894170512829]
We develop a quadrature framework for large-scale kernel machines via a numerical integration representation.
We leverage fully symmetric interpolatory rules to efficiently compute quadrature nodes and associated weights for kernel approximation.
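As a simplified stand-in for the fully symmetric interpolatory rules used in the paper, the same idea, replacing Monte Carlo random Fourier features with deterministic quadrature nodes, can be illustrated with one-dimensional Gauss-Hermite quadrature:

```python
import numpy as np

# The 1-D Gaussian kernel k(x, y) = exp(-(x-y)^2 / 2) is the Fourier
# transform of the spectral measure N(0, 1):
#   k(x, y) = E_{w ~ N(0,1)}[cos(w (x - y))].
# Approximate that expectation with a deterministic quadrature rule.
Q = 20
gh_nodes, gh_weights = np.polynomial.hermite.hermgauss(Q)
freqs = np.sqrt(2.0) * gh_nodes        # rescale to integrate against N(0, 1)
probs = gh_weights / np.sqrt(np.pi)    # quadrature weights, summing to 1

def features(x):
    """Deterministic feature map with phi(x) . phi(y) ≈ k(x, y)."""
    proj = np.outer(np.atleast_1d(x), freqs)
    return np.hstack([np.sqrt(probs) * np.cos(proj),
                      np.sqrt(probs) * np.sin(proj)])

X = np.linspace(-2.0, 2.0, 50)
K_exact = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2)
Phi = features(X)
err = np.abs(K_exact - Phi @ Phi.T).max()
```

Because the quadrature is exact for high-degree polynomials, a handful of deterministic nodes can match the accuracy of many more random Fourier features on a bounded domain.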
arXiv Detail & Related papers (2020-11-03T12:48:07Z)
- Fast Learning in Reproducing Kernel Krein Spaces via Signed Measures [31.986482149142503]
We cast this question in a distributional view by introducing the signed measure.
A series of non-PD kernels can be associated with the linear combination of specific finite Borel measures.
Moreover, this solution is computationally implementable in practice, enabling non-PD kernels to scale to large-sample settings.
arXiv Detail & Related papers (2020-05-30T12:10:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.