Kernel Mean Estimation by Marginalized Corrupted Distributions
- URL: http://arxiv.org/abs/2107.04855v1
- Date: Sat, 10 Jul 2021 15:11:28 GMT
- Title: Kernel Mean Estimation by Marginalized Corrupted Distributions
- Authors: Xiaobo Xia, Shuo Shan, Mingming Gong, Nannan Wang, Fei Gao, Haikun
Wei, Tongliang Liu
- Abstract summary: Estimating the kernel mean in a reproducing kernel Hilbert space is a critical component in many kernel learning algorithms.
We present a new kernel mean estimator, called the marginalized kernel mean estimator, which estimates kernel mean under the corrupted distribution.
- Score: 96.9272743070371
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Estimating the kernel mean in a reproducing kernel Hilbert space is a
critical component in many kernel learning algorithms. Given a finite sample,
the standard estimate of the target kernel mean is the empirical average.
Previous works have shown that better estimators can be constructed by
shrinkage methods. In this work, we propose to corrupt data examples with noise
from known distributions and present a new kernel mean estimator, called the
marginalized kernel mean estimator, which estimates kernel mean under the
corrupted distribution. Theoretically, we show that the marginalized kernel
mean estimator introduces implicit regularization in kernel mean estimation.
Empirically, we show on a variety of datasets that the marginalized kernel mean
estimator obtains much lower estimation error than the existing estimators.
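The idea in the abstract can be illustrated with a small sketch (not the authors' code): the standard estimator averages the kernel features k(x_i, .) over the sample, while a marginalized estimator averages k(x_i + eps, .) over noise eps drawn from a known corruption distribution. Here the marginalization is approximated by Monte Carlo with Gaussian noise and an RBF kernel; the function names, bandwidth, noise scale, and sample sizes are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    # RBF kernel k(x, y) = exp(-gamma * ||x - y||^2), evaluated pairwise.
    d = x[:, None, :] - y[None, :, :]
    return np.exp(-gamma * np.sum(d * d, axis=-1))

def empirical_kernel_mean(sample, grid, gamma=1.0):
    # Standard estimator: the empirical average of k(x_i, .) over the sample,
    # evaluated at the points in `grid`.
    return rbf(sample, grid, gamma).mean(axis=0)

def marginalized_kernel_mean(sample, grid, noise_std=0.3, n_copies=50,
                             gamma=1.0, rng=None):
    # Marginalized estimator (Monte Carlo version): average k(x_i + eps, .)
    # over draws of Gaussian noise eps, approximating the kernel mean under
    # the corrupted distribution.
    rng = np.random.default_rng(0) if rng is None else rng
    total = np.zeros(len(grid))
    for _ in range(n_copies):
        noisy = sample + rng.normal(0.0, noise_std, size=sample.shape)
        total += rbf(noisy, grid, gamma).mean(axis=0)
    return total / n_copies

rng = np.random.default_rng(1)
sample = rng.normal(0.0, 1.0, size=(30, 1))      # small observed sample
grid = np.linspace(-3.0, 3.0, 7).reshape(-1, 1)  # evaluation points
# Proxy for the true kernel mean, estimated from a much larger sample.
reference = empirical_kernel_mean(rng.normal(0.0, 1.0, size=(100000, 1)), grid)

err_emp = np.abs(empirical_kernel_mean(sample, grid) - reference).mean()
err_marg = np.abs(marginalized_kernel_mean(sample, grid) - reference).mean()
print(err_emp, err_marg)
```

Averaging over noise smooths the feature map, which is the implicit-regularization effect the abstract refers to; whether it lowers the error for a given dataset depends on the noise scale, which would need to be tuned in practice.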
Related papers
- Learning to Embed Distributions via Maximum Kernel Entropy [0.0]
Empirical data can often be considered as samples from a set of probability distributions.
Kernel methods have emerged as a natural approach for learning to classify these distributions.
We propose a novel objective for the unsupervised learning of a data-dependent distribution kernel.
arXiv Detail & Related papers (2024-08-01T13:34:19Z)
- Optimal Kernel for Kernel-Based Modal Statistical Methods [8.571896191090744]
Kernel-based modal statistical methods include mode estimation, regression, and clustering.
We study the effect of kernel function selection on the estimation accuracy of these methods.
arXiv Detail & Related papers (2023-04-20T01:47:35Z)
- Meta-Learning Hypothesis Spaces for Sequential Decision-making [79.73213540203389]
We propose to meta-learn a kernel from offline data (Meta-KeL)
Under mild conditions, we guarantee that our estimated RKHS yields valid confidence sets.
We also empirically evaluate the effectiveness of our approach on a Bayesian optimization task.
arXiv Detail & Related papers (2022-02-01T17:46:51Z)
- Optimal policy evaluation using kernel-based temporal difference methods [78.83926562536791]
We use reproducing kernel Hilbert spaces to estimate the value function of an infinite-horizon discounted Markov reward process.
We derive a non-asymptotic upper bound on the error with explicit dependence on the eigenvalues of the associated kernel operator.
We prove minimax lower bounds over sub-classes of MRPs.
arXiv Detail & Related papers (2021-09-24T14:48:20Z)
- A Note on Optimizing Distributions using Kernel Mean Embeddings [94.96262888797257]
Kernel mean embeddings represent probability measures by their infinite-dimensional mean embeddings in a reproducing kernel Hilbert space.
We show that when the kernel is characteristic, distributions with a kernel sum-of-squares density are dense.
We provide algorithms to optimize such distributions in the finite-sample setting.
arXiv Detail & Related papers (2021-06-18T08:33:45Z)
- Towards Unbiased Random Features with Lower Variance For Stationary Indefinite Kernels [26.57122949130266]
Our algorithm achieves lower variance and approximation error compared with the existing kernel approximation methods.
With better approximation to the originally selected kernels, improved classification accuracy and regression ability are obtained.
arXiv Detail & Related papers (2021-04-13T13:56:50Z)
- How rotational invariance of common kernels prevents generalization in high dimensions [8.508198765617196]
Kernel ridge regression is well-known to achieve minimax optimal rates in low-dimensional settings.
Recent work establishes consistency for kernel regression under certain assumptions on the ground truth function and the distribution of the input data.
arXiv Detail & Related papers (2021-04-09T08:27:37Z)
- Flow-based Kernel Prior with Application to Blind Super-Resolution [143.21527713002354]
Kernel estimation is generally one of the key problems for blind image super-resolution (SR).
This paper proposes a normalizing flow-based kernel prior (FKP) for kernel modeling.
Experiments on synthetic and real-world images demonstrate that the proposed FKP can significantly improve the kernel estimation accuracy.
arXiv Detail & Related papers (2021-03-29T22:37:06Z)
- Isolation Distributional Kernel: A New Tool for Point & Group Anomaly Detection [76.1522587605852]
Isolation Distributional Kernel (IDK) is a new way to measure the similarity between two distributions.
We demonstrate IDK's efficacy and efficiency as a new tool for kernel based anomaly detection for both point and group anomalies.
arXiv Detail & Related papers (2020-09-24T12:25:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.