Geometric Learning with Positively Decomposable Kernels
- URL: http://arxiv.org/abs/2310.13821v1
- Date: Fri, 20 Oct 2023 21:18:04 GMT
- Title: Geometric Learning with Positively Decomposable Kernels
- Authors: Nathael Da Costa, Cyrus Mostajeran, Juan-Pablo Ortega, Salem Said
- Abstract summary: We propose the use of reproducing kernel Krein space (RKKS) based methods, which require only kernels that admit a positive decomposition.
We show that one does not need to access this decomposition in order to learn in RKKS.
- Score: 7.155139483398897
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Kernel methods are powerful tools in machine learning. Classical kernel
methods are based on positive-definite kernels, which map data spaces into
reproducing kernel Hilbert spaces (RKHS). For non-Euclidean data spaces,
positive-definite kernels are difficult to come by. In this case, we propose
the use of reproducing kernel Krein space (RKKS) based methods, which require
only kernels that admit a positive decomposition. We show that one does not
need to access this decomposition in order to learn in RKKS. We then
investigate the conditions under which a kernel is positively decomposable. We
show that invariant kernels admit a positive decomposition on homogeneous
spaces under tractable regularity assumptions. This makes them much easier to
construct than positive-definite kernels, providing a route for learning with
kernels for non-Euclidean data. By the same token, this provides theoretical
foundations for RKKS-based methods in general.
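As a minimal illustration of the claim that one can learn in an RKKS without accessing the positive decomposition, the following sketch (the kernel choice and data are hypothetical, not from the paper) runs regularized kernel regression on the circle with an invariant kernel written directly as a function of geodesic distance. The representer linear system is solved as-is, whether or not the kernel matrix is positive definite.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy non-Euclidean data: points on the unit circle S^1.
theta = rng.uniform(0.0, 2.0 * np.pi, size=40)
X = np.column_stack([np.cos(theta), np.sin(theta)])
y = np.sin(3.0 * theta)  # a smooth target function on the circle

# An invariant kernel defined directly through geodesic distance; such
# kernels are easy to write down but need not be positive definite on
# the manifold -- only positively decomposable.
D = np.arccos(np.clip(X @ X.T, -1.0, 1.0))  # great-circle distances
K = np.exp(-D)

# Regularized least squares with representer coefficients alpha:
# (K + lam * I) alpha = y.  No decomposition of K into positive parts
# is ever computed.
lam = 1e-2
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
y_hat = K @ alpha
```

The point of the sketch is only that the computation is identical to the positive-definite case; the paper's contribution is the guarantee that this is justified for positively decomposable kernels.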
Related papers
- Learning "best" kernels from data in Gaussian process regression. With application to aerodynamics [0.4588028371034406]
We introduce algorithms to select/design kernels in Gaussian process regression/kriging surrogate modeling techniques.
A first class of algorithms is kernel flows, which were introduced in the context of classification in machine learning.
A second class of algorithms, called spectral kernel ridge regression, aims at selecting a "best" kernel such that the norm of the function to be approximated is minimal.
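A toy version of the minimal-norm selection idea can be sketched as follows (illustrative only; the paper's spectral kernel ridge regression is more involved than this bandwidth search): candidate Gaussian bandwidths are scored by the RKHS norm of the fitted regressor, and the smallest-norm kernel is kept.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(60, 1))
y = np.sin(4.0 * X[:, 0]) + 0.05 * rng.standard_normal(60)

def gaussian_kernel(X, s):
    # Gram matrix of the Gaussian kernel with bandwidth s.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * s ** 2))

lam = 1e-3
best = None
for s in [0.05, 0.1, 0.2, 0.5, 1.0]:
    K = gaussian_kernel(X, s)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    rkhs_norm2 = float(alpha @ K @ alpha)  # ||f||_H^2 of the fitted function
    if best is None or rkhs_norm2 < best[0]:
        best = (rkhs_norm2, s)  # keep the "best" kernel by minimal norm
```

The selected bandwidth `best[1]` is the one whose RKHS makes the regressor cheapest to represent, which is the intuition behind the norm-minimizing criterion described above.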
arXiv Detail & Related papers (2022-06-03T07:50:54Z)
- Meta-Learning Hypothesis Spaces for Sequential Decision-making [79.73213540203389]
We propose to meta-learn a kernel from offline data (Meta-KeL).
Under mild conditions, we guarantee that our estimated RKHS yields valid confidence sets.
We also empirically evaluate the effectiveness of our approach on a Bayesian optimization task.
arXiv Detail & Related papers (2022-02-01T17:46:51Z)
- Neural Fields as Learnable Kernels for 3D Reconstruction [101.54431372685018]
We present a novel method for reconstructing implicit 3D shapes based on a learned kernel ridge regression.
Our technique achieves state-of-the-art results when reconstructing 3D objects and large scenes from sparse oriented points.
arXiv Detail & Related papers (2021-11-26T18:59:04Z)
- Kernel Mean Estimation by Marginalized Corrupted Distributions [96.9272743070371]
Estimating the kernel mean in a reproducing kernel Hilbert space is a critical component of many kernel learning algorithms.
We present a new kernel mean estimator, the marginalized kernel mean estimator, which estimates the kernel mean under a corrupted distribution.
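For context, the standard empirical kernel mean and a noise-marginalized variant can be sketched as below. The corruption model and averaging scheme here are illustrative constructions, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 2))  # samples from the data distribution

def k(A, B, s=1.0):
    # Gaussian kernel Gram block between two sample sets.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * s ** 2))

# Standard empirical kernel mean, evaluated at a few query points Z.
Z = rng.standard_normal((5, 2))
mu_hat = k(X, Z).mean(axis=0)

# An illustrative marginalized-corruption variant: average the empirical
# embedding over several noisy copies of the sample.
noise = 0.3 * rng.standard_normal((10, *X.shape))
Xc = X[None] + noise
mu_corr = np.stack([k(Xi, Z).mean(axis=0) for Xi in Xc]).mean(axis=0)
```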
arXiv Detail & Related papers (2021-07-10T15:11:28Z)
- Reproducing Kernel Hilbert Space, Mercer's Theorem, Eigenfunctions, Nyström Method, and Use of Kernels in Machine Learning: Tutorial and Survey [5.967999555890417]
We start with reviewing the history of kernels in functional analysis and machine learning.
We introduce types of use of kernels in machine learning including kernel methods, kernel learning by semi-definite programming, Hilbert-Schmidt independence criterion, maximum mean discrepancy, kernel mean embedding, and kernel dimensionality reduction.
This paper can be useful for various fields of science including machine learning, dimensionality reduction, functional analysis in mathematics, and mathematical physics in quantum mechanics.
arXiv Detail & Related papers (2021-06-15T21:29:12Z)
- Flow-based Kernel Prior with Application to Blind Super-Resolution [143.21527713002354]
Kernel estimation is generally one of the key problems for blind image super-resolution (SR).
This paper proposes a normalizing flow-based kernel prior (FKP) for kernel modeling.
Experiments on synthetic and real-world images demonstrate that the proposed FKP can significantly improve the kernel estimation accuracy.
arXiv Detail & Related papers (2021-03-29T22:37:06Z)
- Domain Adaptive Learning Based on Sample-Dependent and Learnable Kernels [2.1485350418225244]
This paper proposes a Domain Adaptive Learning method based on Sample-Dependent and Learnable Kernels (SDLK-DAL).
The first contribution of our work is to propose a sample-dependent and learnable Positive Definite Quadratic Kernel function (PDQK) framework.
We conduct a series of experiments in which the RKHS determined by PDQK replaces the one used in several state-of-the-art DAL algorithms, and our approach achieves better performance.
arXiv Detail & Related papers (2021-02-18T13:55:06Z)
- Isolation Distributional Kernel: A New Tool for Point & Group Anomaly Detection [76.1522587605852]
Isolation Distributional Kernel (IDK) is a new way to measure the similarity between two distributions.
We demonstrate IDK's efficacy and efficiency as a new tool for kernel based anomaly detection for both point and group anomalies.
arXiv Detail & Related papers (2020-09-24T12:25:43Z)
- Strong Uniform Consistency with Rates for Kernel Density Estimators with General Kernels on Manifolds [11.927892660941643]
We show how to handle kernel density estimation with intricate kernels not designed by the user.
The isotropic kernels considered in this paper are different from the kernels in the Vapnik-Chervonenkis class that are frequently considered in the statistics community.
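The flavor of manifold kernel density estimation can be sketched on the simplest manifold, the circle, where the kernel is a function of geodesic (angular) distance rather than Euclidean distance. The kernel and normalization below are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
theta = rng.vonmises(0.0, 4.0, size=300)  # angles concentrated near 0

def kde_circle(query, samples, h=0.2):
    # KDE on S^1: the bandwidth-h kernel is applied to the geodesic
    # (angular) distance, wrapped to [-pi, pi].
    d = np.abs((query[:, None] - samples[None, :] + np.pi)
               % (2.0 * np.pi) - np.pi)
    w = np.exp(-((d / h) ** 2))
    # Normalization is approximate and for illustration only.
    return w.mean(axis=1) / (h * np.sqrt(np.pi))

grid = np.linspace(-np.pi, np.pi, 9)
dens = kde_circle(grid, theta)  # density estimate on a coarse grid
```

The estimate peaks near angle 0, where the von Mises samples concentrate, and is near zero at the antipode.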
arXiv Detail & Related papers (2020-07-13T14:36:06Z)
- Learning Deep Kernels for Non-Parametric Two-Sample Tests [50.92621794426821]
We propose a class of kernel-based two-sample tests, which aim to determine whether two sets of samples are drawn from the same distribution.
Our tests are constructed from kernels parameterized by deep neural nets, trained to maximize test power.
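The test statistic underlying this line of work is the maximum mean discrepancy (MMD). A minimal sketch with a fixed Gaussian kernel is below; in the deep-kernel setting described above, the inputs would first pass through a neural feature map trained to maximize test power.

```python
import numpy as np

def mmd2_unbiased(X, Y, s=1.0):
    # Unbiased estimate of squared MMD under a Gaussian kernel with
    # bandwidth s (Gretton et al.'s U-statistic form).
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * s ** 2))
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    return ((Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
            + (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
            - 2.0 * Kxy.mean())

rng = np.random.default_rng(4)
P = rng.standard_normal((200, 2))
Q_same = rng.standard_normal((200, 2))       # same distribution as P
Q_shift = rng.standard_normal((200, 2)) + 1.0  # mean-shifted distribution
```

Samples from the same distribution give an MMD estimate near zero, while the shifted distribution gives a clearly larger value, which is what a two-sample test thresholds against a permutation null.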
arXiv Detail & Related papers (2020-02-21T03:54:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.