Spectral Truncation Kernels: Noncommutativity in $C^*$-algebraic Kernel Machines
- URL: http://arxiv.org/abs/2405.17823v4
- Date: Mon, 10 Mar 2025 09:51:10 GMT
- Title: Spectral Truncation Kernels: Noncommutativity in $C^*$-algebraic Kernel Machines
- Authors: Yuka Hashimoto, Ayoub Hafid, Masahiro Ikeda, Hachem Kadri
- Abstract summary: We propose a new class of positive definite kernels based on spectral truncation. We show that the proposed kernels fill the gap between existing separable and commutative kernels. The flexibility of the proposed class of kernels allows us to go beyond previous separable and commutative kernels.
- Score: 12.11705128358537
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: $C^*$-algebra-valued kernels could pave the way for the next generation of kernel machines. To further our fundamental understanding of learning with $C^*$-algebraic kernels, we propose a new class of positive definite kernels based on spectral truncation. We focus on kernels whose inputs and outputs are vectors or functions and generalize typical kernels by introducing noncommutativity of the products appearing in the kernels. The noncommutativity induces interactions along the data function domain. We show that the proposed kernels fill the gap between existing separable and commutative kernels. We also propose a deep learning perspective to obtain a more flexible framework. The flexibility of the proposed class of kernels allows us to go beyond previous separable and commutative kernels, addressing two of the foremost issues regarding learning in vector-valued RKHSs, namely the choice of the kernel and the computational cost.
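As a concrete illustration of the spectral truncation idea, here is a minimal sketch, assuming inputs are functions sampled on a uniform grid over $[0, 1)$: each function is truncated to its first $n$ Fourier modes and represented by the $n \times n$ Toeplitz matrix of its Fourier coefficients, so products of the truncated representations no longer commute. The kernel form below (a matrix power of such a product) and all names and parameters (`toeplitz_truncation`, `spectral_truncation_kernel`, `n`, `q`) are illustrative assumptions, not the paper's exact positive definite construction.

```python
# Minimal sketch (not the paper's exact construction): represent each input
# function by the n x n Toeplitz matrix of its Fourier coefficients
# (a spectral truncation to the first n modes). Products of these matrices
# do not commute, which is the noncommutativity the abstract refers to.
import numpy as np

def toeplitz_truncation(f_samples, n):
    """n x n Toeplitz matrix [c_{j-k}] built from the DFT coefficients of f."""
    m = len(f_samples)
    c = np.fft.fft(f_samples) / m      # discrete Fourier coefficients c_0, ..., c_{m-1}
    j, k = np.indices((n, n))
    return c[(j - k) % m]              # c_{j-k}; negative indices wrap to negative frequencies

def spectral_truncation_kernel(x, y, n=8, q=2):
    """Illustrative operator-valued kernel (T_n(x) T_n(y))^q; the order of x, y matters."""
    Tx, Ty = toeplitz_truncation(x, n), toeplitz_truncation(y, n)
    return np.linalg.matrix_power(Tx @ Ty, q)

# Usage: two functions sampled on a uniform grid over [0, 1).
t = np.linspace(0.0, 1.0, 128, endpoint=False)
x = np.cos(2 * np.pi * t)
y = np.sin(4 * np.pi * t) + 0.5
Kxy = spectral_truncation_kernel(x, y)
Kyx = spectral_truncation_kernel(y, x)
print(np.linalg.norm(Kxy - Kyx))       # nonzero: the truncated products do not commute
```

The nonzero difference between `Kxy` and `Kyx` is the point of the sketch: swapping the arguments changes the matrix-valued output, unlike a commutative scalar kernel.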
Related papers
- Geometric Learning with Positively Decomposable Kernels [6.5497574505866885]
We propose the use of reproducing kernel Krein space (RKKS) based methods, which require only kernels that admit a positive decomposition.
We show that one does not need to access this decomposition in order to learn in RKKS.
arXiv Detail & Related papers (2023-10-20T21:18:04Z) - Kernel Subspace and Feature Extraction [7.424262881242935]
We study kernel methods in machine learning from the perspective of feature subspace.
We construct a kernel from Hirschfeld-Gebelein-Rényi maximal correlation functions, coined the maximal correlation kernel, and demonstrate its information-theoretic optimality.
arXiv Detail & Related papers (2023-01-04T02:46:11Z) - On Kernel Regression with Data-Dependent Kernels [0.0]
We consider kernel regression in which the kernel may be updated after seeing the training data.
Connections to the view of deep neural networks as data-dependent kernel learners are discussed.
arXiv Detail & Related papers (2022-09-04T20:46:01Z) - Neural Networks can Learn Representations with Gradient Descent [68.95262816363288]
In specific regimes, neural networks trained by gradient descent behave like kernel methods.
In practice, it is known that neural networks strongly outperform their associated kernels.
arXiv Detail & Related papers (2022-06-30T09:24:02Z) - Neural Networks as Kernel Learners: The Silent Alignment Effect [86.44610122423994]
Neural networks in the lazy training regime converge to kernel machines.
We show that this can indeed happen due to a phenomenon we term silent alignment.
We also demonstrate that non-whitened data can weaken the silent alignment effect.
arXiv Detail & Related papers (2021-10-29T18:22:46Z) - Understanding of Kernels in CNN Models by Suppressing Irrelevant Visual
Features in Images [55.60727570036073]
The lack of precise interpretation of kernels in convolutional neural networks (CNNs) is one of the main obstacles to the wide application of deep learning models in real scenarios.
A simple yet effective optimization method is proposed to interpret the activation of any kernel of interest in CNN models.
arXiv Detail & Related papers (2021-08-25T05:48:44Z) - Fast Sketching of Polynomial Kernels of Polynomial Degree [61.83993156683605]
The polynomial kernel is especially important, as other kernels can often be approximated by polynomial kernels via a Taylor series expansion (see the worked expansion after this list).
Recent techniques in oblivious sketching reduce the dependence of the running time on the degree $q$ of the kernel.
We give a new sketch which greatly improves upon this running time, by removing the dependence on $q$ in the leading order term.
arXiv Detail & Related papers (2021-08-21T02:14:55Z) - Taming Nonconvexity in Kernel Feature Selection---Favorable Properties
of the Laplace Kernel [77.73399781313893]
One challenge is formulating the objective function for kernel-based feature selection.
The gradient-based algorithms available for nonconvex optimization are only able to guarantee convergence to local minima.
arXiv Detail & Related papers (2021-06-17T11:05:48Z) - Reproducing Kernel Hilbert Space, Mercer's Theorem, Eigenfunctions,
Nyström Method, and Use of Kernels in Machine Learning: Tutorial and Survey [5.967999555890417]
We start with reviewing the history of kernels in functional analysis and machine learning.
We cover the main uses of kernels in machine learning, including kernel methods, kernel learning by semi-definite programming, the Hilbert-Schmidt independence criterion, maximum mean discrepancy, kernel mean embedding, and kernel dimensionality reduction.
This paper can be useful for various fields of science including machine learning, dimensionality reduction, functional analysis in mathematics, and mathematical physics in quantum mechanics.
arXiv Detail & Related papers (2021-06-15T21:29:12Z) - Kernel Identification Through Transformers [54.3795894579111]
Kernel selection plays a central role in determining the performance of Gaussian Process (GP) models.
This work addresses the challenge of constructing custom kernel functions for high-dimensional GP regression models.
We introduce a novel approach named KITT: Kernel Identification Through Transformers.
arXiv Detail & Related papers (2021-06-15T14:32:38Z) - Covariant quantum kernels for data with group structure [1.51714450051254]
We introduce a class of quantum kernels that can be used for data with a group structure.
We apply this method to a learning problem on a coset-space that embodies the structure of many essential learning problems on groups.
arXiv Detail & Related papers (2021-05-07T17:38:58Z) - Entangled Kernels -- Beyond Separability [10.381276986079865]
We consider the problem of operator-valued kernel learning and investigate the possibility of going beyond the well-known separable kernels.
We propose a new view on operator-valued kernels and define a general family of kernels that encompasses previously known operator-valued kernels.
Within this framework, we introduce another novel class of operator-valued kernels called entangled kernels that are not separable.
arXiv Detail & Related papers (2021-01-14T09:18:02Z) - Isolation Distributional Kernel: A New Tool for Point & Group Anomaly
Detection [76.1522587605852]
Isolation Distributional Kernel (IDK) is a new way to measure the similarity between two distributions.
We demonstrate IDK's efficacy and efficiency as a new tool for kernel based anomaly detection for both point and group anomalies.
arXiv Detail & Related papers (2020-09-24T12:25:43Z)
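As a side note on the Taylor series remark in the polynomial kernel sketching entry above, the following standard expansion (not quoted from that paper) shows how the Gaussian kernel reduces to a weighted sum of polynomial kernels once the exponential series is truncated at degree $q$:

```latex
% Standard expansion: the Gaussian kernel factors into data-dependent scalings
% times an exponential of an inner product; truncating the series at degree q
% leaves a weighted sum of polynomial kernels of degree at most q.
\[
  \exp\!\left(-\frac{\|x-y\|^2}{2\sigma^2}\right)
  = e^{-\|x\|^2/2\sigma^2}\, e^{-\|y\|^2/2\sigma^2}
    \sum_{k=0}^{\infty} \frac{\langle x, y\rangle^{k}}{\sigma^{2k}\, k!}
  \;\approx\;
  e^{-\|x\|^2/2\sigma^2}\, e^{-\|y\|^2/2\sigma^2}
  \sum_{k=0}^{q} \frac{\langle x, y\rangle^{k}}{\sigma^{2k}\, k!}.
\]
```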