A new trigonometric kernel function for SVM
- URL: http://arxiv.org/abs/2210.08585v1
- Date: Sun, 16 Oct 2022 17:10:52 GMT
- Title: A new trigonometric kernel function for SVM
- Authors: Sajad Fathi Hafshejani, Zahra Moberfard
- Abstract summary: We introduce a new trigonometric kernel function containing one parameter for machine learning algorithms.
We also conduct an empirical evaluation on the kernel-SVM and kernel-SVR methods and demonstrate its strong performance.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, several machine learning algorithms have been proposed.
Among them, kernel approaches are considered a powerful tool for
classification. Using an appropriate kernel function can significantly improve
the accuracy of the classification. The main goal of this paper is to introduce
a new trigonometric kernel function containing one parameter for machine
learning algorithms. Using simple mathematical tools, several useful properties
of the proposed kernel function are presented. We also conduct an empirical
evaluation on the kernel-SVM and kernel-SVR methods and demonstrate its strong
performance compared to other kernel functions.
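The abstract does not reproduce the kernel's closed form, so the following is only a minimal sketch of how a one-parameter trigonometric kernel can be plugged into scikit-learn's kernel-SVM as a Gram-matrix callable. The product-of-cosines form, the parameter `a`, and the toy data are stand-ins, not the kernel proposed in the paper.

```python
import numpy as np
from sklearn.svm import SVC

def trig_kernel(X, Y, a=1.0):
    """Stand-in one-parameter trigonometric kernel.

    k(x, y) = prod_i cos(a * (x_i - y_i)) is positive semidefinite:
    each 1-D factor is a stationary kernel by Bochner's theorem, and
    products of kernels are kernels. The paper's actual kernel may
    differ; `a` only mirrors its single tunable parameter.
    """
    diff = X[:, None, :] - Y[None, :, :]      # (n, m, d) pairwise differences
    return np.prod(np.cos(a * diff), axis=2)  # (n, m) Gram matrix

# scikit-learn accepts any callable returning the Gram matrix
clf = SVC(kernel=lambda X, Y: trig_kernel(X, Y, a=0.5))

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)      # simple nonlinear labels
clf.fit(X, y)
print(clf.score(X, y))
```

The same callable works unchanged with `sklearn.svm.SVR` for the kernel-SVR experiments the abstract mentions.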
Related papers
- On the Approximation of Kernel functions [0.0]
The paper addresses approximations of the kernel itself.
For the Hilbert Gauss kernel on the unit cube, the paper establishes an upper bound on the associated eigenfunctions.
This bound supports low-rank approximation methods such as the Nyström method.
arXiv Detail & Related papers (2024-03-11T13:50:07Z)
- Snacks: a fast large-scale kernel SVM solver [0.8602553195689513]
Snacks is a new large-scale solver for Kernel Support Vector Machines.
Snacks relies on a Nyström approximation of the kernel matrix and an accelerated variant of the subgradient method.
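As background for how such a solver sidesteps the full kernel matrix, here is a minimal sketch of the standard Nyström construction; the landmark count `m`, the RBF test kernel, and all names are illustrative, not the Snacks implementation.

```python
import numpy as np

def nystrom_features(X, kernel, m=50, seed=0):
    """Generic Nystrom sketch: a rank-m feature map Phi with
    Phi @ Phi.T ~= K, the full n x n kernel matrix."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)  # landmark points
    K_nm = kernel(X, X[idx])                         # (n, m) cross-kernel
    K_mm = kernel(X[idx], X[idx])                    # (m, m) landmark kernel
    # inverse square root of K_mm via eigendecomposition
    w, V = np.linalg.eigh(K_mm)
    w = np.maximum(w, 1e-12)                         # guard tiny eigenvalues
    return K_nm @ (V @ np.diag(w ** -0.5) @ V.T)     # (n, m) features

rbf = lambda A, B: np.exp(-0.5 * np.sum((A[:, None] - B[None]) ** 2, axis=2))
X = np.random.default_rng(1).normal(size=(500, 3))
Phi = nystrom_features(X, rbf, m=50)
print(np.abs(Phi @ Phi.T - rbf(X, X)).max())         # approximation error
```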
arXiv Detail & Related papers (2023-04-17T04:19:20Z)
- Efficient Convex Algorithms for Universal Kernel Learning [46.573275307034336]
An ideal set of kernels should admit a linear parameterization (for tractability) and be dense in the set of all kernels (for accuracy).
Previous algorithms for optimization of kernels were limited to classification and relied on computationally complex Semidefinite Programming (SDP) algorithms.
We propose an SVD-QCQP algorithm which dramatically reduces the computational complexity compared with previous SDP-based approaches.
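To make the "linear parameterization" requirement concrete, here is a small sketch of a kernel family that is linear in its parameters; the base kernels and weights are arbitrary illustrations, and the paper's SVD-QCQP optimizer for choosing them is not shown.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel, laplacian_kernel

def combined_kernel(X, Y, theta):
    """Linearly parameterized kernel family: any nonnegative
    combination of fixed base kernels is again a kernel."""
    bases = (rbf_kernel(X, Y),
             polynomial_kernel(X, Y, degree=2),
             laplacian_kernel(X, Y))
    return sum(t * K for t, K in zip(theta, bases))

X = np.random.default_rng(0).normal(size=(10, 4))
K = combined_kernel(X, X, theta=[0.5, 0.2, 0.3])  # theta >= 0 keeps K PSD
print(K.shape)
```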
arXiv Detail & Related papers (2023-04-15T04:57:37Z)
- Kernel Subspace and Feature Extraction [7.424262881242935]
We study kernel methods in machine learning from the perspective of feature subspace.
We construct a kernel from Hirschfeld-Gebelein-Rényi maximal correlation functions, coined the maximal correlation kernel, and demonstrate its information-theoretic optimality.
arXiv Detail & Related papers (2023-01-04T02:46:11Z)
- Efficient Approximate Kernel Based Spike Sequence Classification [56.2938724367661]
Machine learning models, such as SVM, require a definition of distance/similarity between pairs of sequences.
Exact methods yield better classification performance, but they pose high computational costs.
We propose a series of ways to improve the performance of the approximate kernel in order to enhance its predictive performance.
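For context on what a sequence kernel looks like, here is a minimal sketch of the classic k-mer spectrum kernel, a standard exact baseline for sequence similarity; it is not the approximate kernel proposed in the paper, and the toy sequences are invented.

```python
import numpy as np
from collections import Counter

def spectrum_kernel(seqs, k=3):
    """Classic k-mer spectrum kernel: represent each sequence by
    its k-mer counts and take inner products of the count vectors."""
    counts = [Counter(s[i:i + k] for i in range(len(s) - k + 1)) for s in seqs]
    vocab = sorted(set().union(*counts))                     # all observed k-mers
    Phi = np.array([[c[w] for w in vocab] for c in counts], dtype=float)
    return Phi @ Phi.T                                       # Gram matrix over sequences

seqs = ["ACGTACGT", "ACGTTTGA", "TTTTACGA"]
print(spectrum_kernel(seqs, k=3))
```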
arXiv Detail & Related papers (2022-09-11T22:44:19Z)
- Learning "best" kernels from data in Gaussian process regression. With application to aerodynamics [0.4588028371034406]
We introduce algorithms to select/design kernels in Gaussian process regression/kriging surrogate modeling techniques.
A first class of algorithms is kernel flow, which was introduced in the context of classification in machine learning.
A second class of algorithms is called spectral kernel ridge regression, and aims at selecting a "best" kernel such that the norm of the function to be approximated is minimal.
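As a rough illustration of that norm-minimizing criterion, here is a sketch that scores candidate kernel hyperparameters by the RKHS norm of the kernel ridge interpolant; the RBF family, the grid of lengthscales, and the test function are assumptions, and the paper's spectral method differs in its details.

```python
import numpy as np

def rkhs_norm_of_interpolant(K, y, reg=1e-8):
    """For the kernel ridge interpolant f = K @ alpha with
    alpha = (K + reg*I)^{-1} y, the squared RKHS norm is
    ||f||^2 = alpha.T @ K @ alpha."""
    alpha = np.linalg.solve(K + reg * np.eye(len(y)), y)
    return alpha @ K @ alpha

rbf = lambda A, B, g: np.exp(-g * np.sum((A[:, None] - B[None]) ** 2, axis=2))
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 1))
y = np.sin(3 * X[:, 0])

# pick the bandwidth whose interpolant has the smallest RKHS norm
gammas = [0.1, 1.0, 10.0, 100.0]
norms = [rkhs_norm_of_interpolant(rbf(X, X, g), y) for g in gammas]
print(dict(zip(gammas, norms)))
```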
arXiv Detail & Related papers (2022-06-03T07:50:54Z)
- Understanding of Kernels in CNN Models by Suppressing Irrelevant Visual Features in Images [55.60727570036073]
The difficulty of precisely interpreting kernels in convolutional neural networks (CNNs) is one main obstacle to the wide application of deep learning models in real scenarios.
A simple yet effective optimization method is proposed to interpret the activation of any kernel of interest in CNN models.
arXiv Detail & Related papers (2021-08-25T05:48:44Z)
- Taming Nonconvexity in Kernel Feature Selection---Favorable Properties of the Laplace Kernel [77.73399781313893]
A challenge is to establish the objective function of kernel-based feature selection.
The gradient-based algorithms available for this nonconvex optimization problem can only guarantee convergence to local minima.
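For reference, the Laplace kernel in question is k(x, y) = exp(-gamma * ||x - y||_1), which scikit-learn exposes directly; this snippet only evaluates the kernel and does not reproduce the paper's feature-selection objective.

```python
import numpy as np
from sklearn.metrics.pairwise import laplacian_kernel

# Laplace (Laplacian) kernel: k(x, y) = exp(-gamma * ||x - y||_1)
X = np.random.default_rng(0).normal(size=(5, 3))
print(laplacian_kernel(X, X, gamma=0.5))
```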
arXiv Detail & Related papers (2021-06-17T11:05:48Z)
- Kernel Identification Through Transformers [54.3795894579111]
Kernel selection plays a central role in determining the performance of Gaussian Process (GP) models.
This work addresses the challenge of constructing custom kernel functions for high-dimensional GP regression models.
We introduce a novel approach named KITT: Kernel Identification Through Transformers.
arXiv Detail & Related papers (2021-06-15T14:32:38Z)
- Random Features for the Neural Tangent Kernel [57.132634274795066]
We propose an efficient feature map construction of the Neural Tangent Kernel (NTK) of a fully-connected ReLU network.
We show that the dimension of the resulting features is much smaller than that of other baseline feature map constructions achieving comparable error bounds, both in theory and in practice.
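As a reminder of the generic random-feature template such constructions build on, here is the classic random Fourier feature map for the RBF kernel (Rahimi and Recht); the NTK-specific feature map of the paper is a different construction, and the dimensions here are illustrative.

```python
import numpy as np

def random_fourier_features(X, D=2048, gamma=1.0, seed=0):
    """Random Fourier features for the RBF kernel:
    z(x).T @ z(y) ~= exp(-gamma * ||x - y||^2).
    Frequencies are drawn from the kernel's spectral
    density N(0, 2*gamma*I), per Bochner's theorem."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], D))
    b = rng.uniform(0, 2 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

X = np.random.default_rng(1).normal(size=(100, 5))
Z = random_fourier_features(X)
K_true = np.exp(-np.sum((X[:, None] - X[None]) ** 2, axis=2))
print(np.abs(Z @ Z.T - K_true).max())  # small approximation error
```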
arXiv Detail & Related papers (2021-04-03T09:08:12Z)
- Face Verification via learning the kernel matrix [9.414572104591027]
The kernel function is introduced to solve nonlinear pattern recognition problems.
A promising approach is to learn the kernel from data automatically.
In this paper, the nonlinear face verification via learning the kernel matrix is proposed.
arXiv Detail & Related papers (2020-01-21T03:39:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.