Low-dimensional Interpretable Kernels with Conic Discriminant Functions for Classification
- URL: http://arxiv.org/abs/2007.08986v1
- Date: Fri, 17 Jul 2020 13:58:54 GMT
- Title: Low-dimensional Interpretable Kernels with Conic Discriminant Functions for Classification
- Authors: Gurhan Ceylan and S. Ilker Birbil
- Abstract summary: Kernels are often developed as implicit mapping functions that show impressive predictive power due to their high-dimensional feature space representations.
In this study, we gradually construct a series of simple feature maps that lead to a collection of interpretable low-dimensional kernels.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Kernels are often developed and used as implicit mapping functions that show
impressive predictive power due to their high-dimensional feature space
representations. In this study, we gradually construct a series of simple
feature maps that lead to a collection of interpretable low-dimensional
kernels. At each step, we keep the original features and make sure that the
increase in the dimension of input data is extremely low, so that the resulting
discriminant functions remain interpretable and amenable to fast training.
Despite our emphasis on interpretability, we obtain highly accurate results
even without in-depth hyperparameter tuning. A comparison of our results
against several well-known kernels on benchmark datasets shows that the proposed kernels
are competitive in terms of prediction accuracy, while the training times are
significantly lower than those obtained with state-of-the-art kernel
implementations.
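To make the construction concrete, here is a minimal sketch in the spirit of the abstract: keep the original features and append a single extra coordinate, the Euclidean distance to a reference point, so that a linear classifier in the lifted space realizes a conic discriminant f(x) = w.x + s*||x - c|| + b in the original space. The map conic_lift and the zero center are illustrative assumptions, not necessarily the authors' exact feature maps.

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler
from sklearn.svm import LinearSVC

def conic_lift(X, center=None):
    """Keep the original features and add one column: ||x - c||.
    (Illustrative assumption, not necessarily the paper's exact map.)"""
    c = np.zeros(X.shape[1]) if center is None else center
    norms = np.linalg.norm(X - c, axis=1, keepdims=True)
    return np.hstack([X, norms])

# A dataset that is not linearly separable in the original space.
X, y = make_circles(n_samples=400, noise=0.1, factor=0.4, random_state=0)
clf = make_pipeline(
    FunctionTransformer(conic_lift),  # dimension grows only from d to d+1
    StandardScaler(),
    LinearSVC(C=1.0),
)
clf.fit(X, y)
print("train accuracy:", clf.score(X, y))
```

Because the lift adds only one dimension, training reduces to a linear SVM in d+1 dimensions, which is consistent with the abstract's claims of interpretability and fast training.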
Related papers
- A Unifying Perspective on Non-Stationary Kernels for Deeper Gaussian Processes [0.9558392439655016]
We show a variety of kernels in action using representative datasets, carefully study their properties, and compare their performances.
Based on our findings, we propose a new kernel that combines some of the identified advantages of existing kernels.
arXiv Detail & Related papers (2023-09-18T18:34:51Z)
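As background for readers new to non-stationary kernels, a classical example is the Gibbs kernel, whose lengthscale varies with the input; the sketch below is illustration only and is not the new kernel proposed in this paper.

```python
import numpy as np

def gibbs_kernel(x1, x2, lengthscale):
    """1-D Gibbs kernel with input-dependent lengthscale l(x)."""
    l1 = lengthscale(x1)[:, None]      # shape (n, 1)
    l2 = lengthscale(x2)[None, :]      # shape (1, m)
    sq = l1**2 + l2**2
    pre = np.sqrt(2.0 * l1 * l2 / sq)  # normalization keeps k PSD
    return pre * np.exp(-((x1[:, None] - x2[None, :]) ** 2) / sq)

# Example: short lengthscales near the origin, longer ones far away,
# so the process wiggles around 0 and is smooth in the tails.
l = lambda x: 0.1 + 0.5 * np.abs(x)
x = np.linspace(-3, 3, 50)
K = gibbs_kernel(x, x, l)
print(K.shape, np.allclose(K, K.T))    # (50, 50) True
```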
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- Joint Embedding Self-Supervised Learning in the Kernel Regime [21.80241600638596]
Self-supervised learning (SSL) produces useful representations of data without access to any labels for classifying the data.
We extend this framework to incorporate algorithms based on kernel methods where embeddings are constructed by linear maps acting on the feature space of a kernel.
We analyze our kernel model on small datasets to identify common features of self-supervised learning algorithms and gain theoretical insights into their performance on downstream tasks.
arXiv Detail & Related papers (2022-09-29T15:53:19Z)
- Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z)
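As background, a plain inducing-point approximation (subset of regressors) shows why learning m << n inducing locations is attractive: prediction needs only an m-by-m linear system instead of n-by-n. IGN goes further by learning the feature space jointly with the inducing points; that part is not sketched here, and the RBF kernel and fixed grid of inducing points below are illustrative choices.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-0.5 * d2 / ls**2)

def sor_predict(X, y, Z, Xs, noise=1e-2):
    """Predictive mean using m inducing points Z, not all n training points."""
    Kzz = rbf(Z, Z) + 1e-8 * np.eye(len(Z))   # jitter for stability
    Kzx, Kzs = rbf(Z, X), rbf(Z, Xs)
    A = noise * Kzz + Kzx @ Kzx.T             # m x m system
    w = np.linalg.solve(A, Kzx @ y)
    return Kzs.T @ w

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
Z = np.linspace(-3, 3, 10)[:, None]           # 10 inducing points
print(sor_predict(X, y, Z, np.array([[0.0]])))  # close to sin(0) = 0
```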
- On the Benefits of Large Learning Rates for Kernel Methods [110.03020563291788]
We show that the benefit of large learning rates can be precisely characterized in the context of kernel methods.
We consider the minimization of a quadratic objective in a separable Hilbert space, and show that with early stopping, the choice of learning rate influences the spectral decomposition of the obtained solution.
arXiv Detail & Related papers (2022-02-28T13:01:04Z)
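The spectral effect is easy to reproduce numerically. In the toy below (an illustration, not the paper's Hilbert-space analysis), gradient descent on a quadratic 0.5*w'Hw - b'w recovers the fraction 1 - (1 - lr*lam_i)^t of the eigenmode with eigenvalue lam_i after t steps, so under a fixed early-stopping budget a larger learning rate recovers more of the small-eigenvalue components.

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))  # random orthonormal basis
lam = np.array([10.0, 5.0, 2.0, 1.0, 0.5, 0.1])
H = Q @ np.diag(lam) @ Q.T                        # PSD Hessian
b = rng.standard_normal(6)
w_star = np.linalg.solve(H, b)                    # unregularized minimizer

def gd(lr, steps):
    w = np.zeros(6)
    for _ in range(steps):
        w -= lr * (H @ w - b)                     # gradient of 0.5 w'Hw - b'w
    return w

for lr in (0.01, 0.19):                           # both below 2/lam_max = 0.2
    w = gd(lr, steps=50)
    frac = (Q.T @ w) / (Q.T @ w_star)             # per-eigenmode recovery
    print(f"lr={lr}: {np.round(frac, 2)}")        # equals 1-(1-lr*lam)^50
```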
- Meta-Learning Hypothesis Spaces for Sequential Decision-making [79.73213540203389]
We propose to meta-learn a kernel from offline data (Meta-KeL).
Under mild conditions, we guarantee that our estimated RKHS yields valid confidence sets.
We also empirically evaluate the effectiveness of our approach on a Bayesian optimization task.
arXiv Detail & Related papers (2022-02-01T17:46:51Z)
- Random Features for the Neural Tangent Kernel [57.132634274795066]
We propose an efficient feature map construction for the Neural Tangent Kernel (NTK) of a fully-connected ReLU network.
We show that the dimension of the resulting features is much smaller than that of other baseline feature map constructions, while achieving comparable error bounds both in theory and in practice.
arXiv Detail & Related papers (2021-04-03T09:08:12Z)
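As a hedged illustration of the generic random-features idea (not the paper's refined construction), random ReLU features approximate the first-order arc-cosine kernel, a building block of the ReLU NTK; the feature dimension D below is chosen arbitrarily.

```python
import numpy as np

def arccos1(x, y):
    """Closed form of the order-1 arc-cosine kernel."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    t = np.arccos(np.clip(x @ y / (nx * ny), -1.0, 1.0))
    return nx * ny * (np.sin(t) + (np.pi - t) * np.cos(t)) / np.pi

def relu_features(X, W):
    """phi(x) = sqrt(2/D) ReLU(Wx), so E[phi(x).phi(y)] = arccos1(x, y)."""
    return np.sqrt(2.0 / W.shape[0]) * np.maximum(W @ X.T, 0.0).T

rng = np.random.default_rng(0)
x, y = rng.standard_normal(5), rng.standard_normal(5)
W = rng.standard_normal((20000, 5))   # D = 20000 Gaussian directions
phi = relu_features(np.stack([x, y]), W)
print("exact:", arccos1(x, y), "approx:", phi[0] @ phi[1])
```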
- Learning Compositional Sparse Gaussian Processes with a Shrinkage Prior [26.52863547394537]
We present a novel probabilistic algorithm that learns a kernel composition by handling the sparsity in the kernel selection with a horseshoe prior.
Our model captures characteristics of time series with significant reductions in computational time and achieves competitive regression performance on real-world data sets.
arXiv Detail & Related papers (2020-12-21T13:41:15Z)
- Sparse Spectrum Warped Input Measures for Nonstationary Kernel Learning [29.221457769884648]
We propose a general form of explicit, input-dependent, measure-valued warpings for learning nonstationary kernels.
The proposed learning algorithm warps inputs as conditional Gaussian measures that control the smoothness of a standard stationary kernel.
We demonstrate a remarkable efficiency in the number of parameters of the warping functions in learning problems with both small and large data regimes.
arXiv Detail & Related papers (2020-10-09T01:10:08Z)
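A deterministic special case helps fix ideas: composing a stationary kernel with a warping w(x) already yields a kernel that is non-stationary in the original input, and the paper's measure-valued warpings generalize this by making the warp a conditional Gaussian measure. The log-compressing warp below is an arbitrary illustrative choice.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def warped_kernel(x1, x2, warp, ls=1.0):
    """k(x, y) = k_RBF(w(x), w(y)): stationary in w-space, not in x-space."""
    return rbf(warp(x1), warp(x2), ls)

warp = lambda x: np.sign(x) * np.log1p(np.abs(x))  # compresses large |x|
x = np.linspace(-5, 5, 40)
K = warped_kernel(x, x, warp)
print(K.shape, np.allclose(K, K.T))                # (40, 40) True
```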
- Federated Doubly Stochastic Kernel Learning for Vertically Partitioned Data [93.76907759950608]
We propose FDSKL, a doubly stochastic kernel learning algorithm for vertically partitioned data.
We show that FDSKL is significantly faster than state-of-the-art federated learning methods when dealing with kernels.
arXiv Detail & Related papers (2020-08-14T05:46:56Z)
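For context, here is a centralized (non-federated) sketch of the underlying doubly stochastic kernel gradient idea for least squares: each step draws one random training point and one random Fourier feature of the RBF kernel. How FDSKL distributes these computations over vertically partitioned features is the paper's contribution and is not shown; the step-size schedule below is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (500, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1])

omegas, us, alphas = [], [], []            # growing expansion of f
lam = 1e-4                                 # regularization strength

def f(x):                                  # f(x) = sum_i alpha_i * phi_i(x)
    return sum(a * np.sqrt(2) * np.cos(w @ x + u)
               for w, u, a in zip(omegas, us, alphas))

for t in range(300):
    g = 0.5 / np.sqrt(t + 1.0)             # decaying step size
    i = rng.integers(len(X))               # stochastic source 1: data point
    w = rng.standard_normal(2)             # stochastic source 2: RFF frequency
    u = rng.uniform(0, 2 * np.pi)          # RFF phase for the RBF kernel
    resid = f(X[i]) - y[i]                 # squared-loss derivative
    alphas = [a * (1 - g * lam) for a in alphas]   # shrink old coefficients
    omegas.append(w)
    us.append(u)
    alphas.append(-g * resid * np.sqrt(2) * np.cos(w @ X[i] + u))

mse = np.mean([(f(xi) - yi) ** 2 for xi, yi in zip(X[:50], y[:50])])
print("mean squared error on 50 training points:", mse)
```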