A User's Guide to $\texttt{KSig}$: GPU-Accelerated Computation of the Signature Kernel
- URL: http://arxiv.org/abs/2501.07145v2
- Date: Tue, 14 Jan 2025 06:38:10 GMT
- Title: A User's Guide to $\texttt{KSig}$: GPU-Accelerated Computation of the Signature Kernel
- Authors: Csaba Tóth, Danilo Jr Dela Cruz, Harald Oberhauser
- Abstract summary: The signature kernel is a positive definite kernel for sequential and temporal data. In this chapter, we give a short introduction to $\texttt{KSig}$, a $\texttt{Scikit-Learn}$-compatible Python package that implements various GPU-accelerated algorithms for computing signature kernels.
- Score: 12.111848705677138
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The signature kernel is a positive definite kernel for sequential and temporal data that has become increasingly popular in machine learning applications due to powerful theoretical guarantees, strong empirical performance, and various recently introduced scalable variations. In this chapter, we give a short introduction to $\texttt{KSig}$, a $\texttt{Scikit-Learn}$ compatible Python package that implements various GPU-accelerated algorithms for computing signature kernels and performing downstream learning tasks. We also introduce a new algorithm based on tensor sketches which gives strong performance compared to existing algorithms. The package is available at https://github.com/tgcsaba/ksig.
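To make the package's $\texttt{Scikit-Learn}$-style interface concrete, below is a minimal usage sketch assembled from the conventions in the repository's README; the class names $\texttt{ksig.kernels.SignatureKernel}$ and $\texttt{ksig.static.kernels.RBFKernel}$ are taken from that README and should be checked against the current release.

```python
import numpy as np
import ksig

# Truncation level of the signature kernel.
n_levels = 5

# A static (base) kernel on R^d lifts to a kernel on sequences in R^d;
# here we use the RBF kernel.
static_kernel = ksig.static.kernels.RBFKernel()

# Instantiate the truncated signature kernel from the static kernel.
sig_kernel = ksig.kernels.SignatureKernel(n_levels, static_kernel=static_kernel)

# A batch of 10 sequences of length 50 with 5 channels each.
X = np.random.randn(10, 50, 5)

# Kernel objects are callables: K_XX[i, j] = k_sig(X[i], X[j]).
K_XX = sig_kernel(X)            # shape (10, 10)

# Cross-kernel matrix against a second batch of sequences.
Y = np.random.randn(8, 50, 5)
K_XY = sig_kernel(X, Y)         # shape (10, 8)
```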
Related papers
- Ramp Up NTT in Record Time using GPU-Accelerated Algorithms and LLM-based Code Generation [11.120838175165986]
Homomorphic encryption (HE) is a core building block in privacy-preserving machine learning (PPML).
Many GPU-accelerated cryptographic schemes have been proposed to improve the performance of HE.
Given the powerful code generation capabilities of large language models (LLMs), we aim to explore their potential to automatically generate practical GPU-friendly algorithm code.
arXiv Detail & Related papers (2025-02-16T12:53:23Z) - Keras Sig: Efficient Path Signature Computation on GPU in Keras 3 [0.0]
Keras Sig is a high-performance Pythonic library designed to compute path signatures for deep learning applications.
Entirely built in Keras 3, $\textit{Keras Sig}$ leverages seamless integration with the most widely used deep learning backends, such as PyTorch, JAX, and TensorFlow.
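As an illustration of what a path signature computation produces (independent of Keras Sig's own API, which is not reproduced here), the following plain NumPy sketch computes the first two signature levels of a piecewise-linear path.

```python
import numpy as np

def signature_level_2(path):
    """First two signature levels of a piecewise-linear path.

    path: array of shape (length, dim).
    Returns (S1, S2) with S1 of shape (dim,) and S2 of shape (dim, dim).
    """
    inc = np.diff(path, axis=0)            # segment increments, shape (L-1, dim)
    S1 = inc.sum(axis=0)                   # level 1: the total increment
    # Level 2: cross terms over ordered pairs of segments s < t, plus the
    # within-segment contribution 0.5 * dx dx^T of each linear piece.
    cum = np.cumsum(inc, axis=0) - inc     # sum of increments strictly before t
    S2 = cum.T @ inc + 0.5 * inc.T @ inc
    return S1, S2

path = np.random.randn(50, 3)              # a length-50 path in R^3
S1, S2 = signature_level_2(path)
# Level 1 telescopes to the endpoint difference.
assert np.allclose(S1, path[-1] - path[0])
```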
arXiv Detail & Related papers (2025-01-14T22:00:01Z) - Explore as a Storm, Exploit as a Raindrop: On the Benefit of Fine-Tuning Kernel Schedulers with Coordinate Descent [48.791943145735]
We show the potential to reduce Ansor's search time while enhancing kernel quality.
We apply this approach to the first 300 kernels that Ansor generates.
This result has been replicated in 20 well-known deep-learning models.
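The coordinate-descent fine-tuning idea can be illustrated generically; the toy sketch below optimizes discrete schedule parameters one coordinate at a time against a cost function and does not use Ansor's actual interfaces.

```python
def coordinate_descent(schedule, choices, cost, max_rounds=100):
    """Greedy coordinate descent over a discrete schedule configuration.

    schedule: dict mapping parameter name -> current value.
    choices:  dict mapping parameter name -> list of candidate values.
    cost:     callable(schedule) -> measured runtime (lower is better).
    """
    best = cost(schedule)
    for _ in range(max_rounds):
        improved = False
        for param in choices:                 # optimize one coordinate at a time
            for value in choices[param]:
                trial = {**schedule, param: value}
                t = cost(trial)
                if t < best:
                    schedule, best, improved = trial, t, True
        if not improved:                      # converged to a local optimum
            break
    return schedule, best

# Toy demo: a synthetic cost model standing in for on-device measurement.
choices = {"tile_x": [1, 2, 4, 8, 16], "tile_y": [1, 2, 4, 8, 16]}
cost = lambda s: abs(s["tile_x"] - 8) + abs(s["tile_y"] - 4) + 1.0
print(coordinate_descent({"tile_x": 1, "tile_y": 1}, choices, cost))
```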
arXiv Detail & Related papers (2024-06-28T16:34:22Z) - SIP: Autotuning GPU Native Schedules via Stochastic Instruction Perturbation [0.0]
Large language models (LLMs) have become a significant workload since their appearance.
They are also computationally expensive as they have billions of parameters and are trained with massive amounts of data.
Recent works have developed dedicated kernels for LLM training and inference instead of relying on compiler-generated ones, so that hardware resources are utilized as fully as possible.
arXiv Detail & Related papers (2024-03-25T15:26:50Z) - Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z) - Sub-quadratic Algorithms for Kernel Matrices via Kernel Density Estimation [24.166833799353476]
We develop efficient reductions from $\textit{weighted edge sampling}$ on kernel graphs, $\textit{simulating random walks}$ on kernel graphs, and $\textit{weighted sampling}$ on matrices to Kernel Density Estimation.
Our reductions are the central ingredient in each of our applications and we believe they may be of independent interest.
arXiv Detail & Related papers (2022-12-01T16:42:56Z) - Efficient Dataset Distillation Using Random Feature Approximation [109.07737733329019]
We propose a novel algorithm that uses a random feature approximation (RFA) of the Neural Network Gaussian Process (NNGP) kernel.
Our algorithm provides at least a 100-fold speedup over KIP and can run on a single GPU.
Our new method, termed RFA Distillation (RFAD), performs competitively with KIP and other dataset condensation algorithms in accuracy over a range of large-scale datasets.
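Random feature approximations replace an $n \times n$ kernel matrix with an explicit low-dimensional feature map. The sketch below shows the classical random Fourier feature construction for the RBF kernel as a generic illustration; the paper itself uses features tailored to the NNGP kernel, which are not reproduced here.

```python
import numpy as np

def rbf_random_features(X, n_features, lengthscale=1.0, seed=0):
    """Random Fourier features phi with E[phi(x) @ phi(y)] = k_RBF(x, y)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / lengthscale, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = np.random.randn(200, 10)
phi = rbf_random_features(X, n_features=2000)
K_approx = phi @ phi.T                           # low-rank kernel approximation
# Exact RBF kernel (lengthscale 1) for comparison.
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-0.5 * sq)
print(np.abs(K_approx - K_exact).max())          # small for large n_features
```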
arXiv Detail & Related papers (2022-10-21T15:56:13Z) - Neural Networks can Learn Representations with Gradient Descent [68.95262816363288]
In specific regimes, neural networks trained by gradient descent behave like kernel methods.
In practice, it is known that neural networks strongly outperform their associated kernels.
arXiv Detail & Related papers (2022-06-30T09:24:02Z) - Giga-scale Kernel Matrix Vector Multiplication on GPU [9.106412307976067]
Kernel matrix vector multiplication (KMVM) is a ubiquitous operation in machine learning and scientific computing, spanning from the kernel literature to signal processing.
We propose a novel approximation procedure coined the Faster-Fast and Free Memory Method ($\text{F}^3$M) to address these scaling issues for KMVM.
We show that $\text{F}^3$M can compute a full KMVM for a billion points in under one minute on a high-end GPU, leading to a significant speed-up in comparison to existing CPU methods.
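The baseline idea behind memory-efficient KMVM is to tile the computation so the full $n \times m$ kernel matrix is never stored; $\text{F}^3$M layers far more sophisticated interpolation and tree structures on top of this. A minimal blocked (exact) version, as a sketch:

```python
import numpy as np

def kmvm_blocked(X, Y, v, block=1024, lengthscale=1.0):
    """Compute K(X, Y) @ v without materializing the full kernel matrix.

    X: (n, d), Y: (m, d), v: (m,). Peak memory is O(block * m) per tile
    instead of O(n * m) for the dense Gaussian kernel matrix.
    """
    out = np.empty(X.shape[0])
    y_sq = (Y ** 2).sum(-1)
    for start in range(0, X.shape[0], block):
        Xb = X[start:start + block]
        # Squared distances via the expansion ||x||^2 + ||y||^2 - 2 x.y.
        sq = (Xb ** 2).sum(-1)[:, None] + y_sq[None, :] - 2.0 * Xb @ Y.T
        out[start:start + block] = np.exp(-0.5 * sq / lengthscale ** 2) @ v
    return out

X, Y = np.random.randn(5000, 3), np.random.randn(4000, 3)
v = np.random.randn(4000)
u = kmvm_blocked(X, Y, v)                        # shape (5000,)
```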
arXiv Detail & Related papers (2022-02-02T15:28:15Z) - Gaussian Process Bandit Optimization with Few Batches [49.896920704012395]
We introduce a batch algorithm inspired by finite-arm bandit algorithms.
We show that it achieves the cumulative regret upper bound $O^\ast(\sqrt{T\gamma_T})$ using $O(\log\log T)$ batches within time horizon $T$.
In addition, we propose a modified version of our algorithm, and characterize how the regret is impacted by the number of batches.
arXiv Detail & Related papers (2021-10-15T00:54:04Z) - Fast Sketching of Polynomial Kernels of Polynomial Degree [61.83993156683605]
The polynomial kernel is especially important as other kernels can often be approximated by the polynomial kernel via a Taylor series expansion.
Recent techniques in oblivious sketching reduce the dependence of the running time on the degree $q$ of the kernel.
We give a new sketch which greatly improves upon this running time, by removing the dependence on $q$ in the leading order term.
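A standard instance of such sketches is TensorSketch, which compresses the degree-$q$ tensor power of a vector via the FFT of $q$ independent CountSketches. The sketch below is a plain NumPy illustration of that classical construction, not the improved algorithm from the paper.

```python
import numpy as np

def tensor_sketch(X, q, m, seed=0):
    """TensorSketch of the degree-q tensor power into m dimensions.

    Approximates the polynomial kernel: sketch(x) @ sketch(y) ~ (x @ y)**q.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    fft_prod = np.ones((n, m), dtype=complex)
    for _ in range(q):
        h = rng.integers(0, m, size=d)           # CountSketch hash buckets
        s = rng.choice([-1.0, 1.0], size=d)      # random signs
        cs = np.zeros((n, m))
        np.add.at(cs, (slice(None), h), X * s)   # scatter-add signed columns
        fft_prod *= np.fft.fft(cs, axis=1)       # FFT turns convolution into product
    return np.real(np.fft.ifft(fft_prod, axis=1))

X = np.random.randn(100, 20) / np.sqrt(20)       # roughly unit-norm rows
Z = tensor_sketch(X, q=3, m=4096)
K_approx = Z @ Z.T
K_exact = (X @ X.T) ** 3                         # degree-3 polynomial kernel
print(np.abs(K_approx - K_exact).max())          # small for large m
```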
arXiv Detail & Related papers (2021-08-21T02:14:55Z) - Kernel methods through the roof: handling billions of points efficiently [94.31450736250918]
Kernel methods provide an elegant and principled approach to nonparametric learning, but so far could hardly be used in large-scale problems.
Recent advances have shown the benefits of a number of algorithmic ideas, for example combining optimization, numerical linear algebra and random projections.
Here, we push these efforts further to develop and test a solver that takes full advantage of GPU hardware.
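The core building block of such large-scale solvers is the Nyström approximation, which replaces the kernel matrix with a low-rank factor built from a few inducing points. Solvers of this kind additionally combine it with preconditioned conjugate gradients and GPU kernels; the following minimal sketch shows only the Nyström core.

```python
import numpy as np

def nystrom_features(X, centers, lengthscale=1.0, jitter=1e-8):
    """Nystrom feature map phi with phi(X) @ phi(X).T ~ K(X, X)."""
    def k(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * sq / lengthscale ** 2)
    K_mm = k(centers, centers) + jitter * np.eye(len(centers))
    L = np.linalg.cholesky(K_mm)                 # K_mm = L @ L.T
    # phi(X) = K(X, centers) @ L^{-T}, so phi phi^T = K_nm K_mm^{-1} K_mn.
    return np.linalg.solve(L, k(centers, X)).T

X = np.random.randn(2000, 5)
idx = np.random.default_rng(0).choice(2000, size=100, replace=False)
phi = nystrom_features(X, X[idx])                # shape (2000, 100)
K_approx = phi @ phi.T                           # rank-100 approximation of K
```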
arXiv Detail & Related papers (2020-06-18T08:16:25Z) - Streaming Coresets for Symmetric Tensor Factorization [9.181791777532608]
We show how to factorize tensors efficiently in the streaming setting.
We introduce two novel algorithmic techniques: online filtering and kernelization.
We show applications of our algorithms in learning single topic modeling.
arXiv Detail & Related papers (2020-06-01T19:55:34Z) - Kernel Operations on the GPU, with Autodiff, without Memory Overflows [5.669790037378094]
The KeOps library provides a fast and memory-efficient GPU support for tensors whose entries are given by a mathematical formula.
KeOps alleviates the major bottleneck of tensor-centric libraries for kernel and geometric applications: memory consumption.
KeOps combines optimized C++/CUDA schemes with binders for high-level languages: Python (NumPy and PyTorch), MATLAB, and R.
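A short example in the style of the KeOps tutorials (exact call signatures should be checked against the current pykeops documentation): a Gaussian kernel matrix-vector product in which the $N \times M$ kernel matrix is represented symbolically and never materialized.

```python
import torch
from pykeops.torch import LazyTensor

x = torch.randn(100000, 3)                       # N source points
y = torch.randn(200000, 3)                       # M target points
v = torch.randn(200000, 1)

x_i = LazyTensor(x[:, None, :])                  # symbolic (N, 1, 3) tensor
y_j = LazyTensor(y[None, :, :])                  # symbolic (1, M, 3) tensor
D_ij = ((x_i - y_j) ** 2).sum(-1)                # symbolic squared distances
K_ij = (-D_ij).exp()                             # symbolic Gaussian kernel

# The reduction below is compiled to a single fused kernel; the (N, M)
# matrix K_ij never exists in memory.
a = K_ij @ v                                     # shape (N, 1)
```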
arXiv Detail & Related papers (2020-03-27T08:54:10Z) - Signatory: differentiable computations of the signature and logsignature transforms, on both CPU and GPU [13.503274710499971]
Signatory is a library for calculating and performing functionality related to the signature and logsignature transforms.
It implements new features not available in previous libraries, such as efficient precomputation strategies.
The library operates as a Python wrapper around C++, and is compatible with the PyTorch ecosystem.
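A minimal example of the transforms the library provides, following the tensor layout (batch, stream, channels) documented for Signatory; behavior should be confirmed against the library's own documentation.

```python
import torch
import signatory

# A batch of 10 paths, each of length 50 in R^3.
path = torch.randn(10, 50, 3)

sig = signatory.signature(path, depth=4)         # signature truncated at level 4
# Output dimension per path is 3 + 3**2 + 3**3 + 3**4 = 120.
print(sig.shape)                                 # torch.Size([10, 120])

logsig = signatory.logsignature(path, depth=4)   # the logsignature transform
```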
arXiv Detail & Related papers (2020-01-03T03:15:58Z)