The Signature Kernel is the solution of a Goursat PDE
- URL: http://arxiv.org/abs/2006.14794v9
- Date: Sat, 20 Mar 2021 19:58:36 GMT
- Title: The Signature Kernel is the solution of a Goursat PDE
- Authors: Cristopher Salvi, Thomas Cass, James Foster, Terry Lyons, Weixin Yang
- Abstract summary: We show that for continuously differentiable paths, the signature kernel solves a hyperbolic PDE, and we identify the connection with a class of differential equations known in the literature as Goursat problems.
This Goursat PDE depends only on the increments of the input sequences, does not require the explicit computation of signatures, and can be solved efficiently using state-of-the-art hyperbolic PDE numerical solvers.
We empirically demonstrate the effectiveness of our PDE kernel as a machine learning tool in various applications dealing with sequential data.
- Score: 11.107838656561766
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, there has been an increased interest in the development of kernel
methods for learning with sequential data. The signature kernel is a learning
tool with potential to handle irregularly sampled, multivariate time series. In
"Kernels for sequentially ordered data" the authors introduced a kernel trick
for the truncated version of this kernel avoiding the exponential complexity
that would have been involved in a direct computation. Here we show that for
continuously differentiable paths, the signature kernel solves a hyperbolic PDE
and recognize the connection with a well known class of differential equations
known in the literature as Goursat problems. This Goursat PDE only depends on
the increments of the input sequences, does not require the explicit
computation of signatures and can be solved efficiently using
state-of-the-art hyperbolic PDE numerical solvers, giving a kernel trick for the
untruncated signature kernel, with the same raw complexity as the method from
"Kernels for sequentially ordered data", but with the advantage that the PDE
numerical scheme is well suited for GPU parallelization, which effectively
reduces the complexity by a full order of magnitude in the length of the input
sequences. In addition, we extend the previous analysis to the space of
geometric rough paths and establish, using classical results from rough path
theory, that the rough version of the signature kernel solves a rough integral
equation analogous to the aforementioned Goursat PDE. Finally, we empirically
demonstrate the effectiveness of our PDE kernel as a machine learning tool in
various machine learning applications dealing with sequential data. We release
the sigkernel library, publicly available at
https://github.com/crispitagorico/sigkernel.
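Concretely, the kernel trick described above can be sketched in a few lines: discretize the Goursat problem d²k/dsdt = ⟨ẋ(s), ẏ(t)⟩ k(s,t), with k = 1 on the axes, by integrating the PDE over each grid cell induced by the sample points. The snippet below is a minimal illustration under these assumptions, not the sigkernel library's API; the function name and the corner-averaging update rule are chosen here for exposition, and the input sequences are treated as piecewise-linear paths.

```python
import numpy as np

def signature_kernel_pde(x, y):
    """Approximate the signature kernel k(x, y) by solving the Goursat PDE
        d^2 k / ds dt = <x'(s), y'(t)> k(s, t),   k(0, .) = k(., 0) = 1,
    with an explicit finite-difference scheme on the grid induced by the
    sample points of x and y.

    x, y: arrays of shape (len_x, dim) and (len_y, dim).
    """
    dx = np.diff(x, axis=0)        # increments of x, shape (len_x - 1, dim)
    dy = np.diff(y, axis=0)        # increments of y, shape (len_y - 1, dim)
    inc = dx @ dy.T                # <dx_i, dy_j> for every grid cell

    K = np.ones((len(x), len(y)))  # Goursat boundary data: k = 1 on the axes
    for i in range(len(x) - 1):
        for j in range(len(y) - 1):
            # Integrate the PDE over one grid cell, averaging k over the
            # two known adjacent corners of the cell.
            K[i + 1, j + 1] = (K[i + 1, j] + K[i, j + 1] - K[i, j]
                               + 0.5 * inc[i, j] * (K[i + 1, j] + K[i, j + 1]))
    return K[-1, -1]
```

As a sanity check, for two identical 1-d linear paths on [0, 1] the exact signature kernel is the modified Bessel value I₀(2) ≈ 2.2796, which the scheme approaches under grid refinement. Note also that each update reads only the left, lower, and lower-left neighbours, so all cells on a common antidiagonal are independent; sweeping the grid antidiagonal by antidiagonal is what makes the scheme well suited to the GPU parallelization over sequence length mentioned in the abstract.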
Related papers
- Numerical Schemes for Signature Kernels [0.5461938536945723]
Signature kernels have emerged as a powerful tool within kernel methods for sequential data.
We introduce two advanced numerical schemes that leverage representations of the boundary conditions through either approximation or interpolation techniques.
Our algorithms can be GPU-parallelized to reduce computational complexity from quadratic to linear in the length of the input sequences.
arXiv Detail & Related papers (2025-02-12T15:04:23Z)
- A High Order Solver for Signature Kernels [5.899263357689845]
Signature kernels are at the core of several machine learning algorithms for analysing time series.
We introduce new algorithms for the numerical approximation of signature kernels.
arXiv Detail & Related papers (2024-04-01T23:09:52Z)
- Sparse Cholesky Factorization for Solving Nonlinear PDEs via Gaussian Processes [3.750429354590631]
We present a sparse Cholesky factorization algorithm for dense kernel matrices.
We numerically illustrate our algorithm's near-linear space/time complexity for a broad class of nonlinear PDEs.
arXiv Detail & Related papers (2023-04-03T18:35:28Z)
- Structural Kernel Search via Bayesian Optimization and Symbolical Optimal Transport [5.1672267755831705]
For Gaussian processes, selecting the kernel is a crucial task, often done manually by the expert.
We propose a novel, efficient search method through a general, structured kernel space.
arXiv Detail & Related papers (2022-10-21T09:30:21Z)
- FaDIn: Fast Discretized Inference for Hawkes Processes with General Parametric Kernels [82.53569355337586]
This work offers an efficient solution to temporal point processes inference using general parametric kernels with finite support.
The method's effectiveness is evaluated by modeling the occurrence of stimuli-induced patterns from brain signals recorded with magnetoencephalography (MEG).
Results show that the proposed approach yields improved estimation of pattern latency compared to the state-of-the-art.
arXiv Detail & Related papers (2022-10-10T12:35:02Z)
- Fast Sketching of Polynomial Kernels of Polynomial Degree [61.83993156683605]
The polynomial kernel is especially important as other kernels can often be approximated by it via a Taylor series expansion.
Recent sketching techniques reduce the dependence of the running time on the degree $q$ of the kernel.
We give a new sketch which greatly improves upon this running time, by removing the dependence on $q$ in the leading order term.
arXiv Detail & Related papers (2021-08-21T02:14:55Z)
- Distributed stochastic optimization with large delays [59.95552973784946]
One of the most widely used methods for solving large-scale optimization problems is distributed asynchronous stochastic gradient descent (DASGD).
We show that DASGD converges to a global optimum under the same delay assumptions.
arXiv Detail & Related papers (2021-07-06T21:59:49Z)
- Kernel Identification Through Transformers [54.3795894579111]
Kernel selection plays a central role in determining the performance of Gaussian Process (GP) models.
This work addresses the challenge of constructing custom kernel functions for high-dimensional GP regression models.
We introduce a novel approach named KITT: Kernel Identification Through Transformers.
arXiv Detail & Related papers (2021-06-15T14:32:38Z)
- SigGPDE: Scaling Sparse Gaussian Processes on Sequential Data [16.463077353773603]
We develop SigGPDE, a new scalable sparse variational inference framework for Gaussian Processes (GPs) on sequential data.
We show that the gradients of the GP signature kernel are solutions of a hyperbolic partial differential equation (PDE).
This theoretical insight allows us to build an efficient back-propagation algorithm to optimize the ELBO.
arXiv Detail & Related papers (2021-05-10T09:10:17Z)
- Kernel-Based Reinforcement Learning: A Finite-Time Analysis [53.47210316424326]
We introduce Kernel-UCBVI, a model-based optimistic algorithm that leverages the smoothness of the MDP and a non-parametric kernel estimator of the rewards.
We empirically validate our approach in continuous MDPs with sparse rewards.
arXiv Detail & Related papers (2020-04-12T12:23:46Z)
- PolyScientist: Automatic Loop Transformations Combined with Microkernels for Optimization of Deep Learning Primitives [55.79741270235602]
We develop a hybrid approach to building deep learning kernels.
We use the advanced polyhedral technology to automatically tune the outer loops for performance.
arXiv Detail & Related papers (2020-02-06T08:02:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.