Scalable Signature Kernel Computations for Long Time Series via Local Neumann Series Expansions
- URL: http://arxiv.org/abs/2502.20392v1
- Date: Thu, 27 Feb 2025 18:59:20 GMT
- Title: Scalable Signature Kernel Computations for Long Time Series via Local Neumann Series Expansions
- Authors: Matthew Tamayo-Rios, Alexander Schell, Rima Alaifari
- Abstract summary: The signature kernel is a recent state-of-the-art tool for analyzing high-dimensional sequential data. We present a novel method for efficiently computing the signature kernel of long, high-dimensional time series. This method achieves substantial performance improvements over state-of-the-art approaches for computing the signature kernel.
- Score: 46.685771141109306
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The signature kernel is a recent state-of-the-art tool for analyzing high-dimensional sequential data, valued for its theoretical guarantees and strong empirical performance. In this paper, we present a novel method for efficiently computing the signature kernel of long, high-dimensional time series via dynamically truncated recursive local power series expansions. Building on the characterization of the signature kernel as the solution of a Goursat PDE, our approach employs tilewise Neumann-series expansions to derive rapidly converging power series approximations of the signature kernel that are locally defined on subdomains and propagated iteratively across the entire domain of the Goursat solution by exploiting the geometry of the time series. Algorithmically, this involves solving a system of interdependent local Goursat PDEs by recursively propagating boundary conditions along a directed graph via topological ordering, with dynamic truncation adaptively terminating each local power series expansion when coefficients fall below machine precision, striking an effective balance between computational cost and accuracy. This method achieves substantial performance improvements over state-of-the-art approaches for computing the signature kernel, providing (a) adjustable and superior accuracy, even for time series with very high roughness; (b) drastically reduced memory requirements; and (c) scalability to efficiently handle very long time series (e.g., with up to half a million points or more) on a single GPU. These advantages make our method particularly well-suited for rough-path-assisted machine learning, financial modeling, and signal processing applications that involve very long and highly volatile data.
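To make the mechanics above concrete, here is a minimal, self-contained sketch of the core idea under simplifying assumptions: the two paths are piecewise linear, each pair of increments spans one unit tile of the Goursat domain, each local solution is a truncated bivariate power series built by Neumann iteration, and the series is cut off once its terms fall below a tolerance (the dynamic truncation). The names (`tile_neumann`, `signature_kernel_sketch`), the fixed maximum degree, and the sequential row-by-row sweep are illustrative assumptions; the paper's implementation handles tile geometry, GPU batching over anti-diagonals, and error control far more carefully.

```python
import numpy as np

def tile_neumann(C, f, g, deg=16, tol=np.finfo(float).eps):
    """Solve d^2 k/(du dv) = C*k on the unit tile [0,1]^2 by a Neumann series.

    f, g: power-series coefficients of the boundary data k(u,0) and k(0,v),
    sharing the corner value f[0] == g[0].  Returns the coefficient array a
    with k(u,v) = sum_{m,n} a[m,n] * u^m * v^n, truncated at degree `deg`.
    """
    a = np.zeros((deg + 1, deg + 1))
    a[: len(f), 0] = f                       # bottom boundary k(u, 0)
    a[0, : len(g)] = g                       # left boundary k(0, v)
    term, total = a.copy(), a.copy()
    shift = np.outer(np.arange(1, deg + 1), np.arange(1, deg + 1))
    for _ in range(deg):
        # One Neumann step: (A h)(u,v) = C * int_0^u int_0^v h; integrating
        # u^m v^n gives u^{m+1} v^{n+1} / ((m+1)(n+1)), a diagonal shift.
        new = np.zeros_like(term)
        new[1:, 1:] = C * term[:-1, :-1] / shift
        total += new
        term = new
        if np.abs(term).max() < tol:         # dynamic truncation
            break
    return total

def signature_kernel_sketch(X, Y, deg=12):
    """Sweep the tiles of the Goursat domain in topological order.

    Sequential row-by-row sweep shown for clarity; tiles on the same
    anti-diagonal are independent and could be batched on a GPU.
    """
    dX, dY = np.diff(X, axis=0), np.diff(Y, axis=0)
    top = [np.array([1.0])] * len(dY)        # k = 1 on the bottom axis
    for i in range(len(dX)):
        right = np.array([1.0])              # k = 1 on the left axis
        for j in range(len(dY)):
            C = float(dX[i] @ dY[j])         # <delta x_i, delta y_j>
            a = tile_neumann(C, top[j], right, deg)
            top[j] = a.sum(axis=1)           # top edge: next row's bottom data
            right = a.sum(axis=0)            # right edge: next tile's left data
    return top[-1].sum()                     # value at the top-right corner

# Sanity check: for two single linear segments the kernel has the closed
# form sum_n <dx, dy>^n / (n!)^2, a Bessel-type series.
X = np.array([[0.0, 0.0], [1.0, 0.5]])
Y = np.array([[0.0, 0.0], [0.5, 1.0]])
print(signature_kernel_sketch(X, Y))
```

The topological ordering from the abstract appears here as the row-by-row sweep: each tile consumes boundary series from its left and bottom neighbours and emits series for its right and top edges.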
Related papers
- Numerical Schemes for Signature Kernels [0.5461938536945723]
Signature kernels have emerged as a powerful tool within kernel methods for sequential data. We introduce two advanced numerical schemes that leverage representations of boundary conditions through either approximation or interpolation techniques. Our algorithms can be GPU-parallelized to reduce computational complexity from quadratic to linear in the length of the input sequences. (A minimal sketch of the standard finite-difference baseline follows this entry.)
arXiv Detail & Related papers (2025-02-12T15:04:23Z)
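This entry and the next both concern numerical schemes for the signature-kernel Goursat PDE. For orientation, here is a hedged sketch of the standard first-order explicit finite-difference scheme widely used as a baseline in this literature, not the schemes proposed in these papers; the name `sigkernel_fd` is illustrative, and the dyadic grid refinement used in practice for accuracy is omitted.

```python
import numpy as np

def sigkernel_fd(X, Y):
    """Standard explicit finite-difference scheme for the Goursat PDE
    k_{st} = <x'(s), y'(t)> k, with k = 1 on both axes.  Cost is O(L*M)
    in the sequence lengths; anti-diagonals of the grid can be updated
    in parallel, which is what GPU implementations exploit."""
    dX, dY = np.diff(X, axis=0), np.diff(Y, axis=0)
    inc = dX @ dY.T                          # pairwise increment inner products
    K = np.ones((len(X), len(Y)))
    for i in range(inc.shape[0]):
        for j in range(inc.shape[1]):
            # First-order update on grid cell (i, j).
            K[i + 1, j + 1] = K[i + 1, j] + K[i, j + 1] + K[i, j] * (inc[i, j] - 1.0)
    return K[-1, -1]

# Two linear paths sampled at 50 points each (illustrative data).
X = np.linspace(0, 1, 50)[:, None] * np.array([1.0, 2.0])
Y = np.linspace(0, 1, 50)[:, None] * np.array([2.0, 1.0])
print(sigkernel_fd(X, Y))
```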
- A High Order Solver for Signature Kernels [5.899263357689845]
Signature kernels are at the core of several machine learning algorithms for analysing time series.
We introduce new algorithms for the numerical approximation of signature kernels.
arXiv Detail & Related papers (2024-04-01T23:09:52Z)
- Random Fourier Signature Features [8.766411351797885]
Tensor algebras give rise to one of the most powerful measures of similarity for sequences of arbitrary length, called the signature kernel.
Previous algorithms to compute the signature kernel scale quadratically in both the length and the number of the sequences.
We develop a random Fourier feature-based acceleration of the signature kernel acting on the inherently non-Euclidean domain of sequences. (A minimal sketch of the classic random Fourier feature construction follows this entry.)
arXiv Detail & Related papers (2023-11-20T22:08:17Z)
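As background for the entry above, here is a hedged sketch of the classic random Fourier feature construction of Rahimi and Recht for a stationary kernel on vectors; the paper's contribution is to lift this idea to the non-Euclidean domain of sequences, which this snippet does not attempt. The name `rff` and all hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff(X, D=256, lengthscale=1.0):
    """Random Fourier features for the RBF kernel: k(x, x') ~ phi(x) @ phi(x').
    Frequencies are sampled from the kernel's spectral density, a Gaussian
    with standard deviation 1/lengthscale."""
    d = X.shape[1]
    W = rng.normal(scale=1.0 / lengthscale, size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

X = rng.normal(size=(5, 3))
phi = rff(X)
K_approx = phi @ phi.T   # approximates exp(-||x - x'||^2 / (2 * lengthscale^2))
```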
- GloptiNets: Scalable Non-Convex Optimization with Certificates [61.50835040805378]
We present a novel approach to non-convex optimization with certificates, which handles smooth functions on the hypercube or on the torus.
By exploiting the regularity of the target function, intrinsic in the decay of its Fourier spectrum, the method obtains precise certificates while leveraging the advanced and powerful computational techniques developed for neural networks.
arXiv Detail & Related papers (2023-06-26T09:42:59Z)
- Numerically Stable Sparse Gaussian Processes via Minimum Separation using Cover Trees [57.67528738886731]
We study the numerical stability of scalable sparse approximations based on inducing points.
For low-dimensional tasks such as geospatial modeling, we propose an automated method for computing inducing points satisfying these conditions.
arXiv Detail & Related papers (2022-10-14T15:20:17Z)
- FaDIn: Fast Discretized Inference for Hawkes Processes with General Parametric Kernels [82.53569355337586]
This work offers an efficient solution to temporal point processes inference using general parametric kernels with finite support.
The method's effectiveness is evaluated by modeling the occurrence of stimuli-induced patterns from brain signals recorded with magnetoencephalography (MEG).
Results show that the proposed approach yields better estimates of pattern latency than the state-of-the-art. (A minimal sketch of a Hawkes intensity with a finite-support kernel follows this entry.)
arXiv Detail & Related papers (2022-10-10T12:35:02Z)
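For the FaDIn entry above: a hedged sketch of the kind of model being fit, a univariate Hawkes intensity with a finite-support (here triangular) parametric excitation kernel. The kernel shape, parameters, and name `hawkes_intensity` are illustrative assumptions, not the paper's.

```python
import numpy as np

def hawkes_intensity(t, events, mu=0.2, alpha=0.8, width=1.0):
    """Intensity lambda(t) = mu + sum_{t_i < t} phi(t - t_i) of a univariate
    Hawkes process with a triangular kernel phi supported on (0, width) and
    integrating to alpha (the branching ratio)."""
    lags = t - np.asarray(events)
    lags = lags[(lags > 0) & (lags < width)]          # finite support: only recent events count
    phi = alpha * (2.0 / width) * (1.0 - lags / width)
    return mu + phi.sum()

print(hawkes_intensity(2.3, [0.5, 1.9, 2.1]))
```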
- Distributed stochastic optimization with large delays [59.95552973784946]
One of the most widely used methods for solving large-scale optimization problems is distributed asynchronous stochastic gradient descent (DASGD).
We show that DASGD converges to a global optimum under mild assumptions on the gradient delays. (A minimal single-process simulation of delayed updates follows this entry.)
arXiv Detail & Related papers (2021-07-06T21:59:49Z)
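For the DASGD entry above: a minimal single-process simulation of the delayed-gradient update x_{t+1} = x_t - lr * grad f(x_{t - d_t}), where the delay d_t mimics an asynchronous worker reading a stale iterate. The quadratic objective, step size, and delay distribution are illustrative, not from the paper.

```python
import numpy as np

def dasgd_step(history, grad_fn, lr, delay):
    """One delayed update: the gradient is evaluated at a stale iterate
    x_{t - delay}, as if computed by a lagging asynchronous worker."""
    stale = history[max(0, len(history) - 1 - delay)]
    history.append(history[-1] - lr * grad_fn(stale))

# Toy quadratic f(x) = ||x||^2 / 2 (gradient is x), random bounded delays.
rng = np.random.default_rng(0)
hist = [np.array([5.0, -3.0])]
for t in range(200):
    dasgd_step(hist, lambda x: x, lr=0.05, delay=int(rng.integers(0, 10)))
print(hist[-1])   # approaches the optimum at the origin despite stale gradients
```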
- Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
- Sparse Algorithms for Markovian Gaussian Processes [18.999495374836584]
Sparse Markovian Gaussian processes combine the use of inducing variables with efficient Kalman-filter-like recursions.
We derive a general site-based approach to approximate the non-Gaussian likelihood with local Gaussian terms, called sites.
Our approach results in a suite of novel sparse extensions to algorithms from both the machine learning and signal processing literatures, including variational inference, expectation propagation, and the classical nonlinear Kalman smoothers.
The derived methods are suited to spatio-temporal data, where the model has separate inducing points in both time and space. (A minimal Kalman-filter recursion for a Markovian GP follows this entry.)
arXiv Detail & Related papers (2021-03-19T09:50:53Z)
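For the sparse Markovian GP entry above: a hedged sketch of the O(N) Kalman-filter recursion that state-space (Markovian) GPs build on, shown for the Matern-1/2 (Ornstein-Uhlenbeck) kernel with Gaussian observations; the paper's site-based approximations for non-Gaussian likelihoods are not reproduced here, and all names and hyperparameters are illustrative.

```python
import numpy as np

def ou_kalman_filter(t, y, ell=1.0, sigma2=1.0, noise=0.1):
    """GP regression with the Matern-1/2 kernel k(tau) = sigma2*exp(-|tau|/ell)
    in linear time via its state-space form: transition A = exp(-dt/ell) and
    process noise Q = sigma2*(1 - A^2) between consecutive inputs."""
    m, P = 0.0, sigma2                       # stationary prior at the first input
    means = []
    for k in range(len(t)):
        if k > 0:                            # predict through the SDE transition
            A = np.exp(-(t[k] - t[k - 1]) / ell)
            m, P = A * m, A * A * P + sigma2 * (1 - A * A)
        S = P + noise                        # update with observation y[k]
        K = P / S
        m, P = m + K * (y[k] - m), (1 - K) * P
        means.append(m)
    return np.array(means)

t = np.linspace(0, 10, 500)
y = np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=500)
post = ou_kalman_filter(t, y)                # filtered posterior means, O(N) cost
```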