Convolutional Shapelet Transform: A new approach for time series
shapelets
- URL: http://arxiv.org/abs/2109.13514v1
- Date: Tue, 28 Sep 2021 06:30:42 GMT
- Title: Convolutional Shapelet Transform: A new approach for time series
shapelets
- Authors: Antoine Guillaume, Christel Vrain, Elloumi Wael
- Abstract summary: We present a new formulation of time series shapelets including the notion of dilation, and a shapelet extraction method based on convolutional kernels.
We show that our method improves on the state-of-the-art for shapelet algorithms, and we show that it can be used to interpret results from convolutional kernels.
- Score: 1.160208922584163
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Shapelet-based algorithms are widely used for time series classification
because of their ease of interpretation, but they are currently outperformed,
notably by methods using convolutional kernels, capable of reaching
state-of-the-art performance while being highly scalable. We present a new
formulation of time series shapelets including the notion of dilation, and a
shapelet extraction method based on convolutional kernels, which is able to
target the discriminant information identified by convolutional kernels.
Experiments performed on 108 datasets show that our method improves on the
state-of-the-art for shapelet algorithms, and we show that it can be used to
interpret results from convolutional kernels.
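To make the dilation idea concrete, here is a minimal sketch (not the authors' actual extraction pipeline) of how a dilated shapelet can be matched against a series: a shapelet of length l with dilation d spans (l-1)*d + 1 time steps and is compared against every d-th point of each subsequence. The function name and interface are illustrative assumptions.

```python
import numpy as np

def dilated_shapelet_distance(series, shapelet, dilation):
    """Minimum squared distance between a dilated shapelet and all
    aligned subsequences of a 1-D time series.

    A shapelet of length l with dilation d spans (l-1)*d + 1 steps,
    comparing its values against every d-th point of the window.
    """
    l = len(shapelet)
    span = (l - 1) * dilation + 1
    best = np.inf
    for start in range(len(series) - span + 1):
        # Pick every d-th point of the window, then compare pointwise.
        window = series[start:start + span:dilation]
        dist = np.sum((window - shapelet) ** 2)
        best = min(best, dist)
    return best
```

With dilation 1 this reduces to the classical shapelet distance; larger dilations let a short shapelet capture patterns spread over a longer time span, which is how convolutional kernels cover multiple scales.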
Related papers
- Time-series attribution maps with regularized contrastive learning [1.5503410315996757]
Gradient-based attribution methods aim to explain decisions of deep learning models but so far lack identifiability guarantees.
Here, we propose a method to generate attribution maps with identifiability guarantees by developing a regularized contrastive learning algorithm, xCEBRA, trained on time-series data.
We show theoretically that xCEBRA has favorable properties for identifying the Jacobian matrix of the data generating process.
arXiv Detail & Related papers (2025-02-17T18:34:25Z) - Inference-Time Alignment in Diffusion Models with Reward-Guided Generation: Tutorial and Review [59.856222854472605]
This tutorial provides an in-depth guide on inference-time guidance and alignment methods for optimizing downstream reward functions in diffusion models.
Practical applications in fields such as biology often require sample generation that maximizes specific metrics.
We discuss (1) fine-tuning methods combined with inference-time techniques, (2) inference-time algorithms based on search algorithms such as Monte Carlo tree search, and (3) connections between inference-time algorithms in language models and diffusion models.
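The simplest inference-time alignment strategy the tutorial's taxonomy builds on is best-of-N sampling: draw several candidates and keep the one with the highest reward, which search methods such as Monte Carlo tree search then refine. A minimal sketch, where `sample` and `reward` are hypothetical stand-ins for a generative sampler and a downstream reward function:

```python
import random

def best_of_n(sample, reward, n=8, seed=0):
    """Draw n candidates from a generative sampler and return the one
    with the highest downstream reward (best-of-N alignment)."""
    rng = random.Random(seed)
    candidates = [sample(rng) for _ in range(n)]
    return max(candidates, key=reward)
```

Fine-tuning methods instead bake the reward into the model's weights; best-of-N trades extra inference compute for leaving the base model untouched.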
arXiv Detail & Related papers (2025-01-16T17:37:35Z) - Correlating Time Series with Interpretable Convolutional Kernels [18.77493756204539]
This study addresses the problem of convolutional kernel learning in time series data.
We use tensor computations to recast the convolutional kernel learning problem as a tensor-based formulation.
This study lays an insightful foundation for automatically learning convolutional kernels from time series data.
arXiv Detail & Related papers (2024-09-02T16:29:21Z) - Fast and Scalable Multi-Kernel Encoder Classifier [4.178980693837599]
The proposed method facilitates fast and scalable kernel matrix embedding, and seamlessly integrates multiple kernels to enhance the learning process.
Our theoretical analysis offers a population-level characterization of this approach using random variables.
arXiv Detail & Related papers (2024-06-04T10:34:40Z) - Stochastic Gradient Descent for Gaussian Processes Done Right [86.83678041846971]
We show that when done right -- by which we mean using specific insights from the optimisation and kernel communities -- gradient descent is highly effective.
We introduce a stochastic dual descent algorithm, explain its design in an intuitive manner, and illustrate the design choices.
Our method places Gaussian process regression on par with state-of-the-art graph neural networks for molecular binding affinity prediction.
arXiv Detail & Related papers (2023-10-31T16:15:13Z) - NeuralMeshing: Differentiable Meshing of Implicit Neural Representations [63.18340058854517]
We propose a novel differentiable meshing algorithm for extracting surface meshes from neural implicit representations.
Our method produces meshes with regular tessellation patterns and fewer triangle faces compared to existing methods.
arXiv Detail & Related papers (2022-10-05T16:52:25Z) - Nonparametric Factor Trajectory Learning for Dynamic Tensor
Decomposition [20.55025648415664]
We propose NONparametric FActor Trajectory learning for dynamic tensor decomposition (NONFAT).
We use a second-level GP to sample the entry values and to capture the temporal relationship between the entities.
We have shown the advantage of our method in several real-world applications.
arXiv Detail & Related papers (2022-07-06T05:33:00Z) - Sparse Algorithms for Markovian Gaussian Processes [18.999495374836584]
Sparse Markovian processes combine the use of inducing variables with efficient Kalman-filter-like recursions.
We derive a general site-based approach to approximate the non-Gaussian likelihood with local Gaussian terms, called sites.
Our approach results in a suite of novel sparse extensions to algorithms from both the machine learning and signal processing literatures, including variational inference, expectation propagation, and the classical nonlinear Kalman smoothers.
The derived methods are suited to spatio-temporal data, where the model has separate inducing points in both time and space.
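The Kalman-filter recursion that these sparse Markovian-GP methods build on can be sketched in its simplest scalar form below; the state and observation model (x_t = a*x_{t-1} + noise with variance q, y_t = c*x_t + noise with variance r) and the parameter names are generic textbook assumptions, not the paper's notation.

```python
def kalman_filter(ys, a, q, c, r, m0, p0):
    """Scalar Kalman filter: linear-time filtering for a Markovian
    state-space model, returning the filtered posterior means."""
    m, p = m0, p0
    means = []
    for y in ys:
        # Predict: propagate mean and variance through the dynamics.
        m, p = a * m, a * a * p + q
        # Update: fold in the observation via the Kalman gain.
        k = p * c / (c * c * p + r)
        m = m + k * (y - c * m)
        p = (1.0 - k * c) * p
        means.append(m)
    return means
```

The sites mentioned in the summary replace the exact Gaussian update step with local Gaussian approximations, so this same O(T) recursion can be reused under non-Gaussian likelihoods.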
arXiv Detail & Related papers (2021-03-19T09:50:53Z) - Learned Factor Graphs for Inference from Stationary Time Sequences [107.63351413549992]
We propose a framework that combines model-based algorithms and data-driven ML tools for stationary time sequences.
Neural networks are developed to separately learn specific components of a factor graph describing the distribution of the time sequence.
We present an inference algorithm based on learned stationary factor graphs, which learns to implement the sum-product scheme from labeled data.
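The sum-product scheme the learned inference algorithm implements can be illustrated on the simplest case, a chain-structured factor graph, where it reduces to the forward-backward recursion. This is a generic textbook sketch, not the paper's learned variant; `unary` holds per-step node potentials and `pairwise` the transition potentials.

```python
import numpy as np

def chain_marginals(unary, pairwise):
    """Sum-product message passing on a chain factor graph.

    unary:    (T, K) nonnegative node potentials
    pairwise: (K, K) transition potentials psi(x_{t-1}, x_t)
    Returns (T, K) normalized per-node marginals.
    """
    T, K = unary.shape
    fwd = np.zeros((T, K))
    bwd = np.zeros((T, K))
    # Forward messages: accumulate evidence from the left.
    fwd[0] = unary[0]
    for t in range(1, T):
        fwd[t] = unary[t] * (fwd[t - 1] @ pairwise)
    # Backward messages: accumulate evidence from the right.
    bwd[-1] = 1.0
    for t in range(T - 2, -1, -1):
        bwd[t] = pairwise @ (unary[t + 1] * bwd[t + 1])
    marg = fwd * bwd
    return marg / marg.sum(axis=1, keepdims=True)
```

In the learned-factor-graph setting, the potentials are produced by neural networks trained on labeled data, while this message-passing structure is kept fixed.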
arXiv Detail & Related papers (2020-06-05T07:06:19Z) - FKAConv: Feature-Kernel Alignment for Point Cloud Convolution [75.85619090748939]
We provide a formulation to relate and analyze a number of point convolution methods.
We also propose our own convolution variant, which separates the estimation of geometry-less kernel weights from their alignment to the spatial support of the features.
We show competitive results on classification and semantic segmentation benchmarks.
arXiv Detail & Related papers (2020-04-09T10:12:45Z) - Interpolation Technique to Speed Up Gradients Propagation in Neural ODEs [71.26657499537366]
We propose a simple interpolation-based method for the efficient approximation of gradients in neural ODE models.
We compare it with the reverse dynamic method to train neural ODEs on classification, density estimation, and inference approximation tasks.
arXiv Detail & Related papers (2020-03-11T13:15:57Z)
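The core trick of such interpolation-based schemes is to store the forward ODE trajectory at a few knots and then look up z(t) during the backward pass instead of re-integrating. A minimal sketch using piecewise-linear interpolation (the paper's scheme is likely higher-order; the function name is an illustrative assumption):

```python
import numpy as np

def interpolate_state(ts, zs, t):
    """Piecewise-linear lookup of the forward ODE state z(t) from
    stored knots (ts, zs), avoiding a second ODE solve during the
    backward (adjoint) pass.

    ts: (T,) increasing time points; zs: (T, D) states at those times.
    """
    zs = np.asarray(zs)
    # Interpolate each state dimension independently.
    return np.array([np.interp(t, ts, zs[:, d]) for d in range(zs.shape[1])])
```

The accuracy/memory trade-off is governed by how many knots are stored and by the interpolation order, which is what such methods tune.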
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.