Castor: Competing shapelets for fast and accurate time series classification
- URL: http://arxiv.org/abs/2403.13176v1
- Date: Tue, 19 Mar 2024 22:05:32 GMT
- Title: Castor: Competing shapelets for fast and accurate time series classification
- Authors: Isak Samsten, Zed Lee
- Abstract summary: Castor is a simple, efficient, and accurate time series classification algorithm.
We show that classifiers built on Castor's transformations are significantly more accurate than several state-of-the-art classifiers.
- Score: 0.9208007322096533
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Shapelets are discriminative subsequences that were originally embedded in shapelet-based decision trees but have since been extended to shapelet-based transformations. We propose Castor, a simple, efficient, and accurate time series classification algorithm that utilizes shapelets to transform time series. The transformation organizes shapelets into groups with varying dilation and allows the shapelets to compete over the time context to construct a diverse feature representation. By organizing the shapelets into groups, we enable the transformation to transition between levels of competition, resulting in methods that more closely resemble distance-based or dictionary-based transformations. We demonstrate, through an extensive empirical investigation, that Castor yields transformations that result in classifiers that are significantly more accurate than several state-of-the-art classifiers. In an extensive ablation study, we examine the effect of hyperparameter choices and suggest accurate and efficient default values.
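The abstract describes the transform only in prose. Below is a minimal sketch of the core primitive as I read it, not Castor's actual implementation: each shapelet is slid over the series with a dilation, producing a minimum-distance feature, and the shapelets in a group additionally compete for each position, producing win counts. All function names, the single-group simplification, and the plain Euclidean distance are assumptions.

```python
import numpy as np

def dilated_subsequences(x, length, dilation):
    """All subsequences of x with `length` samples spaced `dilation` apart."""
    span = (length - 1) * dilation + 1
    return np.stack([x[i:i + span:dilation] for i in range(len(x) - span + 1)])

def castor_like_features(x, shapelets, dilation):
    """Toy grouped-shapelet features: per-shapelet minimum distance plus
    per-shapelet counts of positions where it beats the rest of its group."""
    subs = dilated_subsequences(x, shapelets.shape[1], dilation)
    # distance between every shapelet and every dilated subsequence
    d = np.linalg.norm(subs[None, :, :] - shapelets[:, None, :], axis=2)
    min_dist = d.min(axis=1)                                        # distance-based flavor
    wins = np.bincount(d.argmin(axis=0), minlength=len(shapelets))  # competition
    return np.concatenate([min_dist, wins])

rng = np.random.default_rng(0)
x = rng.standard_normal(200)             # one univariate series
shapelets = rng.standard_normal((8, 9))  # one hypothetical group of 8 length-9 shapelets
features = castor_like_features(x, shapelets, dilation=2)
```

Read this way, the group size plausibly controls the transition the abstract mentions: small groups make the win counts uninformative, so the features behave like distance-based transforms, while large groups emphasize the counts, resembling dictionary-based transforms.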
Related papers
- Learning Soft Sparse Shapes for Efficient Time-Series Classification [25.128114360717454]
We propose a Soft sparse Shapes (SoftShape) model for efficient time series classification.
Our approach mainly introduces soft shape sparsification and soft shape learning blocks.
The latter facilitates intra- and inter-shape temporal pattern learning, improving model efficiency by using sparsified soft shapes as inputs.
arXiv Detail & Related papers (2025-05-11T08:14:37Z)
- Scalable Permutation-Aware Modeling for Temporal Set Prediction [8.122126170969365]
Temporal set prediction involves forecasting the elements that will appear in the next set, given a sequence of prior sets.
Existing methods often rely on intricate architectures with substantial computational overhead.
We introduce a novel and scalable framework that leverages permutation-equivariant and permutation-invariant transformations.
arXiv Detail & Related papers (2025-04-23T23:14:35Z)
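As background for the set-prediction entry above (this is the standard Deep Sets-style construction, not the paper's architecture; the shared map `w` is a toy assumption): applying a shared map to every element and then sum-pooling gives an encoding that is unchanged under any reordering of the set.

```python
import numpy as np

def encode_set(elements, w):
    """Permutation-invariant set encoding: shared per-element map, then sum.
    The per-element step is permutation-equivariant; the sum erases order."""
    return np.tanh(elements @ w).sum(axis=0)

rng = np.random.default_rng(1)
s = rng.standard_normal((5, 3))   # a set of 5 elements with 3 features each
w = rng.standard_normal((3, 4))   # toy shared linear map
assert np.allclose(encode_set(s, w), encode_set(s[::-1], w))  # order-free
```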
- Self-supervised Transformation Learning for Equivariant Representations [26.207358743969277]
Unsupervised representation learning has significantly advanced various machine learning tasks.
We propose Self-supervised Transformation Learning (STL), replacing transformation labels with transformation representations derived from image pairs.
We demonstrate the approach's effectiveness across diverse classification and detection tasks, outperforming existing methods in 7 out of 11 benchmarks.
arXiv Detail & Related papers (2025-01-15T10:54:21Z)
- Diffeomorphic Transformations for Time Series Analysis: An Efficient Approach to Nonlinear Warping [0.0]
The proliferation and ubiquity of temporal data across many disciplines have sparked interest in similarity, classification, and clustering methods.
Traditional distance measures such as the Euclidean distance are not well suited due to the time-dependent nature of the data.
This thesis proposes novel elastic alignment methods that use parametric and diffeomorphic warping transformations.
arXiv Detail & Related papers (2023-09-25T10:51:47Z)
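For context on the elastic-alignment entry above: the thesis proposes parametric diffeomorphic warpings, which this sketch does not implement; instead it shows the classic elastic measure, dynamic time warping (DTW), to make concrete why the Euclidean distance fails on temporally shifted patterns.

```python
import numpy as np

def dtw(a, b):
    """O(n*m) dynamic time warping distance: aligns similar shapes that are
    locally shifted or stretched in time, unlike the Euclidean distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return np.sqrt(D[n, m])

t = np.linspace(0, 2 * np.pi, 50)
x, y = np.sin(t), np.sin(t + 0.5)        # the same shape, phase-shifted
print(dtw(x, y), np.linalg.norm(x - y))  # DTW is far smaller: it absorbs the shift
```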
- General Lipschitz: Certified Robustness Against Resolvable Semantic Transformations via Transformation-Dependent Randomized Smoothing [5.5855074442298696]
We propose General Lipschitz (GL), a new framework to certify neural networks against composable resolvable semantic perturbations.
Our method performs comparably to state-of-the-art approaches on the ImageNet dataset.
arXiv Detail & Related papers (2023-08-17T14:39:24Z)
- Counting Like Human: Anthropoid Crowd Counting on Modeling the Similarity of Objects [92.80955339180119]
Mainstream crowd counting methods regress a density map and integrate it to obtain counting results.
Inspired by this, we propose a rational and anthropoid crowd counting framework.
arXiv Detail & Related papers (2022-12-02T07:00:53Z)
- Frame Averaging for Equivariant Shape Space Learning [85.42901997467754]
A natural way to incorporate symmetries in shape space learning is to ask that the mapping to the shape space (encoder) and the mapping from the shape space (decoder) be equivariant to the relevant symmetries.
We present a framework that incorporates equivariance into encoders and decoders via two contributions.
arXiv Detail & Related papers (2021-12-03T06:41:19Z)
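The frame-averaging entry above builds on a classical fact: averaging a function over a finite group of input transformations yields a group-invariant function (frame averaging makes this efficient by averaging over a small input-dependent frame instead of the whole group). A toy sketch of plain group averaging for the sign-flip group, with an arbitrary assumed map:

```python
import numpy as np

def group_average(f, x, group):
    """Symmetrize f by averaging over a finite group of input transforms;
    the result is invariant to every transform in the group."""
    return np.mean([f(g(x)) for g in group], axis=0)

f = lambda x: np.tanh(x @ np.array([1.0, -2.0, 0.5]))  # arbitrary, not invariant
group = [lambda x: x, lambda x: -x]                    # sign-flip group {id, -id}
x = np.array([0.3, -1.2, 0.7])
assert np.allclose(group_average(f, x, group), group_average(f, -x, group))
```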
- On the rate of convergence of a classifier based on a Transformer encoder [55.41148606254641]
The rate of convergence of the misclassification probability of the classifier towards the optimal misclassification probability is analyzed.
It is shown that this classifier is able to circumvent the curse of dimensionality provided the a posteriori probability satisfies a suitable hierarchical composition model.
arXiv Detail & Related papers (2021-11-29T14:58:29Z)
- Convolutional Shapelet Transform: A new approach for time series shapelets [1.160208922584163]
We present a new formulation of time series shapelets including the notion of dilation, and a shapelet extraction method based on convolutional kernels.
We show that our method improves on the state of the art for shapelet algorithms and that it can be used to interpret results from convolutional kernels.
arXiv Detail & Related papers (2021-09-28T06:30:42Z)
- Topographic VAEs learn Equivariant Capsules [84.33745072274942]
We introduce the Topographic VAE: a novel method for efficiently training deep generative models with topographically organized latent variables.
We show that such a model indeed learns to organize its activations according to salient characteristics such as digit class, width, and style on MNIST.
We demonstrate approximate equivariance to complex transformations, expanding upon the capabilities of existing group equivariant neural networks.
arXiv Detail & Related papers (2021-09-03T09:25:57Z)
- The Cascade Transformer: an Application for Efficient Answer Sentence Selection [116.09532365093659]
We introduce the Cascade Transformer, a technique to adapt transformer-based models into a cascade of rankers.
When compared to a state-of-the-art transformer model, our approach reduces computation by 37% with almost no impact on accuracy.
arXiv Detail & Related papers (2020-05-05T23:32:01Z)
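The cascade entry above trades a little accuracy for large savings by letting a cheap ranker prune candidates before an expensive one scores the survivors. The sketch below is a generic two-stage cascade with toy scoring functions; the paper's stages are transformer-based rankers, which this does not reproduce.

```python
import numpy as np

def cascade_rank(candidates, cheap_score, costly_score, keep=0.3):
    """Two-stage cascade: prune with a cheap scorer, then rerank only the
    surviving fraction with a costly scorer. Returns final ranked indices."""
    s1 = np.array([cheap_score(c) for c in candidates])
    k = max(1, int(len(candidates) * keep))
    survivors = np.argsort(s1)[::-1][:k]          # top-k by the cheap score
    s2 = {i: costly_score(candidates[i]) for i in survivors}
    return sorted(s2, key=s2.get, reverse=True)

rng = np.random.default_rng(2)
cands = rng.standard_normal((100, 8))
cheap = lambda c: c[:4].sum()    # hypothetical fast partial score
costly = lambda c: c.sum()       # hypothetical full score
print(cascade_rank(cands, cheap, costly)[:5])  # only 30% ever see the costly scorer
```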
- Image Morphing with Perceptual Constraints and STN Alignment [70.38273150435928]
We propose a conditional GAN morphing framework operating on a pair of input images.
A special training protocol produces sequences of frames that, combined with a perceptual similarity loss, promote smooth transformation over time.
We provide comparisons to classic as well as latent space morphing techniques, and demonstrate that, given a set of images for self-supervision, our network learns to generate visually pleasing morphing effects.
arXiv Detail & Related papers (2020-04-29T10:49:10Z)
- Use Short Isometric Shapelets to Accelerate Binary Time Series Classification [28.469831845459183]
We introduce a novel algorithm, the short isometric shapelet transform, which contains two strategies to reduce the time complexity.
Theoretical evidence for these two strategies is presented to guarantee near-lossless accuracy under certain preconditions.
arXiv Detail & Related papers (2019-12-27T04:33:11Z)