HYDRA: Competing convolutional kernels for fast and accurate time series
classification
- URL: http://arxiv.org/abs/2203.13652v1
- Date: Fri, 25 Mar 2022 13:58:10 GMT
- Title: HYDRA: Competing convolutional kernels for fast and accurate time series
classification
- Authors: Angus Dempster, Daniel F. Schmidt, Geoffrey I. Webb
- Abstract summary: We show that it is possible to move by degrees between models resembling dictionary methods and models resembling ROCKET.
We present HYDRA, a simple, fast, and accurate dictionary method for time series classification using competing convolutional kernels.
- Score: 9.049629596156473
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We demonstrate a simple connection between dictionary methods for time series
classification, which involve extracting and counting symbolic patterns in time
series, and methods based on transforming input time series using convolutional
kernels, namely ROCKET and its variants. We show that by adjusting a single
hyperparameter it is possible to move by degrees between models resembling
dictionary methods and models resembling ROCKET. We present HYDRA, a simple,
fast, and accurate dictionary method for time series classification using
competing convolutional kernels, combining key aspects of both ROCKET and
conventional dictionary methods. HYDRA is faster and more accurate than the
most accurate existing dictionary methods, and can be combined with ROCKET and
its variants to further improve the accuracy of these methods.
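The mechanism described in the abstract can be sketched in a few lines: random convolutional kernels are arranged into groups, the kernels in each group compete at every timestep, and each kernel's win count becomes a feature. This is an illustrative sketch only, not the authors' implementation; the function name `hydra_features` and all parameter values are assumptions. Note that `kernels_per_group` plays the role of the single hyperparameter mentioned in the abstract: large groups behave more like a symbolic dictionary, while groups of one reduce to independent ROCKET-style kernels.

```python
import numpy as np

rng = np.random.default_rng(0)

def hydra_features(X, num_groups=8, kernels_per_group=4, kernel_len=9):
    """Minimal sketch of HYDRA-style competing kernels (illustrative only).
    X has shape (n_samples, series_length)."""
    # Random zero-mean kernels, grouped; each group competes at every timestep.
    W = rng.standard_normal((num_groups, kernels_per_group, kernel_len))
    W -= W.mean(axis=-1, keepdims=True)
    n, _ = X.shape
    feats = np.zeros((n, num_groups * kernels_per_group))
    for i in range(n):
        # All sliding windows of the series: shape (num_windows, kernel_len)
        win = np.lib.stride_tricks.sliding_window_view(X[i], kernel_len)
        for g in range(num_groups):
            resp = win @ W[g].T              # responses: (num_windows, kernels_per_group)
            winners = resp.argmax(axis=1)    # competition: one winning kernel per window
            counts = np.bincount(winners, minlength=kernels_per_group)
            feats[i, g * kernels_per_group:(g + 1) * kernels_per_group] = counts
    return feats
```

The count features can then be fed to any linear classifier, which is how ROCKET-family methods are typically used.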
Related papers
- Chronos: Learning the Language of Time Series [79.38691251254173]
Chronos is a framework for pretrained probabilistic time series models.
We show that Chronos models can leverage time series data from diverse domains to improve zero-shot accuracy on unseen forecasting tasks.
arXiv Detail & Related papers (2024-03-12T16:53:54Z)
- Dictionary Learning Improves Patch-Free Circuit Discovery in Mechanistic Interpretability: A Case Study on Othello-GPT [59.245414547751636]
We propose a circuit discovery framework alternative to activation patching.
Our framework suffers less from out-of-distribution issues and is more efficient in terms of computational complexity.
We examine a small transformer trained on a synthetic task named Othello and find a number of human-understandable, fine-grained circuits inside it.
arXiv Detail & Related papers (2024-02-19T15:04:53Z)
- Generalized Time Warping Invariant Dictionary Learning for Time Series Classification and Clustering [8.14208923345076]
Dynamic time warping (DTW) is commonly used to handle temporal delays, scaling, transformation, and many other kinds of temporal misalignment.
We propose a generalized time-warping-invariant dictionary learning algorithm in this paper.
The superiority of the proposed method in terms of dictionary learning, classification, and clustering is validated on ten public datasets.
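The DTW alignment this entry builds on is the classic dynamic-programming recurrence; a textbook sketch (shown only to illustrate the alignment, not this paper's generalized algorithm) is:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D series.
    D[i, j] = local cost + min over the three allowed warping moves."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]
```

Because the recurrence allows one element to align with several in the other series, a series with a repeated value warps onto its unrepeated counterpart at zero cost.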
arXiv Detail & Related papers (2023-06-30T14:18:13Z)
- LRANet: Towards Accurate and Efficient Scene Text Detection with Low-Rank Approximation Network [63.554061288184165]
We propose a novel parameterized text shape method based on low-rank approximation.
By exploring the shape correlation among different text contours, our method achieves consistency, compactness, simplicity, and robustness in shape representation.
We implement an accurate and efficient arbitrary-shaped text detector named LRANet.
arXiv Detail & Related papers (2023-06-27T02:03:46Z)
- Convergence of alternating minimisation algorithms for dictionary learning [4.5687771576879594]
We derive sufficient conditions for the convergence of two popular alternating minimisation algorithms for dictionary learning.
We show that given a well-behaved initialisation that is either within distance at most $1/\log(K)$ to the generating dictionary or has a special structure ensuring that each element of the initialisation only points to one generating element, both algorithms will converge with geometric convergence rate to the generating dictionary.
arXiv Detail & Related papers (2023-04-04T12:58:47Z)
- Hierarchical Phrase-based Sequence-to-Sequence Learning [94.10257313923478]
We describe a neural transducer that maintains the flexibility of standard sequence-to-sequence (seq2seq) models while incorporating hierarchical phrases as a source of inductive bias during training and as explicit constraints during inference.
Our approach trains two models: a discriminative derivation based on a bracketing grammar whose tree hierarchically aligns source and target phrases, and a neural seq2seq model that learns to translate the aligned phrases one-by-one.
arXiv Detail & Related papers (2022-11-15T05:22:40Z)
- Dictionary Learning with Convex Update (ROMD) [6.367823813868024]
We propose a new type of dictionary learning algorithm called ROMD.
ROMD updates the whole dictionary at a time using convex matrices.
The advantages hence include both guarantees for the dictionary update and faster learning of the whole dictionary.
arXiv Detail & Related papers (2021-10-13T11:14:38Z)
- Nesterov Accelerated ADMM for Fast Diffeomorphic Image Registration [63.15453821022452]
Recent deep-learning-based approaches have achieved sub-second runtimes for diffeomorphic image registration (DiffIR).
We propose a simple iterative scheme that functionally composes intermediate non-stationary velocity fields.
We then propose a convex optimisation model that uses a regularisation term of arbitrary order to impose smoothness on these velocity fields.
arXiv Detail & Related papers (2021-09-26T19:56:45Z)
- Hash Layers For Large Sparse Models [48.90784451703753]
We modify the feedforward layer to hash to different sets of weights depending on the current token, over all tokens in the sequence.
We show that this procedure either outperforms or is competitive with learning-to-route mixture-of-expert methods.
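The routing idea in this entry, replacing a learned gate with a fixed hash of the token, can be sketched as follows. This is an illustrative sketch with assumed shapes, not the paper's code; the function name `hash_ffn` and all dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def hash_ffn(token_ids, hidden, num_experts=4, d_model=8, d_ff=16):
    """Hash-layer sketch: each token is routed to one feedforward
    expert by a fixed hash of its id, so no gating network is learned."""
    W1 = rng.standard_normal((num_experts, d_model, d_ff)) * 0.1
    W2 = rng.standard_normal((num_experts, d_ff, d_model)) * 0.1
    out = np.empty_like(hidden)
    for t, tok in enumerate(token_ids):
        e = hash(tok) % num_experts              # fixed, parameter-free routing
        h = np.maximum(hidden[t] @ W1[e], 0.0)   # ReLU feedforward of expert e
        out[t] = h @ W2[e]
    return out
```

Because the hash is fixed, every occurrence of a given token id is always processed by the same expert, which is what makes the routing trainable without load-balancing losses.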
arXiv Detail & Related papers (2021-06-08T14:54:24Z)
- The Temporal Dictionary Ensemble (TDE) Classifier for Time Series Classification [0.0]
The temporal dictionary ensemble (TDE) is more accurate than other dictionary-based approaches.
We show HIVE-COTE is significantly more accurate than the current best deep learning approach.
This advance represents a new state of the art for time series classification.
arXiv Detail & Related papers (2021-05-09T05:27:42Z)
- Exact Sparse Orthogonal Dictionary Learning [8.577876545575828]
We find that our method achieves better denoising results than over-complete dictionary learning methods.
Our method has the additional advantage of high efficiency.
arXiv Detail & Related papers (2021-03-14T07:51:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.