The Temporal Dictionary Ensemble (TDE) Classifier for Time Series Classification
- URL: http://arxiv.org/abs/2105.03841v1
- Date: Sun, 9 May 2021 05:27:42 GMT
- Title: The Temporal Dictionary Ensemble (TDE) Classifier for Time Series Classification
- Authors: Matthew Middlehurst, James Large, Gavin Cawley, Anthony Bagnall
- Abstract summary: The temporal dictionary ensemble (TDE) is more accurate than other dictionary based approaches.
We show that HIVE-COTE with TDE is significantly more accurate than the current best deep learning approach.
This advance represents a new state of the art for time series classification.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Using bag of words representations of time series is a popular approach to
time series classification. These algorithms involve approximating and
discretising windows over a series to form words, then forming a count of words
over a given dictionary. Classifiers are constructed on the resulting
histograms of word counts. A 2017 evaluation of a range of time series
classifiers found the bag of Symbolic Fourier Approximation symbols (BOSS)
ensemble the best of the dictionary based classifiers. It forms one of the
components of hierarchical vote collective of transformation-based ensembles
(HIVE-COTE), which represents the current state of the art. Since then, several
new dictionary based algorithms have been proposed that are more accurate or
more scalable (or both) than BOSS. We propose a further extension of these
dictionary based classifiers that combines the best elements of the others
with a novel approach to constructing ensemble members based on an
adaptive Gaussian process model of the parameter space. We demonstrate that the
temporal dictionary ensemble (TDE) is more accurate than other dictionary based
approaches. Furthermore, unlike the other classifiers, if we replace BOSS in
HIVE-COTE with TDE, HIVE-COTE is significantly more accurate. We also show this
new version of HIVE-COTE is significantly more accurate than the current best
deep learning approach, a recently proposed hybrid tree ensemble and a recently
introduced competitive classifier making use of highly randomised convolutional
kernels. This advance represents a new state of the art for time series
classification.
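The dictionary pipeline the abstract describes (windowing, discretisation into words, word counting, classification on histograms) can be sketched as follows. This is an illustrative SAX-style simplification with quantile binning, not the SFA transform actually used by BOSS/TDE; all function names and parameters here are invented for the example.

```python
import numpy as np

def series_to_words(series, window=8, segments=4, alphabet=3):
    """Slide a window over the series and discretise each window into a word.

    SAX-style stand-in for the SFA transform of BOSS/TDE: each window is
    approximated by its segment means, and each mean is mapped to an
    alphabet symbol by quantile binning. `window` must be divisible by
    `segments`.
    """
    edges = np.quantile(series, np.linspace(0, 1, alphabet + 1)[1:-1])
    words = []
    for start in range(len(series) - window + 1):
        paa = series[start:start + window].reshape(segments, -1).mean(axis=1)
        words.append(tuple(np.searchsorted(edges, paa)))
    return words

def histogram(words):
    # Count word occurrences: the bag-of-words representation of the series.
    hist = {}
    for w in words:
        hist[w] = hist.get(w, 0) + 1
    return hist

def classify(train_series, train_labels, query, **params):
    # 1-nearest-neighbour on word histograms (squared count differences).
    q = histogram(series_to_words(query, **params))

    def dist(h):
        keys = set(q) | set(h)
        return sum((q.get(k, 0) - h.get(k, 0)) ** 2 for k in keys)

    dists = [dist(histogram(series_to_words(s, **params)))
             for s in train_series]
    return train_labels[int(np.argmin(dists))]
```

BOSS/TDE differ in the details: words are built from Fourier coefficients rather than segment means, and the ensemble combines many such transforms over different parameter settings, which TDE selects via a Gaussian process model of the parameter space.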
Related papers
- Towards Realistic Zero-Shot Classification via Self Structural Semantic Alignment [53.2701026843921]
Large-scale pre-trained Vision Language Models (VLMs) have proven effective for zero-shot classification.
In this paper, we aim at a more challenging setting, Realistic Zero-Shot Classification, which assumes no annotation but instead a broad vocabulary.
We propose the Self Structural Semantic Alignment (S3A) framework, which extracts structural semantic information from unlabeled data while simultaneously self-learning.
arXiv Detail & Related papers (2023-08-24T17:56:46Z)
- Generalized Time Warping Invariant Dictionary Learning for Time Series Classification and Clustering [8.14208923345076]
Dynamic time warping (DTW) is commonly used for dealing with temporal delays, scaling, transformation, and many other kinds of temporal misalignment issues.
We propose a generalized time warping invariant dictionary learning algorithm in this paper.
The superiority of the proposed method in terms of dictionary learning, classification, and clustering is validated through ten sets of public datasets.
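For reference, the DTW baseline that this paper generalises is a short dynamic program; the sketch below is the textbook formulation, not the dictionary learning algorithm the paper proposes.

```python
import numpy as np

def dtw_distance(a, b):
    # Textbook dynamic-programming DTW: cost[i, j] is the minimal cumulative
    # alignment cost of a[:i] against b[:j]. Each step may advance one series
    # or both, which is what absorbs temporal delays and local stretching.
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]
```

For example, `dtw_distance([1, 2, 3], [1, 2, 2, 3])` is 0, because the repeated 2 aligns at no cost despite the length mismatch.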
arXiv Detail & Related papers (2023-06-30T14:18:13Z)
- A Dictionary-based approach to Time Series Ordinal Classification [0.0]
We present an ordinal adaptation of the TDE algorithm, known as ordinal TDE (O-TDE).
Experiments show the improvement achieved by the ordinal dictionary-based approach over four existing nominal dictionary-based techniques.
arXiv Detail & Related papers (2023-05-16T08:48:36Z)
- Convergence of alternating minimisation algorithms for dictionary learning [4.5687771576879594]
We derive sufficient conditions for the convergence of two popular alternating minimisation algorithms for dictionary learning.
We show that given a well-behaved initialisation that is either within distance at most $1/\log(K)$ to the generating dictionary or has a special structure ensuring that each element of the initialisation only points to one generating element, both algorithms will converge with geometric convergence rate to the generating dictionary.
arXiv Detail & Related papers (2023-04-04T12:58:47Z)
- Efficient CNN with uncorrelated Bag of Features pooling [98.78384185493624]
Bag of Features (BoF) has been recently proposed to reduce the complexity of convolution layers.
We propose an approach that builds on top of BoF pooling to boost its efficiency by ensuring that the items of the learned dictionary are non-redundant.
The proposed strategy yields an efficient variant of BoF and further boosts its performance, without any additional parameters.
arXiv Detail & Related papers (2022-09-22T09:00:30Z)
- HYDRA: Competing convolutional kernels for fast and accurate time series classification [9.049629596156473]
We show that it is possible to move by degrees between models resembling dictionary methods and models resembling ROCKET.
We present HYDRA, a simple, fast, and accurate dictionary method for time series classification using competing convolutional kernels.
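The random convolutional kernel idea that HYDRA builds on can be illustrated in a few lines. This is a hedged ROCKET-style sketch (random kernels plus max and PPV pooling), not HYDRA's competing-kernel grouping; the function name and defaults are invented for the example.

```python
import numpy as np

def random_kernel_features(series, n_kernels=100, kernel_len=9, seed=0):
    # ROCKET-style sketch: convolve the series with random zero-mean kernels
    # and summarise each response by its maximum and its proportion of
    # positive values (PPV). HYDRA organises such kernels into competing
    # groups; only the shared random-convolution idea is shown here.
    rng = np.random.default_rng(seed)
    feats = []
    for _ in range(n_kernels):
        w = rng.standard_normal(kernel_len)
        w -= w.mean()  # zero-mean kernel
        resp = np.convolve(series, w, mode="valid")
        feats.append(resp.max())
        feats.append((resp > 0).mean())
    return np.asarray(feats)
```

In ROCKET-family methods, features like these are then fed to a simple linear classifier such as ridge regression.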
arXiv Detail & Related papers (2022-03-25T13:58:10Z)
- Better Language Model with Hypernym Class Prediction [101.8517004687825]
Class-based language models (LMs) have been long devised to address context sparsity in $n$-gram LMs.
In this study, we revisit this approach in the context of neural LMs.
arXiv Detail & Related papers (2022-03-21T01:16:44Z)
- Speaker Embedding-aware Neural Diarization: a Novel Framework for Overlapped Speech Diarization in the Meeting Scenario [51.5031673695118]
We reformulate overlapped speech diarization as a single-label prediction problem.
We propose the speaker embedding-aware neural diarization (SEND) system.
arXiv Detail & Related papers (2022-03-18T06:40:39Z)
- Deep ensembles in bioimage segmentation [74.01883650587321]
In this work, we propose an ensemble of convolutional neural networks (CNNs).
In ensemble methods, many different models are trained and then used for classification; the ensemble aggregates the outputs of the single classifiers.
The proposed ensemble is implemented by combining different backbone networks using the DeepLabV3+ and HarDNet environment.
arXiv Detail & Related papers (2021-12-24T05:54:21Z)
- Discriminative Dictionary Learning based on Statistical Methods [0.0]
Sparse Representation (SR) of signals or data has a well founded theory with rigorous mathematical error bounds and proofs.
Training dictionaries such that they represent each class of signals with minimal loss is called Dictionary Learning (DL).
MOD and K-SVD have been successfully used in reconstruction based applications in image processing such as denoising and inpainting.
arXiv Detail & Related papers (2021-11-17T10:45:10Z)
- Gated recurrent units and temporal convolutional network for multilabel classification [122.84638446560663]
This work proposes a new ensemble method for managing multilabel classification.
The core of the proposed approach combines a set of gated recurrent units and temporal convolutional neural networks trained with variants of the Adam optimization approach.
arXiv Detail & Related papers (2021-10-09T00:00:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.