Interpretable Time Series Classification using Linear Models and
Multi-resolution Multi-domain Symbolic Representations
- URL: http://arxiv.org/abs/2006.01667v1
- Date: Sun, 31 May 2020 15:32:08 GMT
- Title: Interpretable Time Series Classification using Linear Models and
Multi-resolution Multi-domain Symbolic Representations
- Authors: Thach Le Nguyen and Severin Gsponer and Iulia Ilie and Martin O'Reilly
and Georgiana Ifrim
- Abstract summary: We propose new time series classification algorithms to address gaps in current approaches.
Our approach is based on symbolic representations of time series, efficient sequence mining algorithms and linear classification models.
Our models are as accurate as deep learning models but are more efficient regarding running time and memory, can work with variable-length time series and can be interpreted by highlighting the discriminative symbolic features on the original time series.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The time series classification literature has expanded rapidly over the last
decade, with many new classification approaches published each year. Prior
research has mostly focused on improving the accuracy and efficiency of
classifiers, with interpretability being somewhat neglected. This aspect of
classifiers has become critical for many application domains and the
introduction of the EU GDPR legislation in 2018 is likely to further emphasize
the importance of interpretable learning algorithms. Currently,
state-of-the-art classification accuracy is achieved with very complex models
based on large ensembles (COTE) or deep neural networks (FCN). These approaches
are not efficient with regard to either time or space, are difficult to
interpret and cannot be applied to variable-length time series, requiring
pre-processing of the original series to a fixed length. In this paper we
propose new time series classification algorithms to address these gaps. Our
approach is based on symbolic representations of time series, efficient
sequence mining algorithms and linear classification models. Our linear models
are as accurate as deep learning models but are more efficient regarding
running time and memory, can work with variable-length time series and can be
interpreted by highlighting the discriminative symbolic features on the
original time series. We show that our multi-resolution multi-domain linear
classifier (mtSS-SEQL+LR) achieves a similar accuracy to the state-of-the-art
COTE ensemble, and to recent deep learning methods (FCN, ResNet), but uses a
fraction of the time and memory required by either COTE or deep models. To
further analyse the interpretability of our classifier, we present a case study
on a human motion dataset collected by the authors. We release all the results,
source code and data to encourage reproducibility.
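The pipeline the abstract describes (discretize the series into a symbolic representation, then feed symbolic features to a linear classifier) can be illustrated with a minimal SAX-style discretization. This is an illustrative sketch, not the authors' mtSS-SEQL+LR implementation: the segment count, alphabet size, and the hardcoded Gaussian breakpoints (which assume a 4-symbol alphabet) are choices made here for the example.

```python
import numpy as np

def sax_transform(series, n_segments=8, alphabet="abcd"):
    """Discretize a (possibly variable-length) series into a SAX-style word:
    z-normalize, reduce with Piecewise Aggregate Approximation (PAA),
    then map each segment mean to a symbol via Gaussian breakpoints."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-8)          # z-normalize
    # PAA: mean over equal-width segments; array_split handles any length
    segments = np.array_split(x, n_segments)
    paa = np.array([s.mean() for s in segments])
    # Quartile breakpoints of N(0,1) -- valid only for a 4-symbol alphabet
    breakpoints = np.array([-0.6745, 0.0, 0.6745])
    idx = np.searchsorted(breakpoints, paa)
    return "".join(alphabet[i] for i in idx)

# A rising ramp yields a monotonically non-decreasing word
print(sax_transform(np.arange(32)))  # -> "aabbccdd"
```

The resulting words can then be mined for discriminative subsequences and used as features in a linear model, which is what makes highlighting those features back on the original series possible.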
Related papers
- TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam as a simple but effective self-supervised pre-training framework for Time series based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z)
- TACTiS-2: Better, Faster, Simpler Attentional Copulas for Multivariate Time Series [57.4208255711412]
Building on copula theory, we propose a simplified objective for the recently-introduced transformer-based attentional copulas (TACTiS).
We show that the resulting model has significantly better training dynamics and achieves state-of-the-art performance across diverse real-world forecasting tasks.
arXiv Detail & Related papers (2023-10-02T16:45:19Z)
- Back to Basics: A Sanity Check on Modern Time Series Classification Algorithms [5.225544155289783]
In the current fast-paced development of new classifiers, taking a step back and performing simple baseline checks is essential.
These checks are often overlooked, as researchers are focused on establishing new state-of-the-art results, developing scalable algorithms, and making models explainable.
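The "simple baseline checks" this summary refers to typically start with 1-nearest-neighbor under Euclidean distance, a long-standing reference point in time series classification. A minimal sketch (fixed-length series assumed; the function name is illustrative, not from the paper):

```python
import numpy as np

def one_nn_euclidean(train_X, train_y, test_X):
    """1-NN classifier with Euclidean distance: the classic sanity-check
    baseline for time series classification (equal-length series)."""
    preds = []
    for x in test_X:
        dists = np.linalg.norm(train_X - x, axis=1)  # distance to every train series
        preds.append(train_y[int(np.argmin(dists))])
    return np.array(preds)
```

Any proposed classifier that cannot clearly beat this baseline (or its DTW-distance variant) on a benchmark warrants a second look.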
arXiv Detail & Related papers (2023-08-15T17:23:18Z)
- TimeMAE: Self-Supervised Representations of Time Series with Decoupled Masked Autoencoders [55.00904795497786]
We propose TimeMAE, a novel self-supervised paradigm for learning transferrable time series representations based on transformer networks.
The TimeMAE learns enriched contextual representations of time series with a bidirectional encoding scheme.
To solve the discrepancy issue incurred by newly injected masked embeddings, we design a decoupled autoencoder architecture.
arXiv Detail & Related papers (2023-03-01T08:33:16Z)
- HyperTime: Implicit Neural Representation for Time Series [131.57172578210256]
Implicit neural representations (INRs) have recently emerged as a powerful tool that provides an accurate and resolution-independent encoding of data.
In this paper, we analyze the representation of time series using INRs, comparing different activation functions in terms of reconstruction accuracy and training convergence speed.
We propose a hypernetwork architecture that leverages INRs to learn a compressed latent representation of an entire time series dataset.
arXiv Detail & Related papers (2022-08-11T14:05:51Z)
- Deep Generative model with Hierarchical Latent Factors for Time Series Anomaly Detection [40.21502451136054]
This work presents DGHL, a new family of generative models for time series anomaly detection.
A top-down Convolution Network maps a novel hierarchical latent space to time series windows, exploiting temporal dynamics to encode information efficiently.
Our method outperformed current state-of-the-art models on four popular benchmark datasets.
arXiv Detail & Related papers (2022-02-15T17:19:44Z)
- Robust Augmentation for Multivariate Time Series Classification [20.38907456958682]
We show that the simple methods of cutout, cutmix, mixup, and window warp improve the robustness and overall performance.
We show that the InceptionTime network with augmentation improves accuracy by 1% to 45% in 18 different datasets.
arXiv Detail & Related papers (2022-01-27T18:57:49Z)
- Mimic: An adaptive algorithm for multivariate time series classification [11.49627617337276]
Time series data are valuable but are often inscrutable.
Gaining trust in time series classifiers for finance, healthcare, and other critical applications may rely on creating interpretable models.
We propose a novel Mimic algorithm that retains the predictive accuracy of the strongest classifiers while introducing interpretability.
arXiv Detail & Related papers (2021-11-08T04:47:31Z)
- Novel Features for Time Series Analysis: A Complex Networks Approach [62.997667081978825]
Time series data are ubiquitous in domains such as climate, economics and health care.
A recent conceptual approach relies on mapping time series to complex networks.
Network analysis can be used to characterize different types of time series.
arXiv Detail & Related papers (2021-10-11T13:46:28Z)
- Improving Calibration for Long-Tailed Recognition [68.32848696795519]
We propose two methods to improve calibration and performance in such scenarios.
For dataset bias due to different samplers, we propose shifted batch normalization.
Our proposed methods set new records on multiple popular long-tailed recognition benchmark datasets.
arXiv Detail & Related papers (2021-04-01T13:55:21Z)
- Multi-Time Attention Networks for Irregularly Sampled Time Series [18.224344440110862]
Irregular sampling occurs in many time series modeling applications.
We propose a new deep learning framework for this setting that we call Multi-Time Attention Networks.
Our results show that our approach performs as well or better than a range of baseline and recently proposed models.
arXiv Detail & Related papers (2021-01-25T18:57:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.