COSTI: a New Classifier for Sequences of Temporal Intervals
- URL: http://arxiv.org/abs/2204.13467v1
- Date: Thu, 28 Apr 2022 12:55:06 GMT
- Title: COSTI: a New Classifier for Sequences of Temporal Intervals
- Authors: Jakub Michał Bilski and Agnieszka Jastrzębska
- Abstract summary: We develop a novel method for classification operating directly on sequences of temporal intervals.
The proposed method maintains a high level of accuracy and obtains better performance while avoiding the shortcomings connected with operating on transformed data.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Classification of sequences of temporal intervals is a part of time series analysis that concerns series of events. We propose a new method of transforming the problem into a task of multivariate time series classification. Applying one of the state-of-the-art algorithms from the latter domain to the new representation yields significantly better accuracy than the state-of-the-art methods from the former field. We discuss the limitations of this workflow and address them by developing a novel classification method, termed COSTI (short for Classification of Sequences of Temporal Intervals), which operates directly on sequences of temporal intervals. The proposed method maintains a high level of accuracy and obtains better performance while avoiding the shortcomings connected with operating on transformed data. We also propose a generalized version of the problem of classification of temporal intervals, where each event is supplemented with information about its intensity, and provide two new data sets where this information is of substantial value.
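To make the transformation described above concrete, below is a minimal sketch of one plausible way to turn a sequence of temporal intervals into a multivariate time series: one channel per event type, with the event's intensity written wherever its interval is active. The tuple format, function name, and fixed sampling grid are assumptions made for illustration, not the exact representation used in the paper.

```python
# Illustrative sketch (not the authors' exact procedure): convert a sequence of
# temporal intervals into a multivariate time series with one channel per event
# type, so that an off-the-shelf multivariate classifier can be applied.
import numpy as np

def intervals_to_multivariate(intervals, event_types, horizon, step=1.0):
    """intervals: list of (event, start, end, intensity) tuples (assumed format)."""
    grid = np.arange(0.0, horizon, step)                  # common sampling grid
    channels = {e: i for i, e in enumerate(event_types)}  # one channel per event type
    series = np.zeros((len(event_types), len(grid)))
    for event, start, end, intensity in intervals:
        active = (grid >= start) & (grid < end)           # samples covered by the interval
        series[channels[event], active] = intensity       # use 1.0 if intensity is not modelled
    return series

# Example: two event types observed over 10 time units.
X = intervals_to_multivariate(
    intervals=[("A", 0.0, 3.0, 1.0), ("B", 2.0, 7.0, 0.5), ("A", 6.0, 9.0, 1.0)],
    event_types=["A", "B"],
    horizon=10.0,
)
print(X.shape)  # (2, 10): channels x time steps, ready for a multivariate TSC model
```

The resulting array can be fed to any multivariate time series classifier; setting every intensity to 1.0 recovers the plain (non-intensity) variant of the problem.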
Related papers
- An End-to-End Model for Time Series Classification In the Presence of Missing Values [25.129396459385873]
Time series classification with missing data is a prevalent issue in time series analysis.
This study proposes an end-to-end neural network that unifies data imputation and representation learning within a single framework.
arXiv Detail & Related papers (2024-08-11T19:39:12Z) - TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam as a simple but effective self-supervised pre-training framework for time series based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z) - Graph Spatiotemporal Process for Multivariate Time Series Anomaly Detection with Missing Values [67.76168547245237]
We introduce a novel framework called GST-Pro, which utilizes a graph spatiotemporal process and an anomaly scorer to detect anomalies.
Our experimental results show that the GST-Pro method can effectively detect anomalies in time series data and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2024-01-11T10:10:16Z) - Large-scale Fully-Unsupervised Re-Identification [78.47108158030213]
We propose two strategies to learn from large-scale unlabeled data.
The first strategy performs a local neighborhood sampling to reduce the dataset size in each iteration without violating neighborhood relationships.
A second strategy leverages a novel Re-Ranking technique, which has a lower time upper-bound complexity and reduces the memory complexity from O(n^2) to O(kn) with k << n.
arXiv Detail & Related papers (2023-07-26T16:19:19Z) - Robust Explainer Recommendation for Time Series Classification [4.817429789586127]
Time series classification is a task common in domains such as human activity recognition, sports analytics and general sensing.
Recently, a great variety of techniques have been proposed and adapted for time series to provide explanation in the form of saliency maps.
This paper provides a novel framework to quantitatively evaluate and rank explanation methods for time series classification.
arXiv Detail & Related papers (2023-06-08T18:49:23Z) - Early Time-Series Classification Algorithms: An Empirical Comparison [59.82930053437851]
Early Time-Series Classification (ETSC) is the task of predicting the class of incoming time-series by observing as few measurements as possible.
We evaluate six existing ETSC algorithms on publicly available data, as well as on two newly introduced datasets.
arXiv Detail & Related papers (2022-03-03T10:43:56Z) - The FreshPRINCE: A Simple Transformation Based Pipeline Time Series Classifier [0.0]
We look at whether the complexity of the algorithms considered state of the art is really necessary.
Often, the first approach suggested is a simple pipeline of summary statistics or other time series feature extraction methods.
We test these approaches on the UCR time series dataset archive, looking to see if TSC literature has overlooked the effectiveness of these approaches.
arXiv Detail & Related papers (2022-01-28T11:23:58Z) - Voice2Series: Reprogramming Acoustic Models for Time Series Classification [65.94154001167608]
Voice2Series is a novel end-to-end approach that reprograms acoustic models for time series classification.
We show that V2S either outperforms or is tied with state-of-the-art methods on 20 tasks, and improves their average accuracy by 1.84%.
arXiv Detail & Related papers (2021-06-17T07:59:15Z) - Change Point Detection in Time Series Data using Autoencoders with a Time-Invariant Representation [69.34035527763916]
Change point detection (CPD) aims to locate abrupt property changes in time series data.
Recent CPD methods have demonstrated the potential of deep learning techniques, but they often lack the ability to identify more subtle changes in the autocorrelation statistics of the signal.
We employ an autoencoder-based methodology with a novel loss function, through which the autoencoders learn a partially time-invariant representation tailored for CPD.
arXiv Detail & Related papers (2020-08-21T15:03:21Z) - The Canonical Interval Forest (CIF) Classifier for Time Series Classification [0.0]
Time series forest (TSF) is one of the most well-known interval methods.
We propose combining TSF and catch22 to form a new classifier, the Canonical Interval Forest (CIF).
We demonstrate a large and significant improvement in accuracy over both TSF and catch22, and show it to be on par with top performers from other algorithmic classes.
arXiv Detail & Related papers (2020-08-20T19:26:24Z) - Interpretable Time Series Classification using Linear Models and Multi-resolution Multi-domain Symbolic Representations [6.6147550436077776]
We propose new time series classification algorithms to address gaps in current approaches.
Our approach is based on symbolic representations of time series, efficient sequence mining algorithms and linear classification models.
Our models are as accurate as deep learning models but are more efficient regarding running time and memory, can work with variable-length time series and can be interpreted by highlighting the discriminative symbolic features on the original time series.
arXiv Detail & Related papers (2020-05-31T15:32:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.