ABBA: Adaptive Brownian bridge-based symbolic aggregation of time series
- URL: http://arxiv.org/abs/2003.12469v1
- Date: Fri, 27 Mar 2020 15:30:32 GMT
- Title: ABBA: Adaptive Brownian bridge-based symbolic aggregation of time series
- Authors: Steven Elsworth and Stefan Güttel
- Abstract summary: A new symbolic representation of time series, called ABBA, is introduced.
It is based on an adaptive polygonal chain approximation of the time series into a sequence of tuples.
We show that the reconstruction error of this representation can be modelled as a random walk with pinned start and end points.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A new symbolic representation of time series, called ABBA, is introduced. It
is based on an adaptive polygonal chain approximation of the time series into a
sequence of tuples, followed by a mean-based clustering to obtain the symbolic
representation. We show that the reconstruction error of this representation
can be modelled as a random walk with pinned start and end points, a so-called
Brownian bridge. This insight allows us to make ABBA essentially
parameter-free, except for the approximation tolerance, which must be chosen.
Extensive comparisons with the SAX and 1d-SAX representations are included in
the form of performance profiles, showing that ABBA is able to better preserve
the essential shape information of time series compared to other approaches.
Advantages and applications of ABBA are discussed, including its in-built
differencing property and use for anomaly detection, and Python implementations
are provided.
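The compress-then-cluster pipeline described in the abstract is simple enough to sketch. Below is a minimal, self-contained illustration of the idea, not the authors' reference implementation (their Python code accompanies the paper): the RMSE segmentation test, the unscaled clustering of the tuples, and the fixed alphabet size k are simplifications of ABBA's actual criteria.

```python
import numpy as np
from sklearn.cluster import KMeans

def compress(ts, tol=0.3):
    """ABBA's first stage, simplified: grow each piece while the straight line
    through its endpoints stays within `tol` (RMSE) of the series, then emit a
    (length, increment) tuple for that piece."""
    pieces, start, n = [], 0, len(ts)
    while start < n - 1:
        end = start + 1
        while end + 1 < n:
            seg = ts[start:end + 2]
            line = np.linspace(seg[0], seg[-1], len(seg))
            if np.sqrt(np.mean((seg - line) ** 2)) > tol:
                break
            end += 1
        pieces.append((end - start, ts[end] - ts[start]))
        start = end
    return np.asarray(pieces, dtype=float)

def digitize(pieces, k=4):
    """ABBA's second stage, simplified: mean-based (k-means) clustering of the
    (length, increment) tuples; each cluster index serves as one symbol.
    (The paper additionally scales lengths vs. increments before clustering.)"""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pieces)
    return km.labels_, km.cluster_centers_

def reconstruct(symbols, centers, start_value):
    """Stitch the cluster-centre pieces back into a time series."""
    out = [start_value]
    for s in symbols:
        length, inc = centers[s]
        length = max(int(round(length)), 1)
        out.extend(np.linspace(out[-1], out[-1] + inc, length + 1)[1:])
    return np.asarray(out)

rng = np.random.default_rng(0)
ts = np.sin(np.linspace(0, 8 * np.pi, 500)) + 0.05 * rng.standard_normal(500)
pieces = compress(ts, tol=0.3)
symbols, centers = digitize(pieces, k=4)
approx = reconstruct(symbols, centers, ts[0])
print(f"{len(ts)} points -> {len(pieces)} pieces over a {len(centers)}-symbol alphabet")
```

Because each reconstructed piece is pinned to the underlying series at its endpoints (up to clustering error), the within-piece reconstruction error starts and ends near zero; this pinned random-walk behaviour is exactly the Brownian bridge the paper analyzes to make the method essentially parameter-free.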
Related papers
- Timer-XL: Long-Context Transformers for Unified Time Series Forecasting [67.83502953961505]
We present Timer-XL, a generative Transformer for unified time series forecasting.
Timer-XL achieves state-of-the-art performance across challenging forecasting benchmarks through a unified approach.
arXiv Detail & Related papers (2024-10-07T07:27:39Z) - Unified Training of Universal Time Series Forecasting Transformers [104.56318980466742]
We present the Masked Encoder-based Universal Time Series Forecasting Transformer (Moirai).
Moirai is trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains.
Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
arXiv Detail & Related papers (2024-02-04T20:00:45Z) - TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam as a simple but effective self-supervised pre-training framework for time series based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z) - A Bag of Receptive Fields for Time Series Extrinsic Predictions [8.172425535905038]
High-dimensional time series data poses challenges due to its dynamic nature, varying lengths, and presence of missing values.
We propose BORF, a Bag-Of-Receptive-Fields model, which incorporates notions from time series convolution and 1D-SAX.
We evaluate BORF on Time Series Classification and Time Series Extrinsic Regression tasks using the full UEA and UCR repositories.
arXiv Detail & Related papers (2023-11-29T19:13:10Z) - ASTRIDE: Adaptive Symbolization for Time Series Databases [6.8820425565516095]
We introduce ASTRIDE, a novel symbolic representation of time series, along with its accelerated variant FASTRIDE (Fast ASTRIDE).
Unlike most symbolization procedures, ASTRIDE is adaptive in both the segmentation step, via change-point detection, and the quantization step, via quantiles.
We demonstrate the performance of the ASTRIDE and FASTRIDE representations compared to SAX, 1d-SAX, SFA, and ABBA on reconstruction and, when applicable, on classification tasks.
arXiv Detail & Related papers (2023-02-08T14:46:24Z) - HyperTime: Implicit Neural Representation for Time Series [131.57172578210256]
Implicit neural representations (INRs) have recently emerged as a powerful tool that provides an accurate and resolution-independent encoding of data.
In this paper, we analyze the representation of time series using INRs, comparing different activation functions in terms of reconstruction accuracy and training convergence speed.
We propose a hypernetwork architecture that leverages INRs to learn a compressed latent representation of an entire time series dataset.
arXiv Detail & Related papers (2022-08-11T14:05:51Z) - An efficient aggregation method for the symbolic representation of temporal data [0.0]
We present a new variant of the adaptive Brownian bridge-based aggregation (ABBA) method, called fABBA.
This variant utilizes a new aggregation approach tailored to the piecewise representation of time series.
In contrast to the original method, the new approach does not require the number of time series symbols to be specified in advance (a usage sketch follows at the end of this list).
arXiv Detail & Related papers (2022-01-14T22:51:24Z) - Cluster-and-Conquer: A Framework For Time-Series Forecasting [94.63501563413725]
We propose a three-stage framework for forecasting high-dimensional time-series data.
Our framework is highly general, allowing for any time-series forecasting and clustering method to be used in each step.
When instantiated with simple linear autoregressive models, we are able to achieve state-of-the-art results on several benchmark datasets.
arXiv Detail & Related papers (2021-10-26T20:41:19Z) - MrSQM: Fast Time Series Classification with Symbolic Representations [11.853438514668207]
MrSQM uses multiple symbolic representations and efficient sequence mining to extract important time series features.
We study four feature selection approaches on symbolic sequences, ranging from fully supervised, to unsupervised and hybrids.
Our experiments on 112 datasets of the UEA/UCR benchmark demonstrate that MrSQM can quickly extract useful features.
arXiv Detail & Related papers (2021-09-02T15:54:46Z) - Complex Event Forecasting with Prediction Suffix Trees: Extended Technical Report [70.7321040534471]
Complex Event Recognition (CER) systems have become popular in the past two decades due to their ability to "instantly" detect patterns on real-time streams of events.
There is, however, a lack of methods for forecasting when a pattern might occur, before such an occurrence is actually detected by a CER engine.
We present a formal framework that attempts to address the issue of Complex Event Forecasting.
arXiv Detail & Related papers (2021-09-01T09:52:31Z)
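For the fABBA variant listed above, the authors distribute a Python package. The sketch below follows that package's README-style interface as best recalled; the PyPI name, the import path, the `tol`/`alpha` parameters, and the `fit_transform`/`inverse_transform` methods are assumptions to verify against the current release, not a guaranteed API.

```python
# pip install fabba   # assumed package name; check the project page first
import numpy as np
from fABBA import fABBA  # assumed import path, per the package README

ts = np.sin(np.linspace(0, 8 * np.pi, 500))

# tol: tolerance of the polygonal chain compression; alpha: aggregation
# tolerance that replaces the fixed symbol count of the original ABBA.
fabba = fABBA(tol=0.1, alpha=0.1)
string = fabba.fit_transform(ts)                 # symbolic string, e.g. "aababc..."
approx = fabba.inverse_transform(string, ts[0])  # reconstruct from the first value
print(len(string), "symbols for", len(ts), "points")
```

Note that no alphabet size is passed anywhere: the aggregation tolerance `alpha` determines the number of symbols, which is the point of the fABBA variant.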