Monash University, UEA, UCR Time Series Extrinsic Regression Archive
- URL: http://arxiv.org/abs/2006.10996v3
- Date: Tue, 20 Oct 2020 00:38:36 GMT
- Title: Monash University, UEA, UCR Time Series Extrinsic Regression Archive
- Authors: Chang Wei Tan, Christoph Bergmeir, Francois Petitjean, Geoffrey I. Webb
- Abstract summary: We aim to motivate and support the research into Time Series Extrinsic Regression (TSER) by introducing the first TSER benchmarking archive.
This archive contains 19 datasets from different domains, with varying numbers of dimensions, unequal-length dimensions, and missing values.
In this paper, we introduce the datasets in this archive and present an initial benchmark of existing models.
- Score: 6.5513221781395465
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Time series research has attracted considerable interest in the last decade,
especially for Time Series Classification (TSC) and Time Series Forecasting
(TSF). Research in TSC has greatly benefited from the University of California
Riverside and University of East Anglia (UCR/UEA) Time Series Archives. On the
other hand, the advancement in Time Series Forecasting relies on time series
forecasting competitions such as the Makridakis competitions, NN3 and NN5
Neural Network competitions, and a few Kaggle competitions. Each year,
thousands of papers proposing new algorithms for TSC and TSF have utilized
these benchmarking archives. These algorithms are designed for these specific
problems, but may not be useful for tasks such as predicting the heart rate of
a person using photoplethysmogram (PPG) and accelerometer data. We refer to
this problem as Time Series Extrinsic Regression (TSER), where we are
interested in a more general methodology of predicting a single continuous
value, from univariate or multivariate time series. This prediction can be from
the same time series or not directly related to the predictor time series and
does not necessarily need to be a future value or depend heavily on recent
values. To the best of our knowledge, research into TSER has received much less
attention in the time series research community and there are no models
developed for general time series extrinsic regression problems. Most models
are developed for a specific problem. Therefore, we aim to motivate and support
the research into TSER by introducing the first TSER benchmarking archive. This
archive contains 19 datasets from different domains, with varying numbers of
dimensions, unequal-length dimensions, and missing values. In this paper, we
introduce the datasets in this archive and present an initial benchmark of
existing models.
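The TSER setup described in the abstract (mapping a whole univariate or multivariate series to a single continuous value) can be illustrated with a minimal baseline. This is a hypothetical sketch on synthetic data, not a method from the paper: it simply treats each equal-length series as a fixed-length feature vector and fits a standard ridge regressor.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
# Synthetic TSER data: 100 univariate series of length 60.
# The extrinsic target is one scalar per series (here: the
# series' mean level, plus a little noise).
X = rng.normal(size=(100, 60))
y = X.mean(axis=1) + 0.1 * rng.normal(size=100)

# Simplest possible extrinsic-regression baseline: use the raw
# values of each (equal-length) series as features.
model = Ridge(alpha=1.0).fit(X[:80], y[:80])
preds = model.predict(X[80:])
rmse = float(np.sqrt(np.mean((preds - y[80:]) ** 2)))
print(rmse)
```

Real archive datasets are harder than this sketch suggests: unequal-length dimensions and missing values rule out naive flattening and motivate dedicated TSER methods.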
Related papers
- Deep Time Series Models: A Comprehensive Survey and Benchmark [74.28364194333447]
Time series data is of great significance in real-world scenarios.
Recent years have witnessed remarkable breakthroughs in the time series community.
We release Time Series Library (TSLib) as a fair benchmark of deep time series models for diverse analysis tasks.
arXiv Detail & Related papers (2024-07-18T08:31:55Z)
- Unified Training of Universal Time Series Forecasting Transformers [104.56318980466742]
We present Moirai, a Masked Encoder-based Universal Time Series Forecasting Transformer.
Moirai is trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains.
Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
arXiv Detail & Related papers (2024-02-04T20:00:45Z)
- TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam, a simple but effective self-supervised pre-training framework for time series based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z)
- Timer: Generative Pre-trained Transformers Are Large Time Series Models [83.03091523806668]
This paper aims at the early development of large time series models (LTSM).
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
arXiv Detail & Related papers (2024-02-04T06:55:55Z)
- A Bag of Receptive Fields for Time Series Extrinsic Predictions [8.172425535905038]
High-dimensional time series data poses challenges due to its dynamic nature, varying lengths, and presence of missing values.
We propose BORF, a Bag-Of-Receptive-Fields model, which incorporates notions from time series convolution and 1D-SAX.
We evaluate BORF on Time Series Classification and Time Series Extrinsic Regression tasks using the full UEA and UCR repositories.
arXiv Detail & Related papers (2023-11-29T19:13:10Z)
- Temporal Treasure Hunt: Content-based Time Series Retrieval System for Discovering Insights [34.1973242428317]
Time series data is ubiquitous across various domains such as finance, healthcare, and manufacturing.
The ability to perform Content-based Time Series Retrieval (CTSR) is crucial for identifying unknown time series examples.
We introduce a CTSR benchmark dataset that comprises time series data from a variety of domains.
arXiv Detail & Related papers (2023-11-05T04:12:13Z)
- Learning Gaussian Mixture Representations for Tensor Time Series Forecasting [8.31607451942671]
We develop a novel TTS forecasting framework, which seeks to individually model each heterogeneity component implied in the time, the location, and the source variables.
Experiment results on two real-world TTS datasets verify the superiority of our approach compared with the state-of-the-art baselines.
arXiv Detail & Related papers (2023-06-01T06:50:47Z)
- HyperTime: Implicit Neural Representation for Time Series [131.57172578210256]
Implicit neural representations (INRs) have recently emerged as a powerful tool that provides an accurate and resolution-independent encoding of data.
In this paper, we analyze the representation of time series using INRs, comparing different activation functions in terms of reconstruction accuracy and training convergence speed.
We propose a hypernetwork architecture that leverages INRs to learn a compressed latent representation of an entire time series dataset.
arXiv Detail & Related papers (2022-08-11T14:05:51Z)
- VQ-AR: Vector Quantized Autoregressive Probabilistic Time Series Forecasting [10.605719154114354]
Time series models aim for accurate predictions of the future given the past, where the forecasts are used for important downstream tasks like business decision making.
In this paper, we introduce a novel autoregressive architecture, VQ-AR, which instead learns a discrete set of representations that are used to predict the future.
arXiv Detail & Related papers (2022-05-31T15:43:46Z)
- Time Series Analysis via Network Science: Concepts and Algorithms [62.997667081978825]
This review provides a comprehensive overview of existing mapping methods for transforming time series into networks.
We describe the main conceptual approaches, provide authoritative references and give insight into their advantages and limitations in a unified notation and language.
Although still very recent, this research area has much potential and with this survey we intend to pave the way for future research on the topic.
arXiv Detail & Related papers (2021-10-11T13:33:18Z)
- Time Series Extrinsic Regression [6.5513221781395465]
Time Series Extrinsic Regression (TSER) is a regression task of which the aim is to learn the relationship between a time series and a continuous scalar variable.
We benchmark existing solutions and adaptations of TSC algorithms on a novel archive of 19 TSER datasets.
Our results show that the state-of-the-art TSC algorithm Rocket, when adapted for regression, achieves the highest overall accuracy.
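Rocket's core idea is to transform each series with many random convolutional kernels, pool a couple of statistics per kernel, and fit a linear model on the result; adapting it for regression swaps the ridge classifier for a ridge regressor. The following is a toy sketch of that idea on synthetic data, not the official Rocket implementation, and all names and data are illustrative.

```python
import numpy as np
from sklearn.linear_model import Ridge

def random_kernel_features(X, n_kernels=100, seed=0):
    """Toy Rocket-style transform: convolve each series with random
    kernels and keep two pooled statistics per kernel (the global max
    and the proportion of positive values, PPV)."""
    rng = np.random.default_rng(seed)
    feats = []
    for _ in range(n_kernels):
        length = int(rng.choice([7, 9, 11]))
        w = rng.normal(size=length)
        w -= w.mean()                          # zero-mean kernel
        b = rng.uniform(-1.0, 1.0)             # random bias
        conv = np.stack([np.convolve(x, w, mode="valid") + b for x in X])
        feats.append(conv.max(axis=1))         # global max pooling
        feats.append((conv > 0).mean(axis=1))  # PPV pooling
    return np.column_stack(feats)

rng = np.random.default_rng(1)
scales = rng.uniform(0.5, 2.0, size=120)
X = rng.normal(size=(120, 80)) * scales[:, None]
y = scales  # illustrative scalar target: each series' amplitude

Phi = random_kernel_features(X)
reg = Ridge(alpha=1.0).fit(Phi[:100], y[:100])
print(round(reg.score(Phi[100:], y[100:]), 3))
```

The linear readout on random features is what keeps Rocket fast, which is why the regression adaptation retains it and changes only the output head.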
arXiv Detail & Related papers (2020-06-23T00:15:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.