Time Series Extrinsic Regression
- URL: http://arxiv.org/abs/2006.12672v3
- Date: Wed, 3 Feb 2021 07:01:25 GMT
- Title: Time Series Extrinsic Regression
- Authors: Chang Wei Tan, Christoph Bergmeir, Francois Petitjean, Geoffrey I.
Webb
- Abstract summary: Time Series Extrinsic Regression (TSER) is a regression task whose aim is to learn the relationship between a time series and a continuous scalar variable.
We benchmark existing solutions and adaptations of TSC algorithms on a novel archive of 19 TSER datasets.
Our results show that the state-of-the-art TSC algorithm Rocket, when adapted for regression, achieves the highest overall accuracy.
- Score: 6.5513221781395465
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper studies Time Series Extrinsic Regression (TSER): a regression
task whose aim is to learn the relationship between a time series and a
continuous scalar variable. It is closely related to time series
classification (TSC), which aims to learn the relationship between a time
series and a categorical class label. TSER generalizes time series
forecasting (TSF) by relaxing the requirement that the predicted value be a
future value of the input series or depend primarily on its more recent values.
In this paper, we motivate and study this task, and benchmark existing
solutions and adaptations of TSC algorithms on a novel archive of 19 TSER
datasets which we have assembled. Our results show that the state-of-the-art
TSC algorithm Rocket, when adapted for regression, achieves the highest overall
accuracy compared to adaptations of other TSC algorithms and state-of-the-art
machine learning (ML) algorithms such as XGBoost, Random Forest and Support
Vector Regression. More importantly, we show that much research is needed to
improve the accuracy of ML models on this task, and we find evidence that
further research has excellent prospects of improving upon these
straightforward baselines.
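Rocket's central idea (random convolutional kernels whose pooled outputs feed a linear model) can be adapted for regression by swapping the ridge classifier for ridge regression. The following is a minimal NumPy sketch of that idea, not the authors' implementation: the real Rocket additionally uses dilation, padding, and thousands of kernels, and all function names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def rocket_features(X, n_kernels=100):
    """Random-kernel transform of series X with shape (n_samples, length).

    For each kernel we keep two pooled summaries of the convolution
    output, as in Rocket: its maximum and the proportion of positive
    values (PPV).
    """
    n_samples = X.shape[0]
    feats = np.empty((n_samples, 2 * n_kernels))
    for k in range(n_kernels):
        length = rng.choice([7, 9, 11])
        weights = rng.normal(size=length)
        weights -= weights.mean()              # centre the kernel
        bias = rng.uniform(-1.0, 1.0)
        for i in range(n_samples):
            conv = np.convolve(X[i], weights, mode="valid") + bias
            feats[i, 2 * k] = conv.max()
            feats[i, 2 * k + 1] = (conv > 0).mean()
    return feats

def ridge_fit(F, y, alpha=1.0):
    """Closed-form ridge regression on the features (intercept appended)."""
    F1 = np.hstack([F, np.ones((F.shape[0], 1))])
    A = F1.T @ F1 + alpha * np.eye(F1.shape[1])
    return np.linalg.solve(A, F1.T @ y)

def ridge_predict(F, w):
    return np.hstack([F, np.ones((F.shape[0], 1))]) @ w
```

The regression adaptation is deliberately cheap: all learning happens in the closed-form linear solve, so the expensive transform is computed once and reused.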
Related papers
- TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam as a simple but effective self-supervised pre-training framework for Time series based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z)
- Timer: Generative Pre-trained Transformers Are Large Time Series Models [83.03091523806668]
This paper aims at the early development of large time series models (LTSM).
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
arXiv Detail & Related papers (2024-02-04T06:55:55Z)
- Time-LLM: Time Series Forecasting by Reprogramming Large Language Models [110.20279343734548]
Time series forecasting holds significant importance in many real-world dynamic systems.
We present Time-LLM, a reprogramming framework to repurpose large language models for time series forecasting.
Time-LLM is a powerful time series learner that outperforms state-of-the-art, specialized forecasting models.
arXiv Detail & Related papers (2023-10-03T01:31:25Z)
- TACTiS-2: Better, Faster, Simpler Attentional Copulas for Multivariate Time Series [57.4208255711412]
Building on copula theory, we propose a simplified objective for the recently introduced transformer-based attentional copulas (TACTiS).
We show that the resulting model has significantly better training dynamics and achieves state-of-the-art performance across diverse real-world forecasting tasks.
arXiv Detail & Related papers (2023-10-02T16:45:19Z)
- Unsupervised Feature Based Algorithms for Time Series Extrinsic Regression [0.9659642285903419]
Time Series Extrinsic Regression (TSER) involves using a set of training time series to form a predictive model of a continuous response variable.
DrCIF and FreshPRINCE models are the only ones that significantly outperform the standard rotation forest regressor.
arXiv Detail & Related papers (2023-05-02T13:58:20Z)
- The FreshPRINCE: A Simple Transformation Based Pipeline Time Series Classifier [0.0]
We examine whether the complexity of the algorithms considered state of the art is really necessary.
Often the first approach suggested is a simple pipeline of summary statistics or other time series feature extraction approaches.
We test these approaches on the UCR time series dataset archive, looking to see if TSC literature has overlooked the effectiveness of these approaches.
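The simple pipeline the authors have in mind can be sketched as a fixed set of per-series summary statistics fed to an off-the-shelf learner. Below is a hedged NumPy illustration; the actual FreshPRINCE pairs the much larger TSFresh feature set with a rotation forest, so the five statistics and the least-squares fit here are placeholder choices.

```python
import numpy as np

def summary_features(X):
    """Per-series summary statistics: mean, std, min, max, linear slope.

    X: array of shape (n_samples, series_len).
    """
    t = np.arange(X.shape[1])
    t_c = t - t.mean()
    # Least-squares slope of each series against time.
    slope = (X - X.mean(axis=1, keepdims=True)) @ t_c / (t_c @ t_c)
    return np.column_stack([
        X.mean(axis=1), X.std(axis=1), X.min(axis=1), X.max(axis=1), slope,
    ])

def fit_linear(F, y):
    """Ordinary least squares on the summary features (with intercept)."""
    F1 = np.hstack([F, np.ones((F.shape[0], 1))])
    w, *_ = np.linalg.lstsq(F1, y, rcond=None)
    return w

def predict_linear(F, w):
    return np.hstack([F, np.ones((F.shape[0], 1))]) @ w
```

The appeal of such pipelines is that each feature has an obvious meaning, and any tabular learner can sit on top of the transform.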
arXiv Detail & Related papers (2022-01-28T11:23:58Z)
- Towards Similarity-Aware Time-Series Classification [51.2400839966489]
We study time-series classification (TSC), a fundamental task of time-series data mining.
We propose Similarity-Aware Time-Series Classification (SimTSC), a framework that models similarity information with graph neural networks (GNNs).
arXiv Detail & Related papers (2022-01-05T02:14:57Z)
- Interpretable Feature Construction for Time Series Extrinsic Regression [0.028675177318965035]
In some application domains, the target variable is numerical, and the problem is known as time series extrinsic regression (TSER).
We suggest an extension of a Bayesian method for robust and interpretable feature construction and selection in the context of TSER.
Our approach takes a relational view of TSER: (i) we build several simple representations of the time series, which are stored in a relational data scheme; then (ii) a propositionalisation technique is applied to build interpretable features from the secondary tables, "flattening" the data.
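The two-step scheme described here, multiple representations stored relationally and then flattened by aggregation, can be shown with a toy sketch. The representation names and aggregates below are invented for illustration; the paper's method builds a richer relational scheme and adds Bayesian feature selection on top.

```python
import numpy as np

def build_representations(series):
    """Secondary 'tables': alternative views of one series, keyed by name.

    Here we use only the raw values and the first differences; these
    names are illustrative, not the paper's representations.
    """
    return {"raw": series, "diff": np.diff(series)}

def propositionalise(series):
    """Flatten the secondary tables into one interpretable feature vector.

    Each feature is an aggregate over one representation, so its name
    (e.g. 'diff_mean') documents how it was constructed.
    """
    feats = {}
    for name, values in build_representations(series).items():
        feats[f"{name}_mean"] = float(values.mean())
        feats[f"{name}_std"] = float(values.std())
        feats[f"{name}_max"] = float(values.max())
    return feats
```

Because every feature name records both the representation and the aggregate used, the resulting model remains interpretable by construction.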
arXiv Detail & Related papers (2021-03-15T08:12:19Z)
- Monash University, UEA, UCR Time Series Extrinsic Regression Archive [6.5513221781395465]
We aim to motivate and support the research into Time Series Extrinsic Regression (TSER) by introducing the first TSER benchmarking archive.
This archive contains 19 datasets from different domains, with varying numbers of dimensions, unequal-length dimensions, and missing values.
In this paper, we introduce the datasets in this archive and present an initial benchmark of existing models.
arXiv Detail & Related papers (2020-06-19T07:47:57Z)
- Interpretable Time Series Classification using Linear Models and Multi-resolution Multi-domain Symbolic Representations [6.6147550436077776]
We propose new time series classification algorithms to address gaps in current approaches.
Our approach is based on symbolic representations of time series, efficient sequence mining algorithms and linear classification models.
Our models are as accurate as deep learning models but are more efficient regarding running time and memory, can work with variable-length time series and can be interpreted by highlighting the discriminative symbolic features on the original time series.
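A common symbolic representation in this line of work is SAX (Symbolic Aggregate approXimation): z-normalise the series, average it over segments, and map each segment mean to a letter via equal-probability Gaussian breakpoints. A minimal sketch follows, assuming a four-letter alphabet; the abstract does not name SAX specifically, and the paper's algorithms combine such words with sequence mining and linear models.

```python
import numpy as np

def sax_word(series, n_segments=8, alphabet="abcd"):
    """Convert a series to a symbolic word (a SAX-style sketch).

    Steps: z-normalise, piecewise-aggregate into n_segments means,
    then assign each mean a letter using equal-probability bins
    under a standard normal.
    """
    x = (series - series.mean()) / (series.std() + 1e-12)
    segments = np.array_split(x, n_segments)
    paa = np.array([seg.mean() for seg in segments])
    # Breakpoints for a 4-symbol alphabet (standard normal quartiles);
    # a different alphabet size would need a different breakpoint table.
    breakpoints = np.array([-0.6745, 0.0, 0.6745])
    idx = np.searchsorted(breakpoints, paa)
    return "".join(alphabet[i] for i in idx)
```

Discretising this way makes the series amenable to string and sequence-mining algorithms, and the discriminative symbols can be mapped back onto the original series for interpretation.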
arXiv Detail & Related papers (2020-05-31T15:32:08Z)
- Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance in a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.