Unsupervised Feature Based Algorithms for Time Series Extrinsic
Regression
- URL: http://arxiv.org/abs/2305.01429v1
- Date: Tue, 2 May 2023 13:58:20 GMT
- Title: Unsupervised Feature Based Algorithms for Time Series Extrinsic
Regression
- Authors: David Guijo-Rubio, Matthew Middlehurst, Guilherme Arcencio, Diego
Furtado Silva, Anthony Bagnall
- Abstract summary: Time Series Extrinsic Regression (TSER) involves using a set of training time series to form a predictive model of a continuous response variable.
DrCIF and FreshPRINCE are the only models that significantly outperform the standard rotation forest regressor.
- Score: 0.9659642285903419
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Time Series Extrinsic Regression (TSER) involves using a set of training time
series to form a predictive model of a continuous response variable that is not
directly related to the regressor series. The TSER archive for comparing
algorithms was released in 2022 with 19 problems. We increase the size of this
archive to 63 problems and reproduce the previous comparison of baseline
algorithms. We then extend the comparison to include a wider range of standard
regressors and the latest versions of TSER models used in the previous study.
We show that none of the previously evaluated regressors can outperform a
regression adaptation of a standard classifier, rotation forest. We introduce
two new TSER algorithms developed from related work in time series
classification. FreshPRINCE is a pipeline estimator consisting of a transform
into a wide range of summary features followed by a rotation forest regressor.
DrCIF is a tree ensemble that creates features from summary statistics over
random intervals. Our study demonstrates that both algorithms, along with
InceptionTime, exhibit significantly better performance compared to the other
18 regressors tested. More importantly, these two proposals (DrCIF and
FreshPRINCE) are the only models that significantly outperform the
standard rotation forest regressor.
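As described in the abstract, FreshPRINCE is a pipeline (a summary-feature transform followed by a rotation forest regressor) and DrCIF builds features from summary statistics over random intervals. The sketch below only illustrates how such feature-based TSER pipelines can be wired together: it assumes a tiny hand-rolled feature set, arbitrary interval counts, and a scikit-learn RandomForestRegressor standing in for rotation forest, so none of these choices reflect the exact configurations evaluated in the paper.

```python
# Sketch of feature-based TSER pipelines in the spirit of FreshPRINCE and DrCIF.
# Assumptions: a tiny hand-rolled feature set and RandomForestRegressor as a
# stand-in for rotation forest; the real models use far richer feature sets.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer


def summary_features(X):
    """FreshPRINCE-style: summary statistics over each whole series.

    X has shape (n_cases, series_length); FreshPRINCE itself applies the much
    larger TSFresh feature set before its rotation forest stage.
    """
    return np.column_stack([
        X.mean(axis=1), X.std(axis=1), X.min(axis=1), X.max(axis=1),
        np.percentile(X, 25, axis=1), np.percentile(X, 75, axis=1),
    ])


def random_interval_features(X, n_intervals=20, seed=0):
    """DrCIF-style: the same statistics computed over random intervals."""
    rng = np.random.default_rng(seed)
    _, length = X.shape
    columns = []
    for _ in range(n_intervals):
        start = int(rng.integers(0, length - 3))
        end = int(rng.integers(start + 3, length + 1))
        sub = X[:, start:end]
        columns.extend([sub.mean(axis=1), sub.std(axis=1),
                        sub.min(axis=1), sub.max(axis=1)])
    return np.column_stack(columns)


# Transform the series collection, then fit a standard regressor on the features.
fresh_prince_like = make_pipeline(
    FunctionTransformer(summary_features),
    RandomForestRegressor(n_estimators=200, random_state=0),
)
drcif_like = make_pipeline(
    FunctionTransformer(random_interval_features),
    RandomForestRegressor(n_estimators=200, random_state=0),
)

# Toy usage: 100 series of length 150 with a continuous response variable.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(100, 150))
y_train = X_train.mean(axis=1) + 0.1 * X_train.std(axis=1)
fresh_prince_like.fit(X_train, y_train)
drcif_like.fit(X_train, y_train)
print(fresh_prince_like.predict(X_train[:5]))
```

For real experiments, the full FreshPRINCE and DrCIF regressors in the authors' aeon toolkit are the natural starting point; the snippet above only mirrors the overall transform-then-regress structure.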
Related papers
- TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam as a simple but effective self-supervised pre-training framework for time series based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z)
- LARA: A Light and Anti-overfitting Retraining Approach for Unsupervised Time Series Anomaly Detection [49.52429991848581]
We propose a Light and Anti-overfitting Retraining Approach (LARA) for deep variational auto-encoder (VAE) based time series anomaly detection methods.
This work makes three novel contributions: 1) the retraining process is formulated as a convex problem, so it converges quickly and prevents overfitting; 2) a ruminate block is designed that leverages historical data without the need to store it; and 3) it is proved mathematically that, when fine-tuning the latent vector and reconstructed data, the linear formulations achieve the least adjustment error between the ground truth and the fine-tuned outputs.
arXiv Detail & Related papers (2023-10-09T12:36:16Z)
- Generative Modeling of Regular and Irregular Time Series Data via Koopman VAEs [50.25683648762602]
We introduce Koopman VAE, a new generative framework that is based on a novel design for the model prior.
Inspired by Koopman theory, we represent the latent conditional prior dynamics using a linear map.
KoVAE outperforms state-of-the-art GAN and VAE methods across several challenging synthetic and real-world time series generation benchmarks.
arXiv Detail & Related papers (2023-10-04T07:14:43Z)
- Consensus-Adaptive RANSAC [104.87576373187426]
We propose a new RANSAC framework that learns to explore the parameter space by considering the residuals seen so far via a novel attention layer.
The attention mechanism operates on a batch of point-to-model residuals, and updates a per-point estimation state to take into account the consensus found through a lightweight one-step transformer.
arXiv Detail & Related papers (2023-07-26T08:25:46Z)
- What learning algorithm is in-context learning? Investigations with linear models [87.91612418166464]
We investigate the hypothesis that transformer-based in-context learners implement standard learning algorithms implicitly.
We show that trained in-context learners closely match the predictors computed by gradient descent, ridge regression, and exact least-squares regression.
We present preliminary evidence that in-context learners share algorithmic features with these predictors.
arXiv Detail & Related papers (2022-11-28T18:59:51Z)
- Don't overfit the history -- Recursive time series data augmentation [17.31522835086563]
We introduce a general framework for time series augmentation, which we call Recursive Interpolation Method, denoted as RIM.
We perform theoretical analysis to characterize the proposed RIM and to guarantee its test performance.
We apply RIM to diverse real world time series cases to achieve strong performance over non-augmented data on regression, classification, and reinforcement learning tasks.
arXiv Detail & Related papers (2022-07-06T18:09:50Z)
- Training Discrete Deep Generative Models via Gapped Straight-Through Estimator [72.71398034617607]
We propose a Gapped Straight-Through (GST) estimator to reduce the variance without incurring resampling overhead.
This estimator is inspired by the essential properties of Straight-Through Gumbel-Softmax.
Experiments demonstrate that the proposed GST estimator enjoys better performance compared to strong baselines on two discrete deep generative modeling tasks.
arXiv Detail & Related papers (2022-06-15T01:46:05Z)
- Deep Generative model with Hierarchical Latent Factors for Time Series Anomaly Detection [40.21502451136054]
This work presents DGHL, a new family of generative models for time series anomaly detection.
A top-down Convolution Network maps a novel hierarchical latent space to time series windows, exploiting temporal dynamics to encode information efficiently.
Our method outperformed current state-of-the-art models on four popular benchmark datasets.
arXiv Detail & Related papers (2022-02-15T17:19:44Z)
- The FreshPRINCE: A Simple Transformation Based Pipeline Time Series Classifier [0.0]
We look at whether the complexity of the algorithms considered state of the art is really necessary.
Often, the first approach suggested is a simple pipeline of summary statistics or other time series feature extraction approaches.
We test these approaches on the UCR time series dataset archive, looking to see if TSC literature has overlooked the effectiveness of these approaches.
arXiv Detail & Related papers (2022-01-28T11:23:58Z)
- Interpretable Feature Construction for Time Series Extrinsic Regression [0.028675177318965035]
In some application domains, the target variable is numerical and the problem is known as time series extrinsic regression (TSER).
We suggest an extension of a Bayesian method for robust and interpretable feature construction and selection in the context of TSER.
Our approach takes a relational view of TSER: (i) we build varied and simple representations of the time series, stored in a relational data scheme; then (ii) a propositionalisation technique is applied to build interpretable features from the secondary tables in order to "flatten" the data.
arXiv Detail & Related papers (2021-03-15T08:12:19Z)
- Time Series Extrinsic Regression [6.5513221781395465]
Time Series Extrinsic Regression (TSER) is a regression task whose aim is to learn the relationship between a time series and a continuous scalar variable.
We benchmark existing solutions and adaptations of TSC algorithms on a novel archive of 19 TSER datasets.
Our results show that the state-of-the-art TSC algorithm Rocket, when adapted for regression, achieves the highest overall accuracy (a rough sketch of this kind of adaptation appears after this entry).
arXiv Detail & Related papers (2020-06-23T00:15:10Z)
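The Rocket result above refers to a random convolutional kernel transform feeding a linear (ridge) regressor. The following is a minimal sketch of that general recipe, assuming a simplified kernel sampler (no dilation or padding) and an arbitrary kernel count; it follows the spirit of ROCKET's max and proportion-of-positive-values pooling but is not the implementation benchmarked in these papers.

```python
# Simplified ROCKET-style transform for regression: random convolutional kernels,
# pooled to (max, proportion of positive values) per kernel, then ridge regression.
# The kernel sampling below is a rough approximation (no dilation or padding).
import numpy as np
from sklearn.linear_model import RidgeCV


def rocket_like_transform(X, n_kernels=500, seed=0):
    rng = np.random.default_rng(seed)
    n_cases, _ = X.shape
    features = np.zeros((n_cases, 2 * n_kernels))
    for k in range(n_kernels):
        klen = int(rng.choice([7, 9, 11]))
        weights = rng.normal(size=klen)
        weights -= weights.mean()               # mean-centred kernel weights
        bias = rng.uniform(-1.0, 1.0)
        for i in range(n_cases):
            conv = np.convolve(X[i], weights, mode="valid") + bias
            features[i, 2 * k] = conv.max()             # max pooling
            features[i, 2 * k + 1] = (conv > 0).mean()  # PPV pooling
    return features


# Toy usage on synthetic series with a continuous response.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(60, 120))
y_train = X_train[:, :30].mean(axis=1)
Z_train = rocket_like_transform(X_train)            # same kernels via fixed seed
reg = RidgeCV(alphas=np.logspace(-3, 3, 10)).fit(Z_train, y_train)
print(reg.predict(rocket_like_transform(X_train[:5])))
```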