A Bag of Receptive Fields for Time Series Extrinsic Predictions
- URL: http://arxiv.org/abs/2311.18029v1
- Date: Wed, 29 Nov 2023 19:13:10 GMT
- Title: A Bag of Receptive Fields for Time Series Extrinsic Predictions
- Authors: Francesco Spinnato and Riccardo Guidotti and Anna Monreale and Mirco
Nanni
- Abstract summary: High-dimensional time series data poses challenges due to its dynamic nature, varying lengths, and presence of missing values.
We propose BORF, a Bag-Of-Receptive-Fields model, which incorporates notions from time series convolution and 1D-SAX.
We evaluate BORF on Time Series Classification and Time Series Extrinsic Regression tasks using the full UEA and UCR repositories.
- Score: 8.172425535905038
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: High-dimensional time series data poses challenges due to its dynamic nature,
varying lengths, and presence of missing values. This kind of data requires
extensive preprocessing, limiting the applicability of existing Time Series
Classification and Time Series Extrinsic Regression techniques. For this
reason, we propose BORF, a Bag-Of-Receptive-Fields model, which incorporates
notions from time series convolution and 1D-SAX to handle univariate and
multivariate time series with varying lengths and missing values. We evaluate
BORF on Time Series Classification and Time Series Extrinsic Regression tasks
using the full UEA and UCR repositories, demonstrating its competitive
performance against state-of-the-art methods. Finally, we outline how this
representation can naturally provide saliency and feature-based explanations.
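The abstract builds on SAX-style symbolic representations turned into a bag of words. As a minimal illustration of that general idea (plain SAX with assumed window size, segment count, and alphabet — not BORF's actual pipeline, which additionally uses 1D-SAX slopes and convolution-style receptive fields):

```python
import numpy as np
from collections import Counter

# Quartile breakpoints of the standard normal: four equiprobable bins.
BREAKPOINTS = np.array([-0.6745, 0.0, 0.6745])
ALPHABET = "abcd"

def sax_word(window, n_segments=4):
    """Convert one window into a SAX word: z-normalize, average into
    segments (PAA), then map each segment mean to a symbol."""
    w = (window - window.mean()) / (window.std() + 1e-8)
    segments = w.reshape(n_segments, -1).mean(axis=1)
    return "".join(ALPHABET[np.searchsorted(BREAKPOINTS, s)] for s in segments)

def bag_of_words(series, window=16, stride=1):
    """Slide a window over the series and count the resulting SAX words."""
    words = [sax_word(series[i:i + window])
             for i in range(0, len(series) - window + 1, stride)]
    return Counter(words)

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 8 * np.pi, 128)) + 0.1 * rng.normal(size=128)
bag = bag_of_words(series)
```

The resulting word counts form a sparse feature vector that a standard classifier or regressor can consume, which is what makes bag-of-words representations attractive for both classification and extrinsic regression.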
Related papers
- Timer-XL: Long-Context Transformers for Unified Time Series Forecasting [67.83502953961505]
We present Timer-XL, a generative Transformer for unified time series forecasting.
Timer-XL achieves state-of-the-art performance across challenging forecasting benchmarks through a unified approach.
arXiv Detail & Related papers (2024-10-07T07:27:39Z)
- Unified Training of Universal Time Series Forecasting Transformers [104.56318980466742]
We present the Masked Encoder-based Universal Time Series Forecasting Transformer (Moirai).
Moirai is trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains.
Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
arXiv Detail & Related papers (2024-02-04T20:00:45Z)
- TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam, a simple but effective self-supervised pre-training framework for time series based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z)
- HyperTime: Implicit Neural Representation for Time Series [131.57172578210256]
Implicit neural representations (INRs) have recently emerged as a powerful tool that provides an accurate and resolution-independent encoding of data.
In this paper, we analyze the representation of time series using INRs, comparing different activation functions in terms of reconstruction accuracy and training convergence speed.
We propose a hypernetwork architecture that leverages INRs to learn a compressed latent representation of an entire time series dataset.
arXiv Detail & Related papers (2022-08-11T14:05:51Z)
- Don't overfit the history -- Recursive time series data augmentation [17.31522835086563]
We introduce a general framework for time series augmentation, which we call Recursive Interpolation Method, denoted as RIM.
We perform theoretical analysis to characterize the proposed RIM and to guarantee its test performance.
We apply RIM to diverse real world time series cases to achieve strong performance over non-augmented data on regression, classification, and reinforcement learning tasks.
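The entry above describes an interpolation-based augmentation scheme. As a hypothetical sketch in that spirit (a generic recursive blend toward the series mean; the paper's exact RIM recursion may differ):

```python
import numpy as np

def interpolation_augment(x, alpha=0.7, depth=3):
    """Hypothetical interpolation-style augmentation (an assumption, not
    the paper's exact RIM): recursively blend the series with its own
    mean, yielding progressively smoothed variants of the original."""
    out = [np.asarray(x, dtype=float)]
    cur = out[0]
    for _ in range(depth):
        cur = alpha * cur + (1 - alpha) * cur.mean()  # step toward the mean
        out.append(cur.copy())
    return out

variants = interpolation_augment(np.array([0.0, 1.0, 0.0, -1.0]))
```

Each variant preserves the mean of the original series while shrinking its variance by a factor of alpha per step, so the augmented set stays close to the original distribution.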
arXiv Detail & Related papers (2022-07-06T18:09:50Z)
- Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z)
- Novel Features for Time Series Analysis: A Complex Networks Approach [62.997667081978825]
Time series data are ubiquitous in several domains, such as climate, economics, and health care.
A recent conceptual approach relies on mapping time series to complex networks.
Network analysis can be used to characterize different types of time series.
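One widely used mapping from a time series to a network is the natural visibility graph (Lacasa et al.); whether this particular mapping is the one used above is an assumption, but it illustrates the idea of deriving network features from a series:

```python
def visibility_graph(y):
    """Naive O(n^3) natural visibility graph: connect points i < j when
    every intermediate point lies strictly below the straight line
    joining (i, y[i]) and (j, y[j])."""
    n = len(y)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            visible = all(
                y[k] < y[i] + (y[j] - y[i]) * (k - i) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                edges.add((i, j))
    return edges

# Standard network statistics (degree distribution, clustering, ...)
# computed on this graph then serve as features of the series.
edges = visibility_graph([3.0, 1.0, 2.0, 0.5, 4.0])
```

Consecutive points are always mutually visible, so the graph is connected by construction, and peaks in the series become high-degree hubs.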
arXiv Detail & Related papers (2021-10-11T13:46:28Z)
- Interpretable Feature Construction for Time Series Extrinsic Regression [0.028675177318965035]
In some application domains, the target variable is numerical; this problem is known as time series extrinsic regression (TSER).
We suggest an extension of a Bayesian method for robust and interpretable feature construction and selection in the context of TSER.
Our approach tackles TSER in a relational way: (i) we build various simple representations of the time series, which are stored in a relational data scheme; then (ii) a propositionalisation technique is applied to build interpretable features from the secondary tables, "flattening" the data.
arXiv Detail & Related papers (2021-03-15T08:12:19Z)
- Monash University, UEA, UCR Time Series Extrinsic Regression Archive [6.5513221781395465]
We aim to motivate and support the research into Time Series Extrinsic Regression (TSER) by introducing the first TSER benchmarking archive.
This archive contains 19 datasets from different domains, with a varying number of dimensions, unequal-length dimensions, and missing values.
In this paper, we introduce the datasets in this archive and provide an initial benchmark of existing models.
arXiv Detail & Related papers (2020-06-19T07:47:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.