Split Time Series into Patches: Rethinking Long-term Series Forecasting with Dateformer
- URL: http://arxiv.org/abs/2207.05397v1
- Date: Tue, 12 Jul 2022 08:58:44 GMT
- Title: Split Time Series into Patches: Rethinking Long-term Series Forecasting with Dateformer
- Authors: Julong Young, Huiqiang Wang, Junhui Chen, Feihu Huang, Jian Peng
- Abstract summary: Time is one of the most significant characteristics of time-series, yet it has received insufficient attention.
We propose Dateformer, which turns attention to modeling time instead of following the above practice.
Dateformer yields state-of-the-art accuracy with a remarkable 40% relative improvement and broadens the maximum credible forecasting range to a half-yearly level.
- Score: 17.454822366228335
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Time is one of the most significant characteristics of time-series, yet it has received insufficient attention. Prior time-series forecasting research has mainly focused on mapping a past subseries (the lookback window) to a future series (the forecast window), while the time of a series usually plays only an auxiliary role and is even completely ignored in most cases. Because of the point-wise processing within these windows, extrapolating the series to a longer-term future is difficult under this paradigm. To overcome this barrier, we propose a brand-new time-series forecasting framework named Dateformer, which turns attention to modeling time instead of following the above practice. Specifically, time-series are first split into patches by day to supervise the learning of dynamic date-representations with Date Encoder Representations from Transformers (DERT). These representations are then fed into a simple decoder to produce a coarser (or global) prediction, and are also used to help the model seek valuable information from the lookback window to learn a refined (or local) prediction. Dateformer obtains the final result by summing the above two parts. Our empirical studies on seven benchmarks show that this time-modeling method is more efficient for long-term series forecasting than sequence-modeling methods. Dateformer yields state-of-the-art accuracy with a remarkable 40% relative improvement and broadens the maximum credible forecasting range to a half-yearly level.
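The abstract describes a two-part forecast: a coarse global prediction decoded from learned date representations, plus a local refinement drawn from the lookback window, with the final output as their sum. Below is a minimal PyTorch-style sketch of that structure; the module name DateformerSketch, the layer sizes, the day patch length, and the reuse of the last encoded dates as stand-ins for future-date representations are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class DateformerSketch(nn.Module):
    """Illustrative sketch of the two-part prediction described in the abstract:
    a global forecast from per-day date representations plus a local refinement
    from the raw lookback window. All sizes and module choices are assumptions."""

    def __init__(self, points_per_day: int, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.points_per_day = points_per_day
        # DERT stand-in: a Transformer encoder over per-day patches of the series,
        # producing one dynamic representation per date.
        self.patch_embed = nn.Linear(points_per_day, d_model)
        self.date_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            num_layers=2,
        )
        # Simple decoder: map each future date representation to one day of values
        # (the coarse / global part of the prediction).
        self.global_head = nn.Linear(d_model, points_per_day)
        # Local refinement: attend from future date representations to the raw
        # lookback points to add a fine-grained correction.
        self.lookback_embed = nn.Linear(1, d_model)
        self.local_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.local_head = nn.Linear(d_model, points_per_day)

    def forward(self, lookback: torch.Tensor, horizon_days: int) -> torch.Tensor:
        # lookback: (batch, lookback_days * points_per_day); length must be a
        # whole number of days, and horizon_days <= lookback_days in this sketch.
        b = lookback.size(0)
        # 1) Split the series into patches by day.
        day_patches = lookback.view(b, -1, self.points_per_day)
        # 2) Encode patches into dynamic date representations (DERT stand-in).
        date_repr = self.date_encoder(self.patch_embed(day_patches))
        # Reuse the last encoded dates as queries for the forecast window
        # (a simplification; the paper derives representations for future dates).
        future_repr = date_repr[:, -horizon_days:, :]
        # 3) Coarse / global prediction directly from date representations.
        global_pred = self.global_head(future_repr)
        # 4) Local refinement: future dates attend over the lookback points.
        point_repr = self.lookback_embed(lookback.unsqueeze(-1))
        refined, _ = self.local_attn(future_repr, point_repr, point_repr)
        local_pred = self.local_head(refined)
        # 5) Final forecast = global + local, flattened back into a series.
        return (global_pred + local_pred).view(b, -1)
```

For hourly data, for example, points_per_day would be 24, and a 28-day lookback with a 7-day horizon maps a (batch, 672) history to a (batch, 168) forecast.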
Related papers
- Unified Training of Universal Time Series Forecasting Transformers [104.56318980466742]
We present Moirai, a masked encoder-based universal time series forecasting Transformer.
Moirai is trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains.
Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
arXiv Detail & Related papers (2024-02-04T20:00:45Z)
- TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam as a simple but effective self-supervised pre-training framework for time series based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z)
- Timer: Generative Pre-trained Transformers Are Large Time Series Models [83.03091523806668]
This paper aims at the early development of large time series models (LTSM).
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
arXiv Detail & Related papers (2024-02-04T06:55:55Z)
- Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting [54.04430089029033]
We present Lag-Llama, a general-purpose foundation model for time series forecasting based on a decoder-only transformer architecture.
Lag-Llama is pretrained on a large corpus of diverse time series data from several domains, and demonstrates strong zero-shot generalization capabilities.
When fine-tuned on relatively small fractions of such previously unseen datasets, Lag-Llama achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-10-12T12:29:32Z)
- Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain [54.67888148566323]
We introduce three large-scale time series forecasting datasets from the cloud operations domain.
We show that our pre-trained method is a strong zero-shot baseline and benefits from further scaling in both model and dataset size.
Accompanying these datasets and results is a suite of comprehensive benchmark results comparing classical and deep learning baselines to our pre-trained method.
arXiv Detail & Related papers (2023-10-08T08:09:51Z)
- Time-LLM: Time Series Forecasting by Reprogramming Large Language Models [110.20279343734548]
Time series forecasting holds significant importance in many real-world dynamic systems.
We present Time-LLM, a reprogramming framework to repurpose large language models for time series forecasting.
Time-LLM is a powerful time series learner that outperforms state-of-the-art, specialized forecasting models.
arXiv Detail & Related papers (2023-10-03T01:31:25Z)
- MPR-Net: Multi-Scale Pattern Reproduction Guided Universality Time Series Interpretable Forecasting [13.790498420659636]
Time series forecasting has received wide interest from existing research due to its broad applications and inherent challenges.
This paper proposes a forecasting model, MPR-Net. It first adaptively decomposes multi-scale historical series patterns using convolution operations, then constructs a pattern-extension forecasting method based on the prior knowledge of pattern reproduction, and finally reconstructs future patterns into the future series using deconvolution operations (a rough sketch of this pipeline appears after this list).
By leveraging the temporal dependencies present in the time series, MPR-Net not only achieves linear time complexity but also makes the forecasting process interpretable.
arXiv Detail & Related papers (2023-07-13T13:16:01Z)
- Time Series Forecasting via Semi-Asymmetric Convolutional Architecture with Global Atrous Sliding Window [0.0]
The proposed method in this paper is designed to address the problem of time series forecasting.
Most modern models focus only on a short range of information, which is fatal for problems such as time series forecasting.
We make three main contributions that are experimentally verified to have performance advantages.
arXiv Detail & Related papers (2023-01-31T15:07:31Z)
- VQ-AR: Vector Quantized Autoregressive Probabilistic Time Series Forecasting [10.605719154114354]
Time series models aim for accurate predictions of the future given the past, where the forecasts are used for important downstream tasks like business decision making.
In this paper, we introduce a novel autoregressive architecture, VQ-AR, which instead learns a discrete set of representations that are used to predict the future.
arXiv Detail & Related papers (2022-05-31T15:43:46Z)
- Optimal Latent Space Forecasting for Large Collections of Short Time Series Using Temporal Matrix Factorization [0.0]
It is a common practice to evaluate multiple methods and choose one of these methods or an ensemble for producing the best forecasts.
We propose a framework for forecasting short high-dimensional time series data by combining low-rank temporal matrix factorization and optimal model selection on latent time series.
arXiv Detail & Related papers (2021-12-15T11:39:21Z)
- Series Saliency: Temporal Interpretation for Multivariate Time Series Forecasting [30.054015098590874]
We present the series saliency framework for temporal interpretation for time series forecasting.
By extracting the "series images" from the sliding windows of the time series, we apply the saliency map segmentation.
Our framework generates temporal interpretations for the time series forecasting task while produces accurate time series forecast.
arXiv Detail & Related papers (2020-12-16T23:48:00Z)
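As a rough companion to the MPR-Net entry above, its decompose, extend, and reconstruct pipeline might be sketched as follows. The scale set, the linear pattern-extension step, the channel count, and the averaging of per-scale reconstructions are assumptions made for illustration, not the paper's actual design.

```python
import torch
import torch.nn as nn


class MPRNetSketch(nn.Module):
    """Illustrative sketch of the pipeline summarized above: decompose the history
    into multi-scale patterns with convolutions, extend each pattern sequence over
    the forecast horizon, and reconstruct the future series with deconvolutions."""

    def __init__(self, lookback: int, horizon: int, scales=(4, 8, 16), channels: int = 16):
        super().__init__()
        # One strided convolution per scale extracts down-sampled pattern sequences.
        self.decomposers = nn.ModuleList(
            [nn.Conv1d(1, channels, kernel_size=k, stride=k) for k in scales]
        )
        # A linear map from lookback-length to horizon-length pattern sequences
        # stands in for the pattern-reproduction / extension step.
        self.extenders = nn.ModuleList(
            [nn.Linear(lookback // k, horizon // k) for k in scales]
        )
        # Transposed convolutions reconstruct future patterns into future values.
        self.reconstructors = nn.ModuleList(
            [nn.ConvTranspose1d(channels, 1, kernel_size=k, stride=k) for k in scales]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, lookback) univariate history; lookback and horizon are assumed
        # divisible by every scale so the strided shapes line up.
        x = x.unsqueeze(1)  # (batch, 1, lookback)
        outputs = []
        for conv, extend, deconv in zip(self.decomposers, self.extenders, self.reconstructors):
            patterns = conv(x)                 # (batch, channels, lookback // k)
            extended = extend(patterns)        # (batch, channels, horizon // k)
            outputs.append(deconv(extended))   # (batch, 1, horizon)
        # Average the per-scale reconstructions into the final forecast.
        return torch.stack(outputs, dim=0).mean(dim=0).squeeze(1)
```

With lookback=96, horizon=48, and the default scales, every stage length divides evenly, which the strided Conv1d/ConvTranspose1d choices in this sketch rely on.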
This list is automatically generated from the titles and abstracts of the papers on this site.