Revealing the Power of Spatial-Temporal Masked Autoencoders in
Multivariate Time Series Forecasting
- URL: http://arxiv.org/abs/2309.15169v1
- Date: Tue, 26 Sep 2023 18:05:19 GMT
- Title: Revealing the Power of Spatial-Temporal Masked Autoencoders in
Multivariate Time Series Forecasting
- Authors: Jiarui Sun, Yujie Fan, Chin-Chia Michael Yeh, Wei Zhang, Girish
Chowdhary
- Abstract summary: We propose an MTS forecasting framework that leverages masked autoencoders to enhance the performance of spatial-temporal baseline models.
In the pretraining stage, an encoder-decoder architecture is employed to process partially visible MTS data.
In the fine-tuning stage, the encoder is retained, and the original decoder from existing spatial-temporal models is appended for forecasting.
- Score: 17.911251232225094
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multivariate time series (MTS) forecasting involves predicting future time
series data based on historical observations. Existing research primarily
emphasizes the development of complex spatial-temporal models that capture
spatial dependencies and temporal correlations among time series variables
explicitly. However, recent advances have been impeded by challenges relating
to data scarcity and model robustness. To address these issues, we propose
Spatial-Temporal Masked Autoencoders (STMAE), an MTS forecasting framework that
leverages masked autoencoders to enhance the performance of spatial-temporal
baseline models. STMAE consists of two learning stages. In the pretraining
stage, an encoder-decoder architecture is employed. The encoder processes the
partially visible MTS data produced by a novel dual-masking strategy, including
biased random walk-based spatial masking and patch-based temporal masking.
Subsequently, the decoders aim to reconstruct the masked counterparts from both
spatial and temporal perspectives. The pretraining stage establishes a
challenging pretext task, compelling the encoder to learn robust
spatial-temporal patterns. In the fine-tuning stage, the pretrained encoder is
retained, and the original decoder from existing spatial-temporal models is
appended for forecasting. Extensive experiments are conducted on multiple MTS
benchmarks. The promising results demonstrate that integrating STMAE into
various spatial-temporal models can largely enhance their MTS forecasting
capability.
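The dual-masking strategy described above can be sketched in code. This is a minimal illustration only, assuming NumPy arrays, a small patch length, and illustrative masking ratios; the function names, parameters, and restart logic are assumptions for exposition, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def patch_temporal_mask(x, patch_len=4, ratio=0.5):
    """Patch-based temporal masking: hide whole contiguous time patches
    of a (num_series, num_steps) MTS window. True = visible."""
    n, t = x.shape
    n_patches = t // patch_len
    n_masked = int(n_patches * ratio)
    masked = rng.choice(n_patches, size=n_masked, replace=False)
    mask = np.ones_like(x, dtype=bool)
    for p in masked:
        mask[:, p * patch_len:(p + 1) * patch_len] = False
    return mask

def random_walk_spatial_mask(adj, ratio=0.25, bias=0.9):
    """Biased random-walk spatial masking: mask the set of nodes visited
    by a walk on the variable graph (adjacency matrix `adj`).
    With probability `bias` the walk follows an edge, else it restarts
    at a random node. False = masked node."""
    n = adj.shape[0]
    target = int(n * ratio)
    cur = int(rng.integers(n))
    visited = {cur}
    while len(visited) < target:
        neigh = np.flatnonzero(adj[cur])
        if neigh.size and rng.random() < bias:
            cur = int(rng.choice(neigh))   # follow an edge (biased step)
        else:
            cur = int(rng.integers(n))     # random restart
        visited.add(cur)
    mask = np.ones(n, dtype=bool)
    mask[list(visited)] = False
    return mask

# Toy usage: 8 series, 12 time steps, random sparse variable graph.
x = rng.standard_normal((8, 12))
adj = (rng.random((8, 8)) < 0.3).astype(int)
np.fill_diagonal(adj, 0)
t_mask = patch_temporal_mask(x)        # hides 1 of 3 patches entirely
s_mask = random_walk_spatial_mask(adj) # hides 2 of 8 nodes
```

In the pretraining stage, only `x[s_mask][:, ...]` restricted to visible entries would be fed to the encoder, and the two decoders would reconstruct the masked spatial and temporal counterparts.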
Related papers
- Unified Training of Universal Time Series Forecasting Transformers [104.56318980466742]
We present a Masked Encoder-based Universal Time Series Forecasting Transformer (Moirai).
Moirai is trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains.
Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
arXiv Detail & Related papers (2024-02-04T20:00:45Z) - Timer: Generative Pre-trained Transformers Are Large Time Series Models [83.03091523806668]
This paper aims at the early development of large time series models (LTSM)
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
arXiv Detail & Related papers (2024-02-04T06:55:55Z) - HiMTM: Hierarchical Multi-Scale Masked Time Series Modeling for
Long-Term Forecasting [18.59792043113792]
HiMTM is a hierarchical multi-scale masked time series modeling method designed for long-term forecasting.
It comprises four integral components, including (1) a hierarchical multi-scale transformer (HMT) that captures temporal information at different scales, and (2) a decoupled encoder-decoder (DED) that forces the encoder to focus on feature extraction while the decoder focuses on pretext tasks.
We conduct extensive experiments on 7 mainstream datasets to prove that HiMTM has obvious advantages over contemporary self-supervised and end-to-end learning methods.
arXiv Detail & Related papers (2024-01-10T09:00:03Z) - Spatial-Temporal-Decoupled Masked Pre-training for Spatiotemporal Forecasting [15.446085872077898]
We propose a self-supervised pre-training framework that employs two decoupled masked autoencoders to reconstruct spatiotemporal series along the spatial and temporal dimensions.
The rich-context representations learned through such reconstruction can be seamlessly integrated with downstream predictors of arbitrary architectures to augment their performance.
arXiv Detail & Related papers (2023-12-01T11:43:49Z) - Time-LLM: Time Series Forecasting by Reprogramming Large Language Models [110.20279343734548]
Time series forecasting holds significant importance in many real-world dynamic systems.
We present Time-LLM, a reprogramming framework to repurpose large language models for time series forecasting.
Time-LLM is a powerful time series learner that outperforms state-of-the-art, specialized forecasting models.
arXiv Detail & Related papers (2023-10-03T01:31:25Z) - TimeMAE: Self-Supervised Representations of Time Series with Decoupled
Masked Autoencoders [55.00904795497786]
We propose TimeMAE, a novel self-supervised paradigm for learning transferrable time series representations based on transformer networks.
The TimeMAE learns enriched contextual representations of time series with a bidirectional encoding scheme.
To solve the discrepancy issue incurred by newly injected masked embeddings, we design a decoupled autoencoder architecture.
arXiv Detail & Related papers (2023-03-01T08:33:16Z) - SimMTM: A Simple Pre-Training Framework for Masked Time-Series Modeling [82.69579113377192]
SimMTM is a simple pre-training framework for Masked Time-series Modeling.
SimMTM recovers masked time points by the weighted aggregation of multiple neighbors outside the manifold.
SimMTM achieves state-of-the-art fine-tuning performance compared to the most advanced time series pre-training methods.
arXiv Detail & Related papers (2023-02-02T04:12:29Z) - Ti-MAE: Self-Supervised Masked Time Series Autoencoders [16.98069693152999]
We propose a novel framework named Ti-MAE, in which the input time series are assumed to follow an integrated distribution.
Ti-MAE randomly masks out embedded time series data and learns an autoencoder to reconstruct them at the point-level.
Experiments on several public real-world datasets demonstrate that our framework of masked autoencoding could learn strong representations directly from the raw data.
arXiv Detail & Related papers (2023-01-21T03:20:23Z) - Enhancing Spatiotemporal Prediction Model using Modular Design and
Beyond [2.323220706791067]
It is challenging to predict sequences that vary in both time and space.
The mainstream approach is to model spatial and temporal structures at the same time.
A modular design is proposed, which splits the sequence model into two modules: a spatial encoder-decoder and a predictor.
arXiv Detail & Related papers (2022-10-04T10:09:35Z) - SatMAE: Pre-training Transformers for Temporal and Multi-Spectral
Satellite Imagery [74.82821342249039]
We present SatMAE, a pre-training framework for temporal or multi-spectral satellite imagery based on Masked Autoencoder (MAE)
To leverage temporal information, we include a temporal embedding along with independently masking image patches across time.
arXiv Detail & Related papers (2022-07-17T01:35:29Z) - Time Series Generation with Masked Autoencoder [0.0]
Masked autoencoders with interpolators (InterpoMAE) are scalable self-supervised generators for time series.
InterpoMAE uses an interpolator rather than mask tokens to restore the latent representations for missing patches in the latent space.
arXiv Detail & Related papers (2022-01-14T08:11:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.