Ti-MAE: Self-Supervised Masked Time Series Autoencoders
- URL: http://arxiv.org/abs/2301.08871v1
- Date: Sat, 21 Jan 2023 03:20:23 GMT
- Title: Ti-MAE: Self-Supervised Masked Time Series Autoencoders
- Authors: Zhe Li, Zhongwen Rao, Lujia Pan, Pengyun Wang, Zenglin Xu
- Abstract summary: We propose a novel framework named Ti-MAE, in which the input time series are assumed to follow an integrated distribution.
Ti-MAE randomly masks out embedded time series data and learns an autoencoder to reconstruct them at the point level.
Experiments on several public real-world datasets demonstrate that our framework of masked autoencoding could learn strong representations directly from the raw data.
- Score: 16.98069693152999
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multivariate time series forecasting has been an increasingly popular
topic in various applications and scenarios. Recently, contrastive learning and
Transformer-based models have achieved good performance in many long-term time
series forecasting tasks. However, several issues remain in existing methods.
First, the training paradigm of contrastive learning is inconsistent with the
downstream prediction tasks, leading to inaccurate prediction results. Second,
existing Transformer-based models, which rely on similar patterns in historical
time series data to predict future values, generally induce severe distribution
shift problems and do not fully leverage the sequence information compared to
self-supervised methods. To address these issues, we propose a novel framework
named Ti-MAE, in which the input time series are assumed to follow an integrated
distribution. In detail, Ti-MAE randomly masks out embedded time series data and
learns an autoencoder to reconstruct them at the point level. Ti-MAE adopts
masked modeling (rather than contrastive learning) as the auxiliary task and
bridges existing representation learning and generative Transformer-based
methods, reducing the gap between upstream and downstream forecasting tasks
while still making full use of the original time series data. Experiments on
several public real-world datasets demonstrate that our masked autoencoding
framework can learn strong representations directly from raw data, yielding
better performance in time series forecasting and classification tasks.
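To make the method concrete, here is a minimal sketch of point-level masked
time-series autoencoding in the spirit of the abstract: time steps are embedded,
a large random fraction is masked, a Transformer encoder processes only the
visible steps, and a lightweight decoder reconstructs the full series so an MSE
loss can be taken on the masked positions. All names, layer sizes, the masking
ratio, and the use of PyTorch are illustrative assumptions, not details taken
from the paper's implementation.

```python
import torch
import torch.nn as nn


class MaskedTSAutoencoder(nn.Module):
    """Sketch of MAE-style pre-training on raw multivariate time series."""

    def __init__(self, n_vars: int, d_model: int = 64,
                 mask_ratio: float = 0.75, max_len: int = 512):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.embed = nn.Linear(n_vars, d_model)               # point-wise embedding
        self.pos = nn.Parameter(0.02 * torch.randn(1, max_len, d_model))
        self.mask_token = nn.Parameter(torch.zeros(1, 1, d_model))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=1)
        self.head = nn.Linear(d_model, n_vars)                # point-level reconstruction

    def forward(self, x: torch.Tensor):
        # x: (batch, seq_len, n_vars) raw time series windows.
        B, T, _ = x.shape
        tokens = self.embed(x) + self.pos[:, :T]
        # Randomly keep a small visible subset of time steps; the rest are masked.
        n_keep = max(1, int(T * (1 - self.mask_ratio)))
        keep_idx = torch.rand(B, T, device=x.device).argsort(dim=1)[:, :n_keep]
        visible = torch.gather(
            tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1)))
        latent = self.encoder(visible)                        # encode visible steps only
        # Rebuild the full-length sequence: encoded tokens at visible positions,
        # a shared learnable mask token everywhere else.
        full = self.mask_token.expand(B, T, -1).clone()
        full.scatter_(1, keep_idx.unsqueeze(-1).expand(-1, -1, full.size(-1)), latent)
        recon = self.head(self.decoder(full + self.pos[:, :T]))
        # Point-level reconstruction loss, computed on masked positions only.
        mask = torch.ones(B, T, dtype=torch.bool, device=x.device)
        mask.scatter_(1, keep_idx, False)
        loss = ((recon - x) ** 2)[mask].mean()
        return loss, recon


model = MaskedTSAutoencoder(n_vars=7)
x = torch.randn(32, 96, 7)        # e.g. a batch of 96-step windows with 7 variables
loss, _ = model(x)
loss.backward()
```

A companion sketch showing how such a pre-trained encoder could be reused for
downstream forecasting, the upstream/downstream gap the abstract emphasizes,
appears after the related-papers list below.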
Related papers
- Multi-Patch Prediction: Adapting LLMs for Time Series Representation
Learning [22.28251586213348]
aLLM4TS is an innovative framework that adapts Large Language Models (LLMs) for time-series representation learning.
A distinctive element of our framework is the patch-wise decoding layer, which departs from previous methods reliant on sequence-level decoding.
arXiv Detail & Related papers (2024-02-07T13:51:26Z)
- Unified Training of Universal Time Series Forecasting Transformers [104.56318980466742]
We present a Masked Encoder-based Universal Time Series Forecasting Transformer (Moirai).
Moirai is trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains.
Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
arXiv Detail & Related papers (2024-02-04T20:00:45Z)
- TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam as a simple but effective self-supervised pre-training framework for time series based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z)
- Timer: Generative Pre-trained Transformers Are Large Time Series Models [83.03091523806668]
This paper aims at the early development of large time series models (LTSM).
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
arXiv Detail & Related papers (2024-02-04T06:55:55Z)
- Time-LLM: Time Series Forecasting by Reprogramming Large Language Models [110.20279343734548]
Time series forecasting holds significant importance in many real-world dynamic systems.
We present Time-LLM, a reprogramming framework to repurpose large language models for time series forecasting.
Time-LLM is a powerful time series learner that outperforms state-of-the-art, specialized forecasting models.
arXiv Detail & Related papers (2023-10-03T01:31:25Z)
- TimeMAE: Self-Supervised Representations of Time Series with Decoupled Masked Autoencoders [55.00904795497786]
We propose TimeMAE, a novel self-supervised paradigm for learning transferable time series representations based on transformer networks.
TimeMAE learns enriched contextual representations of time series with a bidirectional encoding scheme.
To solve the discrepancy issue incurred by newly injected masked embeddings, we design a decoupled autoencoder architecture.
arXiv Detail & Related papers (2023-03-01T08:33:16Z)
- MTSMAE: Masked Autoencoders for Multivariate Time-Series Forecasting [6.497816402045097]
We present a self-supervised pre-training approach based on Masked Autoencoders (MAE), called MTSMAE, which can significantly improve performance over supervised learning without pre-training.
arXiv Detail & Related papers (2022-10-04T03:06:21Z)
- Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z)
- A Transformer-based Framework for Multivariate Time Series Representation Learning [12.12960851087613]
Pre-trained models can potentially be used for downstream tasks such as regression, classification, forecasting, and missing value imputation.
We show that our modeling approach represents the most successful method employing unsupervised learning of multivariate time series presented to date.
We demonstrate that unsupervised pre-training of our transformer models offers a substantial performance benefit over fully supervised learning.
arXiv Detail & Related papers (2020-10-06T15:14:46Z)
- Deep Transformer Models for Time Series Forecasting: The Influenza Prevalence Case [2.997238772148965]
Time series data are prevalent in many scientific and engineering disciplines.
We present a new approach to time series forecasting using Transformer-based machine learning models.
We show that the forecasting results produced by our approach are favorably comparable to the state-of-the-art.
arXiv Detail & Related papers (2020-01-23T00:22:22Z)
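Several of the papers above (e.g. MTSMAE and the 2020 Transformer-based
framework) make the same point as Ti-MAE: masked or unsupervised pre-training
pays off on downstream forecasting. Continuing the pre-training sketch given
after the abstract, the snippet below is a minimal, assumption-laden
illustration of that downstream step: the pre-trained embedding and encoder are
reused and a small linear head maps the encoded context window to the forecast
horizon. The class and parameter names (Forecaster, horizon) are hypothetical
and not taken from any of the papers listed here.

```python
import torch
import torch.nn as nn


class Forecaster(nn.Module):
    """Reuse the pre-trained MaskedTSAutoencoder encoder with a forecasting head."""

    def __init__(self, pretrained: MaskedTSAutoencoder, n_vars: int, horizon: int):
        super().__init__()
        # Share the pre-trained point-wise embedding, positions, and encoder.
        self.embed, self.pos, self.encoder = pretrained.embed, pretrained.pos, pretrained.encoder
        d_model = pretrained.head.in_features
        # Lightweight head: pooled encoder output -> next `horizon` steps of all variables.
        self.head = nn.Linear(d_model, horizon * n_vars)
        self.horizon, self.n_vars = horizon, n_vars

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, context_len, n_vars); no masking at fine-tuning time.
        T = x.size(1)
        z = self.encoder(self.embed(x) + self.pos[:, :T])    # (B, T, D)
        out = self.head(z.mean(dim=1))                        # pool over time, project
        return out.view(-1, self.horizon, self.n_vars)


# Fine-tune end-to-end (or freeze the encoder for a linear probe) with MSE:
pretrained = MaskedTSAutoencoder(n_vars=7)                    # normally loaded from pre-training
model = Forecaster(pretrained, n_vars=7, horizon=24)
context, target = torch.randn(8, 96, 7), torch.randn(8, 24, 7)
loss = nn.functional.mse_loss(model(context), target)
loss.backward()
```

Mean-pooling plus a single linear projection is just one simple design choice;
patch-wise decoding (as in aLLM4TS above) or a full decoder head are natural
alternatives.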
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.