MTSMAE: Masked Autoencoders for Multivariate Time-Series Forecasting
- URL: http://arxiv.org/abs/2210.02199v1
- Date: Tue, 4 Oct 2022 03:06:21 GMT
- Title: MTSMAE: Masked Autoencoders for Multivariate Time-Series Forecasting
- Authors: Peiwang Tang and Xianchao Zhang
- Abstract summary: We present a self-supervised pre-training approach based on Masked Autoencoders (MAE), called MTSMAE, which significantly improves performance over supervised learning without pre-training.
- Score: 6.497816402045097
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large-scale self-supervised pre-training of Transformer architectures has
significantly boosted performance on various tasks in natural language
processing (NLP) and computer vision (CV). However, there is little research on
processing multivariate time-series with pre-trained Transformers, and in
particular, masking time-series for self-supervised learning remains an open
problem. Unlike language and image processing, the information density of
time-series makes this research more difficult, and the challenge is compounded
by the fact that previous patch embedding and masking methods do not carry over.
In this paper, we propose a patch embedding method tailored to the data
characteristics of multivariate time-series, and we present a self-supervised
pre-training approach based on Masked Autoencoders (MAE), called MTSMAE, which
significantly improves performance over supervised learning without
pre-training. Evaluating our method on several common multivariate time-series
datasets from different fields and with different characteristics, the
experimental results demonstrate that our method significantly outperforms the
best methods currently available.
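To make the general recipe concrete, the sketch below shows a generic patch embedding for multivariate time-series followed by MAE-style random masking of patch tokens. This is an illustrative sketch only: the patch length, hidden width, masking ratio, and the simple linear projection are assumptions chosen for the example, not the exact MTSMAE configuration.

```python
# Illustrative sketch: generic patch embedding for multivariate time-series
# plus MAE-style random masking. All sizes below are assumptions, not the
# configuration used in the MTSMAE paper.
import torch
import torch.nn as nn


class PatchEmbedding(nn.Module):
    """Split a series of shape (batch, length, variables) into non-overlapping
    patches along time and project each patch to d_model."""

    def __init__(self, patch_len: int, n_vars: int, d_model: int):
        super().__init__()
        self.patch_len = patch_len
        self.proj = nn.Linear(patch_len * n_vars, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, length, n_vars = x.shape
        n_patches = length // self.patch_len
        x = x[:, : n_patches * self.patch_len]           # drop the remainder
        x = x.reshape(b, n_patches, self.patch_len * n_vars)
        return self.proj(x)                              # (b, n_patches, d_model)


def random_mask(tokens: torch.Tensor, mask_ratio: float = 0.75):
    """Keep a random subset of patch tokens, MAE-style. Returns the kept tokens
    and the shuffled index order (its argsort restores the original order)."""
    b, n, d = tokens.shape
    n_keep = int(n * (1 - mask_ratio))
    noise = torch.rand(b, n, device=tokens.device)
    ids_shuffle = noise.argsort(dim=1)
    ids_keep = ids_shuffle[:, :n_keep]
    kept = torch.gather(tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, d))
    return kept, ids_shuffle


# Toy usage: 32 series, 96 time steps, 7 variables, patches of 8 steps.
x = torch.randn(32, 96, 7)
tokens = PatchEmbedding(patch_len=8, n_vars=7, d_model=64)(x)
visible, _ = random_mask(tokens, mask_ratio=0.75)
print(tokens.shape, visible.shape)   # (32, 12, 64) and (32, 3, 64)
```

Following the standard MAE recipe, only the visible tokens would be passed through the encoder during pre-training, and a lightweight decoder would reconstruct the masked patches.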
Related papers
- MTSCI: A Conditional Diffusion Model for Multivariate Time Series Consistent Imputation [41.681869408967586]
The key research question is how to ensure imputation consistency, i.e., intra-consistency between observed and imputed values.
Previous methods rely solely on the inductive bias of the imputation targets to guide the learning process.
arXiv Detail & Related papers (2024-08-11T10:24:53Z) - Multi-Patch Prediction: Adapting LLMs for Time Series Representation
Learning [22.28251586213348]
aLLM4TS is an innovative framework that adapts Large Language Models (LLMs) for time-series representation learning.
A distinctive element of our framework is the patch-wise decoding layer, which departs from previous methods reliant on sequence-level decoding.
arXiv Detail & Related papers (2024-02-07T13:51:26Z) - Graph Spatiotemporal Process for Multivariate Time Series Anomaly
Detection with Missing Values [67.76168547245237]
We introduce a novel framework called GST-Pro, which utilizes a graph spatiotemporal process and an anomaly scorer to detect anomalies.
Our experimental results show that the GST-Pro method can effectively detect anomalies in time series data and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2024-01-11T10:10:16Z) - Time-LLM: Time Series Forecasting by Reprogramming Large Language Models [110.20279343734548]
Time series forecasting holds significant importance in many real-world dynamic systems.
We present Time-LLM, a reprogramming framework to repurpose large language models for time series forecasting.
Time-LLM is a powerful time series learner that outperforms state-of-the-art, specialized forecasting models.
arXiv Detail & Related papers (2023-10-03T01:31:25Z) - TimeMAE: Self-Supervised Representations of Time Series with Decoupled
Masked Autoencoders [55.00904795497786]
We propose TimeMAE, a novel self-supervised paradigm for learning transferable time series representations based on transformer networks.
TimeMAE learns enriched contextual representations of time series with a bidirectional encoding scheme.
To solve the discrepancy issue incurred by newly injected masked embeddings, we design a decoupled autoencoder architecture.
arXiv Detail & Related papers (2023-03-01T08:33:16Z) - FormerTime: Hierarchical Multi-Scale Representations for Multivariate
Time Series Classification [53.55504611255664]
FormerTime is a hierarchical representation model for improving the classification capacity for the multivariate time series classification task.
It exhibits three aspects of merits: (1) learning hierarchical multi-scale representations from time series data, (2) inheriting the strengths of both transformers and convolutional networks, and (3) tackling the efficiency challenges incurred by the self-attention mechanism.
arXiv Detail & Related papers (2023-02-20T07:46:14Z) - Ti-MAE: Self-Supervised Masked Time Series Autoencoders [16.98069693152999]
We propose a novel framework named Ti-MAE, in which the input time series are assumed to follow an integrated distribution.
Ti-MAE randomly masks out embedded time series data and learns an autoencoder to reconstruct them at the point level (see the sketch after this list).
Experiments on several public real-world datasets demonstrate that our framework of masked autoencoding could learn strong representations directly from the raw data.
arXiv Detail & Related papers (2023-01-21T03:20:23Z) - Large Scale Mask Optimization Via Convolutional Fourier Neural Operator
and Litho-Guided Self Training [54.16367467777526]
We present a Convolutional Fourier Neural Operator (CFNO) that can efficiently learn mask optimization tasks.
For the first time, a machine learning-based framework outperforms state-of-the-art numerical mask optimizers.
arXiv Detail & Related papers (2022-07-08T16:39:31Z) - Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z) - Enhancing Transformer Efficiency for Multivariate Time Series
Classification [12.128991867050487]
We propose a methodology to investigate the relationship between model efficiency, accuracy, and complexity.
Comprehensive experiments on benchmark MTS datasets illustrate the effectiveness of our method.
arXiv Detail & Related papers (2022-03-28T03:25:19Z) - A Transformer-based Framework for Multivariate Time Series
Representation Learning [12.12960851087613]
Pre-trained models can potentially be used for downstream tasks such as regression, classification, forecasting, and missing value imputation.
We show that our modeling approach is the most successful method employing unsupervised learning of multivariate time series presented to date.
We demonstrate that unsupervised pre-training of our transformer models offers a substantial performance benefit over fully supervised learning.
arXiv Detail & Related papers (2020-10-06T15:14:46Z)
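As referenced in the Ti-MAE entry above, the sketch below illustrates a point-level masked-reconstruction objective: a random subset of time steps is hidden, the model reconstructs the full series, and the loss is computed only on the masked positions. The masking ratio and the throwaway model are placeholder assumptions for the example, not the architecture of any paper listed here.

```python
# Minimal sketch of a point-level masked-reconstruction objective in the
# spirit of Ti-MAE; the mask ratio and model are placeholders, not the
# paper's actual architecture.
import torch
import torch.nn as nn


def masked_point_loss(model: nn.Module, x: torch.Tensor, mask_ratio: float = 0.75):
    """x: (batch, length, variables). Zero out a random subset of time steps,
    reconstruct the full series, and score MSE only on the masked positions."""
    b, length, _ = x.shape
    mask = torch.rand(b, length, device=x.device) < mask_ratio   # True = masked
    x_corrupt = x.masked_fill(mask.unsqueeze(-1), 0.0)
    x_hat = model(x_corrupt)                                     # same shape as x
    per_point = (x_hat - x).pow(2).mean(dim=-1)                  # (b, length)
    return (per_point * mask).sum() / mask.sum().clamp(min=1)


# Toy usage with a throwaway point-wise "autoencoder".
model = nn.Sequential(nn.Linear(7, 32), nn.GELU(), nn.Linear(32, 7))
loss = masked_point_loss(model, torch.randn(16, 96, 7))
loss.backward()
```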