Time Series Generation with Masked Autoencoder
- URL: http://arxiv.org/abs/2201.07006v1
- Date: Fri, 14 Jan 2022 08:11:09 GMT
- Title: Time Series Generation with Masked Autoencoder
- Authors: Mengyue Zha
- Abstract summary: Masked autoencoders with interpolators (InterpoMAE) are scalable self-supervised generators for time series.
InterpoMAE uses an interpolator rather than mask tokens to restore the latent representations for missing patches in the latent space.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper shows that masked autoencoders with interpolators (InterpoMAE) are
scalable self-supervised generators for time series. InterpoMAE masks random
patches from the input time series and restores the missing patches in the
latent space by an interpolator. The core design is that InterpoMAE uses an
interpolator rather than mask tokens to restore the latent representations for
missing patches in the latent space. This design enables more efficient and
effective capture of temporal dynamics with bidirectional information.
InterpoMAE allows for explicit control over the diversity of synthetic data by
changing the size and number of masked patches. Our approach consistently and
significantly outperforms state-of-the-art (SoTA) benchmarks of unsupervised
learning in time series generation on several real datasets. The synthetic data
produced show promising scaling behavior in various downstream tasks such as
data augmentation, imputation and denoising.
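The masking-and-restoration step described above can be sketched numerically: hide whole patches of a series, then fill each gap from its visible neighbours. The snippet below is a toy plain-Python analogue; the function name, the explicit `masked_patches` argument, and raw-value linear interpolation are illustrative assumptions, since InterpoMAE interpolates learned latent representations with a trained interpolator rather than raw values.

```python
def mask_and_interpolate(series, patch_size, masked_patches):
    """Toy analogue of InterpoMAE's core step: hide whole patches of a
    series, then restore each hidden patch by linear interpolation
    between its visible neighbours. The paper performs this in latent
    space with a learned interpolator; raw-value interpolation here is
    a simplifying assumption for illustration."""
    restored = list(series)
    n = len(series)
    for p in sorted(masked_patches):
        lo = p * patch_size               # first index of the masked patch
        hi = lo + patch_size - 1          # last index of the masked patch
        left = series[lo - 1] if lo > 0 else series[hi + 1]
        right = series[hi + 1] if hi + 1 < n else series[lo - 1]
        for k in range(patch_size):       # linear ramp across the gap
            t = (k + 1) / (patch_size + 1)
            restored[lo + k] = (1 - t) * left + t * right
    return restored
```

Changing `patch_size` and the number of entries in `masked_patches` mirrors the paper's knob for controlling the diversity of the synthetic output.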
Related papers
- ColorMAE: Exploring data-independent masking strategies in Masked AutoEncoders [53.3185750528969]
Masked AutoEncoders (MAE) have emerged as a robust self-supervised framework.
We introduce a data-independent method, termed ColorMAE, which generates different binary mask patterns by filtering random noise.
We demonstrate our strategy's superiority in downstream tasks compared to random masking.
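Generating binary masks by filtering random noise can be sketched as follows. The function name `noise_mask`, the box-blur filter, and the quantile threshold are assumptions for illustration, not the authors' exact pipeline; ColorMAE explores several noise "colors", of which a low-pass blur is only one loose analogue.

```python
import random

def noise_mask(h, w, mask_ratio=0.75, radius=1, seed=0):
    """Sketch of a data-independent mask: low-pass filter white noise
    with a box blur, then threshold at the quantile that yields the
    requested mask ratio."""
    rng = random.Random(seed)
    noise = [[rng.random() for _ in range(w)] for _ in range(h)]
    # Box blur: average each cell over a (2*radius+1)^2 neighbourhood,
    # clamped at the grid edges.
    blurred = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [noise[y][x]
                    for y in range(max(0, i - radius), min(h, i + radius + 1))
                    for x in range(max(0, j - radius), min(w, j + radius + 1))]
            blurred[i][j] = sum(vals) / len(vals)
    # Threshold so that round(mask_ratio * h * w) cells fall below the cut.
    flat = sorted(v for row in blurred for v in row)
    cut = flat[int(mask_ratio * h * w)]
    return [[blurred[i][j] < cut for j in range(w)] for i in range(h)]
```

Unlike purely random masking, the blur correlates neighbouring cells, so masked regions form contiguous blobs rather than salt-and-pepper patterns.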
arXiv Detail & Related papers (2024-07-17T22:04:00Z)
- Leveraging 2D Information for Long-term Time Series Forecasting with Vanilla Transformers [55.475142494272724]
Time series prediction is crucial for understanding and forecasting complex dynamics in various domains.
We introduce GridTST, a model that combines the benefits of two approaches using innovative multi-directional attentions.
The model consistently delivers state-of-the-art performance across various real-world datasets.
arXiv Detail & Related papers (2024-05-22T16:41:21Z)
- Parallel Decoding via Hidden Transfer for Lossless Large Language Model Acceleration [54.897493351694195]
We propose a novel parallel decoding approach, termed "hidden transfer", which decodes multiple successive tokens simultaneously in a single forward pass.
In terms of acceleration metrics, we outperform all the single-model acceleration techniques, including Medusa and Self-Speculative decoding.
arXiv Detail & Related papers (2024-04-18T09:17:06Z)
- Revealing the Power of Spatial-Temporal Masked Autoencoders in Multivariate Time Series Forecasting [17.911251232225094]
We propose an MTS forecasting framework that leverages masked autoencoders to enhance the performance of spatial-temporal baseline models.
In the pretraining stage, an encoder-decoder architecture is employed to process partially visible MTS data.
In the fine-tuning stage, the encoder is retained, and the original decoder from existing spatial-temporal models is appended for forecasting.
arXiv Detail & Related papers (2023-09-26T18:05:19Z)
- TimeMAE: Self-Supervised Representations of Time Series with Decoupled Masked Autoencoders [55.00904795497786]
We propose TimeMAE, a novel self-supervised paradigm for learning transferrable time series representations based on transformer networks.
The TimeMAE learns enriched contextual representations of time series with a bidirectional encoding scheme.
To solve the discrepancy issue incurred by newly injected masked embeddings, we design a decoupled autoencoder architecture.
arXiv Detail & Related papers (2023-03-01T08:33:16Z)
- FormerTime: Hierarchical Multi-Scale Representations for Multivariate Time Series Classification [53.55504611255664]
FormerTime is a hierarchical representation model for improving the classification capacity for the multivariate time series classification task.
It exhibits three merits: (1) learning hierarchical multi-scale representations from time series data, (2) inheriting the strengths of both transformers and convolutional networks, and (3) tackling the efficiency challenges incurred by the self-attention mechanism.
arXiv Detail & Related papers (2023-02-20T07:46:14Z)
- Ti-MAE: Self-Supervised Masked Time Series Autoencoders [16.98069693152999]
We propose a novel framework named Ti-MAE, in which the input time series are assumed to follow an integrated distribution.
Ti-MAE randomly masks out embedded time series data and learns an autoencoder to reconstruct them at the point-level.
Experiments on several public real-world datasets demonstrate that our framework of masked autoencoding could learn strong representations directly from the raw data.
arXiv Detail & Related papers (2023-01-21T03:20:23Z)
- Self-supervised Transformer for Multivariate Clinical Time-Series with Missing Values [7.9405251142099464]
We present STraTS (Self-supervised Transformer for TimeSeries) model.
It treats time-series as a set of observation triplets instead of using the traditional dense matrix representation.
It shows better prediction performance than state-of-the-art methods for mortality prediction, especially when labeled data is limited.
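The triplet encoding can be sketched directly: a dense time-by-variable matrix with missing cells becomes a set of (time, variable, value) observations, with missing entries simply dropped instead of imputed. Function and argument names below are illustrative, not taken from the STraTS code.

```python
def to_triplets(matrix, variables, times, missing=None):
    """Toy version of STraTS's input encoding: convert a dense
    time x variable matrix with missing entries into observation
    triplets (time, variable, value), skipping missing cells rather
    than imputing them."""
    triplets = []
    for i, t in enumerate(times):
        for j, var in enumerate(variables):
            v = matrix[i][j]
            if v is not missing:              # keep only observed cells
                triplets.append((t, var, v))
    return triplets
```

Because only observed cells are emitted, irregular sampling and missingness need no placeholder values in the model input.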
arXiv Detail & Related papers (2021-07-29T19:39:39Z)
- Explaining Time Series Predictions with Dynamic Masks [91.3755431537592]
We propose dynamic masks (Dynamask) to explain predictions of a machine learning model.
With synthetic and real-world data, we demonstrate that the dynamic underpinning of Dynamask, together with its parsimony, offers a neat improvement in the identification of feature importance over time.
The modularity of Dynamask makes it ideal as a plug-in to increase the transparency of a wide range of machine learning models in areas such as medicine and finance.
arXiv Detail & Related papers (2021-06-09T18:01:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.