Time Series Generation with Masked Autoencoder
- URL: http://arxiv.org/abs/2201.07006v1
- Date: Fri, 14 Jan 2022 08:11:09 GMT
- Title: Time Series Generation with Masked Autoencoder
- Authors: Mengyue Zha
- Abstract summary: Masked autoencoders with interpolators (InterpoMAE) are scalable self-supervised generators for time series.
InterpoMAE uses an interpolator rather than mask tokens to restore the latent representations for missing patches in the latent space.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper shows that masked autoencoders with interpolators (InterpoMAE) are
scalable self-supervised generators for time series. InterpoMAE masks random
patches from the input time series and restores the missing patches in the
latent space with an interpolator. The core design is that InterpoMAE uses an
interpolator rather than mask tokens to restore the latent representations for
missing patches in the latent space. This design enables more efficient and
effective capture of temporal dynamics with bidirectional information.
InterpoMAE allows explicit control over the diversity of synthetic data by
changing the size and number of masked patches. Our approach consistently and
significantly outperforms state-of-the-art (SoTA) benchmarks of unsupervised
learning in time series generation on several real datasets. The synthetic data
produced show promising scaling behavior in various downstream tasks such as
data augmentation, imputation and denoising.
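To make the pipeline in the abstract concrete, the following is a minimal, hypothetical sketch of the idea: split the series into patches, mask a random subset, encode only the visible patches, and restore the latent vectors at masked positions with an interpolator instead of learned mask tokens. The module choices (patch length, transformer sizes, a convolutional interpolator, the class name `InterpoMAESketch`) are illustrative assumptions; the abstract does not specify the actual architecture.

```python
# Minimal sketch (assumed details, not the authors' code): mask random patches,
# encode the visible ones, and restore masked latents with an interpolator
# rather than with mask tokens.
import torch
import torch.nn as nn


class InterpoMAESketch(nn.Module):
    def __init__(self, patch_len=16, d_model=64, n_heads=4, n_layers=2, max_patches=128):
        super().__init__()
        self.patch_len = patch_len
        self.d_model = d_model
        self.embed = nn.Linear(patch_len, d_model)               # patch -> latent
        self.pos = nn.Parameter(0.02 * torch.randn(1, max_patches, d_model))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), n_layers)
        # "Interpolator": a temporal conv that fills latents at masked positions
        # from neighbouring visible latents (a stand-in for the paper's design).
        self.interpolator = nn.Conv1d(d_model + 1, d_model, kernel_size=5, padding=2)
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), n_layers)
        self.head = nn.Linear(d_model, patch_len)                # latent -> patch

    def forward(self, x, mask_ratio=0.5):
        # x: (batch, length), with length divisible by patch_len
        B, L = x.shape
        P = L // self.patch_len
        z = self.embed(x.view(B, P, self.patch_len)) + self.pos[:, :P]

        # Randomly keep a fixed number of visible patches per sample.
        n_keep = max(1, int(round(P * (1 - mask_ratio))))
        keep_idx, _ = torch.rand(B, P, device=x.device).argsort(dim=1)[:, :n_keep].sort(dim=1)
        idx = keep_idx.unsqueeze(-1).expand(-1, -1, self.d_model)
        z_visible = self.encoder(torch.gather(z, 1, idx))        # encode visible patches only

        # Scatter visible latents back to their positions; masked slots stay zero
        # and a binary channel marks them, then the interpolator restores them.
        z_grid = torch.zeros(B, P, self.d_model, device=x.device).scatter(1, idx, z_visible)
        missing = torch.ones(B, P, 1, device=x.device).scatter(1, keep_idx.unsqueeze(-1), 0.0)
        z_interp = self.interpolator(
            torch.cat([z_grid, missing], dim=-1).transpose(1, 2)).transpose(1, 2)
        z_full = torch.where(missing.bool(), z_interp, z_grid)   # only fill masked slots

        recon = self.head(self.decoder(z_full)).reshape(B, L)    # reconstruct the full series
        return ((recon - x) ** 2).mean(), recon
```

For instance, `loss, recon = InterpoMAESketch()(torch.randn(8, 256))` runs one reconstruction step on random data; varying `mask_ratio` and `patch_len` mirrors the abstract's point about controlling diversity through the size and number of masked patches.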
Related papers
- Double-Path Adaptive-correlation Spatial-Temporal Inverted Transformer for Stock Time Series Forecasting [1.864621482724548]
We propose a Double-Path Adaptive-correlation Spatial-Temporal Inverted Transformer (DPA-STIFormer) to more comprehensively extract dynamic spatial information from stock data.
Experiments conducted on four stock market datasets demonstrate state-of-the-art results, validating the model's superior capability in uncovering latent temporal-correlation patterns.
arXiv Detail & Related papers (2024-09-24T01:53:22Z)
- PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting [82.03373838627606]
Self-attention mechanism in Transformer architecture requires positional embeddings to encode temporal order in time series prediction.
We argue that this reliance on positional embeddings restricts the Transformer's ability to effectively represent temporal sequences.
We present a model integrating PRE with a standard Transformer encoder, demonstrating state-of-the-art performance on various real-world datasets.
arXiv Detail & Related papers (2024-08-20T01:56:07Z)
- ColorMAE: Exploring data-independent masking strategies in Masked AutoEncoders [53.3185750528969]
Masked AutoEncoders (MAE) have emerged as a robust self-supervised framework.
We introduce a data-independent method, termed ColorMAE, which generates different binary mask patterns by filtering random noise.
We demonstrate our strategy's superiority in downstream tasks compared to random masking (a minimal sketch of this noise-filtering idea follows after this list).
arXiv Detail & Related papers (2024-07-17T22:04:00Z)
- Leveraging 2D Information for Long-term Time Series Forecasting with Vanilla Transformers [55.475142494272724]
Time series prediction is crucial for understanding and forecasting complex dynamics in various domains.
We introduce GridTST, a model that combines the benefits of two approaches using innovative multi-directional attentions.
The model consistently delivers state-of-the-art performance across various real-world datasets.
arXiv Detail & Related papers (2024-05-22T16:41:21Z)
- Parallel Decoding via Hidden Transfer for Lossless Large Language Model Acceleration [54.897493351694195]
We propose a novel parallel decoding approach, namely hidden transfer, which decodes multiple successive tokens simultaneously in a single forward pass.
In terms of acceleration metrics, we outperform all the single-model acceleration techniques, including Medusa and Self-Speculative decoding.
arXiv Detail & Related papers (2024-04-18T09:17:06Z)
- TimeMAE: Self-Supervised Representations of Time Series with Decoupled Masked Autoencoders [55.00904795497786]
We propose TimeMAE, a novel self-supervised paradigm for learning transferrable time series representations based on transformer networks.
The TimeMAE learns enriched contextual representations of time series with a bidirectional encoding scheme.
To solve the discrepancy issue incurred by newly injected masked embeddings, we design a decoupled autoencoder architecture.
arXiv Detail & Related papers (2023-03-01T08:33:16Z)
- Ti-MAE: Self-Supervised Masked Time Series Autoencoders [16.98069693152999]
We propose a novel framework named Ti-MAE, in which the input time series are assumed to follow an integrated distribution.
Ti-MAE randomly masks out embedded time series data and learns an autoencoder to reconstruct them at the point-level.
Experiments on several public real-world datasets demonstrate that our framework of masked autoencoding could learn strong representations directly from the raw data.
arXiv Detail & Related papers (2023-01-21T03:20:23Z)
- Self-supervised Transformer for Multivariate Clinical Time-Series with Missing Values [7.9405251142099464]
We present STraTS (Self-supervised Transformer for TimeSeries) model.
It treats time-series as a set of observation triplets instead of using the traditional dense matrix representation.
It shows better prediction performance than state-of-the-art methods for mortality prediction, especially when labeled data is limited.
arXiv Detail & Related papers (2021-07-29T19:39:39Z)
- Explaining Time Series Predictions with Dynamic Masks [91.3755431537592]
We propose dynamic masks (Dynamask) to explain predictions of a machine learning model.
With synthetic and real-world data, we demonstrate that the dynamic underpinning of Dynamask, together with its parsimony, offer a neat improvement in the identification of feature importance over time.
The modularity of Dynamask makes it ideal as a plug-in to increase the transparency of a wide range of machine learning models in areas such as medicine and finance.
arXiv Detail & Related papers (2021-06-09T18:01:09Z)
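As referenced in the ColorMAE entry above, that paper generates binary mask patterns by filtering random noise rather than sampling masks uniformly. The sketch below is a small, hypothetical illustration of that idea for a 1D token sequence; the function name `noise_filtered_mask`, the Gaussian low-pass filter, and the quantile threshold are assumptions for demonstration, not the paper's exact filters.

```python
# Data-independent masking sketch: filter white noise, then threshold it to a
# binary mask. Assumed details, not the ColorMAE implementation.
import torch
import torch.nn.functional as F


def noise_filtered_mask(n_tokens: int, mask_ratio: float = 0.75,
                        kernel_size: int = 9, sigma: float = 2.0) -> torch.Tensor:
    """Return a binary mask (1 = masked) over n_tokens built from filtered random noise."""
    noise = torch.rand(1, 1, n_tokens)            # white noise, independent of any input data

    # A 1D Gaussian kernel acts as a low-pass filter that "colors" the noise.
    t = torch.arange(kernel_size, dtype=torch.float32) - (kernel_size - 1) / 2
    kernel = torch.exp(-0.5 * (t / sigma) ** 2)
    kernel = (kernel / kernel.sum()).view(1, 1, -1)
    smooth = F.conv1d(noise, kernel, padding=kernel_size // 2).squeeze()

    # Threshold at a quantile so roughly mask_ratio of the tokens end up masked.
    thresh = torch.quantile(smooth, 1.0 - mask_ratio)
    return (smooth >= thresh).float()


mask = noise_filtered_mask(n_tokens=196, mask_ratio=0.75)
print(int(mask.sum().item()), "of", mask.numel(), "tokens masked")
```

Because the noise is smoothed before thresholding, masked tokens cluster into contiguous runs instead of being scattered independently, which is the kind of structured, data-independent pattern the summary describes.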