Time-series Transformer Generative Adversarial Networks
- URL: http://arxiv.org/abs/2205.11164v1
- Date: Mon, 23 May 2022 10:04:21 GMT
- Title: Time-series Transformer Generative Adversarial Networks
- Authors: Padmanaba Srinivasan, William J. Knottenbelt
- Abstract summary: We consider limitations posed specifically on time-series data and present a model that can generate synthetic time-series.
A model that generates synthetic time-series data has two objectives: 1) to capture the stepwise conditional distribution of real sequences, and 2) to faithfully model the joint distribution of entire real sequences.
We present TsT-GAN, a framework that capitalises on the Transformer architecture to satisfy the desiderata and compare its performance against five state-of-the-art models on five datasets.
- Score: 5.254093731341154
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Many real-world tasks are plagued by limitations on data: in some instances
very little data is available and in others, data is protected by privacy
enforcing regulations (e.g. GDPR). We consider limitations posed specifically
on time-series data and present a model that can generate synthetic time-series
which can be used in place of real data. A model that generates synthetic
time-series data has two objectives: 1) to capture the stepwise conditional
distribution of real sequences, and 2) to faithfully model the joint
distribution of entire real sequences. Autoregressive models trained via
maximum likelihood estimation can be used in a system where previous
predictions are fed back in and used to predict future ones; in such models,
errors can accrue over time. Furthermore, a plausible initial value is
required, making MLE-based models not truly generative. Many downstream tasks
learn to model conditional distributions of the time-series; hence, synthetic
data drawn from a generative model must satisfy 1) in addition to 2). We
present TsT-GAN, a framework that capitalises on the Transformer architecture
to satisfy the desiderata and compare its performance against five
state-of-the-art models on five datasets and show that TsT-GAN achieves higher
predictive performance on all datasets.
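The closed-loop failure mode the abstract describes is easy to see concretely. Below is a minimal numpy sketch (not from the paper): a least-squares AR(1) fit, sampled by feeding its own predictions back in, which both requires a hand-supplied seed value and drifts as the coefficient error compounds.

```python
# Minimal sketch of the closed-loop rollout the abstract critiques: an
# autoregressive model trained via MLE is sampled by feeding its own
# predictions back in, so per-step errors compound, and a plausible seed
# value x0 must be supplied by hand.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-step predictor: an AR(1) fit by least squares (MLE under
# Gaussian noise). Any learned stepwise model has the same structure.
true_phi = 0.9
series = [0.0]
for _ in range(500):
    series.append(true_phi * series[-1] + rng.normal(scale=0.1))
x = np.array(series)
phi_hat = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])  # fitted coefficient

def rollout(x0, steps, phi):
    """Closed-loop generation: each prediction becomes the next input."""
    preds = [x0]  # a plausible initial value is required -- not truly generative
    for _ in range(steps):
        preds.append(phi * preds[-1])  # error in phi compounds step by step
    return np.array(preds)

# After t steps the rollout scales like phi_hat**t, so a small bias in
# phi_hat drifts ever further from the true conditional distribution.
print(rollout(x0=x[-1], steps=10, phi=phi_hat))
```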
Related papers
- sTransformer: A Modular Approach for Extracting Inter-Sequential and Temporal Information for Time-Series Forecasting [6.434378359932152]
We review and categorize existing Transformer-based models into two main types: (1) modifications to the model structure and (2) modifications to the input data.
We propose sTransformer, which introduces the Sequence and Temporal Convolutional Network (STCN) to fully capture both sequential and temporal information.
We compare our model with linear models and existing forecasting models on long-term time-series forecasting, achieving new state-of-the-art results.
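The abstract does not specify the STCN internals; as a hypothetical illustration only, the sketch below shows a causal dilated temporal convolution block of the kind often paired with attention to capture local temporal structure (all names invented):

```python
# Hypothetical sketch only: the summary does not detail the STCN, so this
# shows a generic causal dilated temporal convolution block. Invented names.
import torch
import torch.nn as nn

class TemporalConvBlock(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3, dilation: int = 2):
        super().__init__()
        # Left-pad so the convolution is causal (no future leakage).
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels)
        h = nn.functional.pad(x.transpose(1, 2), (self.pad, 0))
        h = self.conv(h).transpose(1, 2)
        return self.norm(x + h)  # residual connection

out = TemporalConvBlock(channels=8)(torch.randn(4, 32, 8))
print(out.shape)  # torch.Size([4, 32, 8])
```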
arXiv Detail & Related papers (2024-08-19T06:23:41Z)
- TimeAutoDiff: Combining Autoencoder and Diffusion model for time series tabular data synthesizing [13.385264002435145]
In this paper, we leverage the power of latent diffusion models to generate synthetic time series tabular data.
We tackle this problem by combining the ideas of the variational auto-encoder (VAE) and the denoising diffusion probabilistic model (DDPM).
Our model, named TimeAutoDiff, has several key advantages, including (1) generality: the ability to handle a broad spectrum of time series data, from single- to multi-sequence datasets.
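A hedged sketch of the latent-diffusion recipe this summary describes (module names and the noise schedule are invented stand-ins, not the authors' code): a VAE compresses the data into latents, and a DDPM is trained to denoise those latents:

```python
# Invented-name sketch of the VAE + DDPM combination: encode data into a
# latent, add noise at a random diffusion time, and train a denoiser to
# recover the noise; sampling would run the reverse process, then decode.
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, dim_in=16, dim_z=4):
        super().__init__()
        self.enc = nn.Linear(dim_in, 2 * dim_z)  # outputs mean and log-variance
        self.dec = nn.Linear(dim_z, dim_in)

    def encode(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

vae = VAE()
denoiser = nn.Linear(4 + 1, 4)  # predicts noise from (z_t, t); stand-in net

def ddpm_loss(x, T=100):
    z0 = vae.encode(x)
    t = torch.randint(1, T, (x.shape[0], 1)).float() / T
    alpha_bar = torch.cos(t * torch.pi / 2) ** 2   # simple noise schedule
    eps = torch.randn_like(z0)
    zt = alpha_bar.sqrt() * z0 + (1 - alpha_bar).sqrt() * eps
    return ((denoiser(torch.cat([zt, t], -1)) - eps) ** 2).mean()

print(ddpm_loss(torch.randn(8, 16)))
```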
arXiv Detail & Related papers (2024-06-23T06:32:27Z)
- Chronos: Learning the Language of Time Series [79.38691251254173]
Chronos is a framework for pretrained probabilistic time series models.
We show that Chronos models can leverage time series data from diverse domains to improve zero-shot accuracy on unseen forecasting tasks.
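One plausible reading of the "language of time series" framing is that series are scaled and quantized into a finite token vocabulary for a language model; the bin count and mean-scaling below are illustrative assumptions, not the paper's exact scheme:

```python
# Minimal sketch of tokenizing a series for a language model: scale, clip,
# and bin values into a finite vocabulary. Bin count and scaling are
# illustrative assumptions.
import numpy as np

def tokenize(series: np.ndarray, n_bins: int = 512, clip: float = 10.0):
    scale = np.abs(series).mean() or 1.0          # per-series mean scaling
    scaled = np.clip(series / scale, -clip, clip)
    edges = np.linspace(-clip, clip, n_bins - 1)  # uniform bin boundaries
    return np.digitize(scaled, edges), scale      # token ids in [0, n_bins)

def detokenize(tokens, scale, n_bins: int = 512, clip: float = 10.0):
    centers = np.linspace(-clip, clip, n_bins)    # approximate bin centers
    return centers[tokens] * scale

tokens, s = tokenize(np.sin(np.linspace(0, 6, 100)))
print(tokens[:10], detokenize(tokens, s)[:3])
```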
arXiv Detail & Related papers (2024-03-12T16:53:54Z)
- Timer: Generative Pre-trained Transformers Are Large Time Series Models [83.03091523806668]
This paper aims at the early development of large time series models (LTSM).
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
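As a schematic illustration (details invented), the unification can be pictured as reducing every task to "generate the missing span of a sequence", with only the mask location differing:

```python
# Illustrative sketch (details invented) of casting forecasting, imputation,
# and anomaly detection as one generative task: each becomes "generate the
# missing span of a sequence", differing only in where the mask sits.
import numpy as np

def as_generation_task(series: np.ndarray, task: str, horizon: int = 8):
    x = series.astype(float).copy()
    if task == "forecast":                 # mask the future
        x[-horizon:] = np.nan
    elif task == "impute":                 # mask an interior gap
        mid = len(x) // 2
        x[mid:mid + horizon] = np.nan
    # anomaly detection can reuse the same interface: mask a window and flag
    # points whose generated reconstruction deviates strongly from the input
    return x, series[np.isnan(x)]          # (input with gaps, span to generate)

x, y = as_generation_task(np.arange(32, dtype=float), "forecast")
print(int(np.isnan(x).sum()), y)           # 8 [24. 25. ... 31.]
```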
arXiv Detail & Related papers (2024-02-04T06:55:55Z)
- Learning Defect Prediction from Unrealistic Data [57.53586547895278]
Pretrained models of code have become popular choices for code understanding and generation tasks.
Such models tend to be large and require commensurate volumes of training data.
It has become popular to train models with far larger but less realistic datasets, such as functions with artificially injected bugs.
Models trained on such data tend to perform well only on similar data, while underperforming on real-world programs.
arXiv Detail & Related papers (2023-11-02T01:51:43Z)
- Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting [54.04430089029033]
We present Lag-Llama, a general-purpose foundation model for time series forecasting based on a decoder-only transformer architecture.
Lag-Llama is pretrained on a large corpus of diverse time series data from several domains, and demonstrates strong zero-shot generalization capabilities.
When fine-tuned on relatively small fractions of such previously unseen datasets, Lag-Llama achieves state-of-the-art performance.
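The lag-based input construction the model's name alludes to can be sketched as follows (the lag set is illustrative): each time step is represented by values at a set of past lags before being fed to the decoder-only transformer:

```python
# Minimal sketch of lag-feature construction: row t holds the series values
# at a set of past lags, which a decoder-only transformer then consumes
# token by token. The lag set here is an illustrative assumption.
import numpy as np

def lag_features(series: np.ndarray, lags=(1, 2, 3, 7, 14)):
    """Return an array of shape (T - max(lags), len(lags))."""
    m = max(lags)
    return np.stack([series[m - l : len(series) - l] for l in lags], axis=1)

feats = lag_features(np.arange(30, dtype=float))
print(feats.shape, feats[0])  # (16, 5) [13. 12. 11.  7.  0.]
```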
arXiv Detail & Related papers (2023-10-12T12:29:32Z)
- On the Stability of Iterative Retraining of Generative Models on their own Data [56.153542044045224]
We study the impact of training generative models on mixed datasets.
We first prove the stability of iterative training under the condition that the initial generative models approximate the data distribution well enough.
We empirically validate our theory on both synthetic and natural images by iteratively training normalizing flows and state-of-the-art diffusion models.
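The iterative retraining loop under study can be sketched with a toy Gaussian "generative model": each round refits on a mix of real data and the previous round's samples, and the fixed point shows the stabilising effect of keeping real data in the mix:

```python
# Schematic sketch of iterative retraining on mixed data, with a toy
# Gaussian "generative model": each round trains on real data plus the
# previous model's samples, and one can watch whether the fit stays put.
import numpy as np

rng = np.random.default_rng(1)
real = rng.normal(loc=0.0, scale=1.0, size=5000)

mu, sigma = 0.5, 1.5          # deliberately imperfect initial model
for round_ in range(10):
    synthetic = rng.normal(mu, sigma, size=5000)
    mix = np.concatenate([real, synthetic])      # mixed dataset
    mu, sigma = mix.mean(), mix.std()            # "retrain" = refit MLE
    print(f"round {round_}: mu={mu:+.3f} sigma={sigma:.3f}")
# With enough real data in the mix, (mu, sigma) contracts toward (0, 1),
# echoing the stability condition the paper proves.
```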
arXiv Detail & Related papers (2023-09-30T16:41:04Z)
- Deep Latent State Space Models for Time-Series Generation [68.45746489575032]
We propose LS4, a generative model for sequences with latent variables evolving according to a state space ODE.
Inspired by recent deep state space models (S4), we achieve speedups by leveraging a convolutional representation of LS4.
We show that LS4 significantly outperforms previous continuous-time generative models in terms of marginal distribution, classification, and prediction scores on real-world datasets.
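A schematic sketch (not the LS4 code) of why a convolutional representation helps: a linear latent ODE, once discretized, can be run either as a step-by-step recurrence or as a convolution with a precomputed kernel, and the two agree:

```python
# Schematic sketch (not the LS4 code): latents evolve by a linear ODE
# z' = A z + B x; after discretization this is a recurrence, and because it
# is linear it equals a convolution with a precomputed kernel.
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, -0.1]])  # illustrative stable dynamics
B = np.array([0.0, 1.0])
dt = 0.1
Ad = np.eye(2) + dt * A                    # Euler discretization of the ODE
Bd = dt * B

def run_recurrence(x):
    z, out = np.zeros(2), []
    for u in x:                            # sequential view: O(T) steps
        z = Ad @ z + Bd * u
        out.append(z[0])
    return np.array(out)

x = np.random.default_rng(2).normal(size=64)
# Convolutional view: kernel_k = (Ad^k Bd)[0], output = causal conv(x, kernel)
kernel = np.array([(np.linalg.matrix_power(Ad, k) @ Bd)[0] for k in range(64)])
conv_out = np.array([(kernel[:t + 1][::-1] * x[:t + 1]).sum() for t in range(64)])
print(np.allclose(run_recurrence(x), conv_out))  # True
```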
arXiv Detail & Related papers (2022-12-24T15:17:42Z)
- Transformer-based conditional generative adversarial network for multivariate time series generation [0.0]
Conditional generation of time-dependent data is a task that has attracted much interest.
Recent work proposed a Transformer-based time-series generative adversarial network (TTS-GAN).
We extend the TTS-GAN by conditioning its generated output on a particular encoded context.
We show that this transformer-based CGAN can generate realistic high-dimensional and long data sequences under different kinds of conditions.
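A hedged sketch (invented module names) of the conditioning mechanism this summary describes: the generator consumes an encoded context alongside the noise vector, so generation can be steered by the condition:

```python
# Invented-name sketch of conditional generation: the generator receives an
# encoded context alongside the noise vector, so its output depends on the
# condition. Shapes and architecture are illustrative, not the paper's.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, noise_dim=16, ctx_dim=8, seq_len=24, channels=3):
        super().__init__()
        self.seq_len, self.channels = seq_len, channels
        self.net = nn.Sequential(
            nn.Linear(noise_dim + ctx_dim, 128), nn.ReLU(),
            nn.Linear(128, seq_len * channels),
        )

    def forward(self, z, ctx):
        h = self.net(torch.cat([z, ctx], dim=-1))   # condition enters here
        return h.view(-1, self.seq_len, self.channels)

g = ConditionalGenerator()
fake = g(torch.randn(4, 16), torch.randn(4, 8))     # ctx = encoded condition
print(fake.shape)  # torch.Size([4, 24, 3])
```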
arXiv Detail & Related papers (2022-10-05T08:29:33Z)
- TTS-CGAN: A Transformer Time-Series Conditional GAN for Biosignal Data Augmentation [5.607676459156789]
We present TTS-CGAN, a conditional GAN model that can be trained on existing multi-class datasets and generate class-specific synthetic time-series sequences.
Synthetic sequences generated by our model are indistinguishable from real ones, and can be used to complement or replace real signals of the same type.
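Reusing the ConditionalGenerator sketch above with a class label as the condition, a usage-style sketch (hypothetical, not the authors' API) of the augmentation workflow: draw class-specific synthetic sequences to complement scarce real signals:

```python
# Hypothetical usage sketch of class-conditional augmentation, reusing the
# ConditionalGenerator sketch above with ctx_dim = number of classes.
import torch
import torch.nn.functional as F

n_classes = 5
g = ConditionalGenerator(noise_dim=16, ctx_dim=n_classes)  # sketch from above
real_minority = torch.randn(20, 24, 3)     # stand-in for scarce real signals

def sample_class(class_id: int, n: int) -> torch.Tensor:
    z = torch.randn(n, 16)
    label = F.one_hot(torch.full((n,), class_id), n_classes).float()
    with torch.no_grad():
        return g(z, label)                 # class-specific synthetic sequences

augmented = torch.cat([real_minority, sample_class(class_id=2, n=100)])
print(augmented.shape)  # torch.Size([120, 24, 3])
```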
arXiv Detail & Related papers (2022-06-28T01:01:34Z)
- SeDyT: A General Framework for Multi-Step Event Forecasting via Sequence Modeling on Dynamic Entity Embeddings [6.314274045636102]
Event forecasting is a critical and challenging task in Temporal Knowledge Graph reasoning.
We propose SeDyT, a discriminative framework that performs sequence modeling on the dynamic entity embeddings.
By combining temporal Graph Neural Network models and sequence models, SeDyT achieves an average of 2.4% MRR improvement.
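Schematically (all components below are stand-ins): a temporal GNN would emit one embedding per entity per timestamp, and a sequence model consumes that history to produce multi-step event predictions:

```python
# Schematic sketch (stand-in components): a temporal GNN would produce one
# embedding per entity per timestamp; a sequence model then consumes the
# embedding history to forecast events several steps ahead.
import torch
import torch.nn as nn

T, d = 12, 32
entity_history = torch.randn(1, T, d)      # stand-in for temporal-GNN output

class MultiStepForecaster(nn.Module):
    def __init__(self, d, horizon=3, n_events=50):
        super().__init__()
        self.rnn = nn.GRU(d, d, batch_first=True)
        self.heads = nn.ModuleList(nn.Linear(d, n_events) for _ in range(horizon))

    def forward(self, h):
        _, last = self.rnn(h)               # summarize the embedding sequence
        return [head(last[0]) for head in self.heads]  # one logit set per step

logits = MultiStepForecaster(d)(entity_history)
print(len(logits), logits[0].shape)  # 3 torch.Size([1, 50])
```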
arXiv Detail & Related papers (2021-09-09T20:32:48Z)