TS-Diffusion: Generating Highly Complex Time Series with Diffusion Models
- URL: http://arxiv.org/abs/2311.03303v1
- Date: Mon, 6 Nov 2023 17:52:08 GMT
- Title: TS-Diffusion: Generating Highly Complex Time Series with Diffusion Models
- Authors: Yangming Li
- Abstract summary: We consider a class of time series exhibiting three common problematic properties: sampling irregularities, missingness, and large feature-temporal dimensions.
We introduce a general model, TS-Diffusion, to process such complex time series.
We have conducted extensive experiments on multiple time-series datasets, demonstrating that TS-Diffusion achieves excellent results on both conventional and complex time series.
- Score: 12.646560434352478
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While current generative models have achieved promising performance in time-series synthesis, they either make strong assumptions about the data format (e.g., regularities) or rely on pre-processing approaches (e.g., interpolations) to simplify the raw data. In this work, we consider a class of time series with three common problematic properties, namely sampling irregularities, missingness, and large feature-temporal dimensions, and introduce a general model, TS-Diffusion, to process such complex time series. Our model consists of three parts under the framework of point processes. The first part is a neural ordinary differential equation (ODE) encoder that converts time series into dense representations, using a jump technique to capture sampling irregularities and a self-attention mechanism to handle missing values. The second part is a diffusion model that learns from these time-series representations, whose distribution can be complex because of their high dimensionality. The third part is a decoder, built on another ODE, that generates time series with irregularities and missing values from their representations. We have conducted extensive experiments on multiple time-series datasets, demonstrating that TS-Diffusion achieves excellent results on both conventional and complex time series and significantly outperforms previous baselines.
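The abstract outlines a three-part pipeline: an ODE encoder with jumps and self-attention, a diffusion model over the learned representations, and an ODE decoder. Below is a minimal, self-contained PyTorch sketch of the first two parts, written under stated assumptions: the Euler integration, the GRU-cell jump update, the per-channel attention over observed values, and all module names and sizes are illustrative choices rather than the authors' implementation, and the ODE decoder (which would mirror the encoder) is omitted for brevity.

```python
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    """Vector field f(h) of a latent neural ODE (time-invariant for brevity)."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))

    def forward(self, h):
        return self.net(h)

class JumpODEEncoder(nn.Module):
    """Evolve a hidden state by an ODE between irregular observation times and
    apply a discrete 'jump' update at each observation; self-attention over
    per-channel tokens (with missing channels masked out) stands in for the
    paper's missing-value handling (an illustrative simplification)."""
    def __init__(self, n_feat, dim, euler_steps=4):
        super().__init__()
        self.func = ODEFunc(dim)
        self.tok = nn.Linear(2, dim)               # embeds (value, channel id)
        self.attn = nn.MultiheadAttention(dim, num_heads=2, batch_first=True)
        self.jump = nn.GRUCell(dim, dim)
        self.euler_steps, self.dim = euler_steps, dim

    def forward(self, times, values, mask):
        # times: (B, T); values: (B, T, F); mask: (B, T, F) with 1 = observed.
        B, T, F = values.shape
        cid = torch.arange(F, device=values.device).float() / F
        h = values.new_zeros(B, self.dim)
        prev_t = torch.zeros(B, device=values.device)
        for i in range(T):
            dt = ((times[:, i] - prev_t) / self.euler_steps).unsqueeze(-1)
            for _ in range(self.euler_steps):      # Euler flow between observations
                h = h + dt * self.func(h)
            tok = self.tok(torch.stack([values[:, i], cid.expand(B, F)], dim=-1))
            pad = mask[:, i] == 0                  # attention ignores missing channels
            tok, _ = self.attn(tok, tok, tok, key_padding_mask=pad)
            x = (tok * mask[:, i].unsqueeze(-1)).sum(1) / mask[:, i].sum(1, keepdim=True)
            h = self.jump(x, h)                    # jump update at the observation
            prev_t = times[:, i]
        return h                                   # dense representation of the series

class LatentDiffusion(nn.Module):
    """DDPM-style noise-prediction objective on the latent representation."""
    def __init__(self, dim, n_steps=100):
        super().__init__()
        self.eps = nn.Sequential(nn.Linear(dim + 1, 128), nn.SiLU(), nn.Linear(128, dim))
        betas = torch.linspace(1e-4, 0.02, n_steps)
        self.register_buffer("abar", torch.cumprod(1.0 - betas, dim=0))

    def loss(self, z):
        t = torch.randint(0, self.abar.numel(), (z.size(0),), device=z.device)
        a = self.abar[t].unsqueeze(-1)
        noise = torch.randn_like(z)
        z_t = a.sqrt() * z + (1 - a).sqrt() * noise          # forward noising
        t_in = t.float().unsqueeze(-1) / self.abar.numel()   # scalar time embedding
        return ((self.eps(torch.cat([z_t, t_in], dim=-1)) - noise) ** 2).mean()

# Toy usage on an irregular, partially observed batch.
B, T, F, D = 8, 12, 5, 32
times = torch.sort(torch.rand(B, T), dim=1).values           # irregular timestamps
values = torch.randn(B, T, F)
mask = (torch.rand(B, T, F) > 0.3).float()
mask[..., 0] = 1.0               # keep one channel observed so attention is defined
enc, diff = JumpODEEncoder(F, D), LatentDiffusion(D)
loss = diff.loss(enc(times, values, mask))
loss.backward()
```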
Related papers
- Retrieval-Augmented Diffusion Models for Time Series Forecasting [19.251274915003265]
We propose a Retrieval-Augmented Time series Diffusion model (RATD).
RATD consists of two parts: an embedding-based retrieval process and a reference-guided diffusion model.
Our approach leverages meaningful samples from the database to guide sampling, maximizing the utilization of the dataset.
arXiv Detail & Related papers (2024-10-24T13:14:39Z)
- Moirai-MoE: Empowering Time Series Foundation Models with Sparse Mixture of Experts [103.725112190618]
This paper introduces Moirai-MoE, which uses a single input/output projection layer while delegating the modeling of diverse time series patterns to a sparse mixture of experts.
Extensive experiments on 39 datasets demonstrate the superiority of Moirai-MoE over existing foundation models in both in-distribution and zero-shot scenarios.
arXiv Detail & Related papers (2024-10-14T13:01:11Z)
- Unified Training of Universal Time Series Forecasting Transformers [104.56318980466742]
We present the Masked Encoder-based Universal Time Series Forecasting Transformer (Moirai).
Moirai is trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains.
Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
arXiv Detail & Related papers (2024-02-04T20:00:45Z)
- Timer: Generative Pre-trained Transformers Are Large Time Series Models [83.03091523806668]
This paper aims at the early development of large time series models (LTSM).
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
arXiv Detail & Related papers (2024-02-04T06:55:55Z)
- Generative Time Series Forecasting with Diffusion, Denoise, and Disentanglement [51.55157852647306]
Time series forecasting has been a widely explored task of great importance in many applications.
Real-world time series are often recorded over a short time period, resulting in a large gap between deep models and the limited, noisy data.
We address the time series forecasting problem with generative modeling, proposing a bidirectional variational auto-encoder equipped with diffusion, denoise, and disentanglement.
arXiv Detail & Related papers (2023-01-08T12:20:46Z)
- SeqLink: A Robust Neural-ODE Architecture for Modelling Partially Observed Time Series [11.261457967759688]
We introduce SeqLink, an innovative neural architecture designed to enhance the robustness of sequence representation.
We demonstrate that SeqLink improves the modelling of intermittent time series, consistently outperforming state-of-the-art approaches.
arXiv Detail & Related papers (2022-12-07T10:25:59Z)
- Tripletformer for Probabilistic Interpolation of Irregularly sampled Time Series [6.579888565581481]
We present a novel encoder-decoder architecture called "Tripletformer" for probabilistic interpolation of irregularly sampled time series with missing values.
This attention-based model operates on sets of observations, where each element is a (time, channel, value) triple; a minimal sketch of this encoding appears after this list.
Results indicate an improvement in negative log-likelihood error of up to 32% on real-world datasets and 85% on synthetic datasets.
arXiv Detail & Related papers (2022-10-05T08:31:05Z)
- STING: Self-attention based Time-series Imputation Networks using GAN [4.052758394413726]
We propose STING (Self-attention based Time-series Imputation Networks using GAN).
We take advantage of generative adversarial networks and bidirectional recurrent neural networks to learn latent representations of the time series.
Experimental results on three real-world datasets demonstrate that STING outperforms the existing state-of-the-art methods in terms of imputation accuracy.
arXiv Detail & Related papers (2022-09-22T06:06:56Z)
- HyperTime: Implicit Neural Representation for Time Series [131.57172578210256]
Implicit neural representations (INRs) have recently emerged as a powerful tool that provides an accurate and resolution-independent encoding of data.
In this paper, we analyze the representation of time series using INRs, comparing different activation functions in terms of reconstruction accuracy and training convergence speed.
We propose a hypernetwork architecture that leverages INRs to learn a compressed latent representation of an entire time series dataset; a minimal INR sketch appears after this list.
arXiv Detail & Related papers (2022-08-11T14:05:51Z)
- Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model called the Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z)
- Learning from Irregularly-Sampled Time Series: A Missing Data Perspective [18.493394650508044]
Irregularly-sampled time series occur in many domains including healthcare.
We model irregularly-sampled time series data as a sequence of index-value pairs sampled from a continuous but unobserved function.
We propose learning methods for this framework based on variational autoencoders and generative adversarial networks.
arXiv Detail & Related papers (2020-08-17T20:01:55Z)
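Several entries above (Tripletformer, and the missing-data perspective) encode irregular observations as (time, channel, value) triples processed with set-style attention. Below is a minimal sketch of that encoding in the spirit of the Tripletformer summary; the layer sizes, the sum of a learned channel embedding with a (time, value) projection, and the use of a standard Transformer encoder layer are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TripletEncoder(nn.Module):
    """Attend over a set of (time, channel, value) observation triples."""
    def __init__(self, n_channels, dim=64):
        super().__init__()
        self.chan = nn.Embedding(n_channels, dim)   # learned channel embedding
        self.proj = nn.Linear(2, dim)               # embeds the (time, value) pair
        self.attn = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)

    def forward(self, t, c, v, pad):
        # t, v: (B, N) floats; c: (B, N) channel ids; pad: (B, N), True = padding.
        tok = self.proj(torch.stack([t, v], dim=-1)) + self.chan(c)
        return self.attn(tok, src_key_padding_mask=pad)   # (B, N, dim)

# Toy usage: 20 observations scattered over 5 channels, no padding.
B, N, C = 4, 20, 5
t, v = torch.rand(B, N), torch.randn(B, N)
c = torch.randint(0, C, (B, N))
pad = torch.zeros(B, N, dtype=torch.bool)
out = TripletEncoder(C)(t, c, v, pad)   # one embedding per observed triple
```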
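For the HyperTime entry, the building block is an implicit neural representation: a small network fit to map timestamps to values, queryable at any resolution. Here is a minimal SIREN-style sketch fitting one toy series; the sine activations, the w0 = 30 frequency scale, the layer sizes, and the training loop are conventional assumptions rather than the paper's hypernetwork setup.

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by a scaled sine activation (SIREN-style)."""
    def __init__(self, d_in, d_out, w0=30.0):
        super().__init__()
        self.lin, self.w0 = nn.Linear(d_in, d_out), w0

    def forward(self, t):
        return torch.sin(self.w0 * self.lin(t))

inr = nn.Sequential(SineLayer(1, 64), SineLayer(64, 64), nn.Linear(64, 1))
t = torch.linspace(0, 1, 200).unsqueeze(-1)          # timestamps in [0, 1]
x = torch.sin(8 * t) + 0.1 * torch.randn_like(t)     # toy series to encode
opt = torch.optim.Adam(inr.parameters(), lr=1e-3)
for _ in range(500):                                  # fit the INR to the series
    opt.zero_grad()
    loss = ((inr(t) - x) ** 2).mean()
    loss.backward()
    opt.step()
# The fitted INR is resolution-independent: query it at any timestamp.
x_dense = inr(torch.linspace(0, 1, 2000).unsqueeze(-1))
```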