Leveraging Image-based Generative Adversarial Networks for Time Series
Generation
- URL: http://arxiv.org/abs/2112.08060v2
- Date: Thu, 31 Aug 2023 12:41:13 GMT
- Title: Leveraging Image-based Generative Adversarial Networks for Time Series
Generation
- Authors: Justin Hellermann, Stefan Lessmann
- Abstract summary: We propose a two-dimensional image representation for time series, the Extended Intertemporal Return Plot (XIRP).
Our approach captures the intertemporal time series dynamics in a scale-invariant and invertible way, reducing training time and improving sample quality.
- Score: 4.541582055558865
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative models for images have gained significant attention in computer
vision and natural language processing due to their ability to generate
realistic samples from complex data distributions. To leverage the advances of
image-based generative models for the time series domain, we propose a
two-dimensional image representation for time series, the Extended
Intertemporal Return Plot (XIRP). Our approach captures the intertemporal time
series dynamics in a scale-invariant and invertible way, reducing training time
and improving sample quality. We benchmark synthetic XIRPs obtained by an
off-the-shelf Wasserstein GAN with gradient penalty (WGAN-GP) to other image
representations and models regarding similarity and predictive ability metrics.
Our novel, validated image representation for time series consistently and
significantly outperforms a state-of-the-art RNN-based generative model
regarding predictive ability. Further, we introduce an improved stochastic
inversion that substantially improves simulation quality regardless of the
representation, and we highlight the potential for transfer to other
domains.
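The abstract describes the XIRP as a scale-invariant, invertible image representation of a time series. The paper's exact construction is not given here, so the following is only a minimal sketch of the general idea under an assumption: that each matrix entry holds the log-return between two time points, which makes the map invertible given the first observation and invariant to rescaling the series.

```python
import numpy as np

def return_plot(x):
    """Hypothetical stand-in for an intertemporal return plot:
    entry (i, j) is the log-return between times i and j.
    Log-returns cancel a constant scale factor, so the image is
    scale-invariant in the sense the abstract describes."""
    logx = np.log(np.asarray(x, dtype=float))
    return logx[:, None] - logx[None, :]

def invert_return_plot(R, x0):
    """Recover the series from the matrix plus the first value:
    R[0, j] = log(x0) - log(x_j), hence x_j = x0 * exp(-R[0, j])."""
    return x0 * np.exp(-R[0])

x = [1.0, 1.2, 0.9, 1.5]
R = return_plot(x)
x_rec = invert_return_plot(R, x[0])
```

This is only an illustration of why a pairwise-return image can be inverted back to a series; the paper's XIRP additionally extends the plot to carry the information needed for full invertibility.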
Related papers
- TimeLDM: Latent Diffusion Model for Unconditional Time Series Generation [3.1810479039089667]
We propose TimeLDM, a novel latent diffusion model for high-quality time series generation.
TimeLDM is composed of a variational autoencoder that encodes time series into an informative and smoothed latent content.
We evaluate the ability of our method to generate synthetic time series with simulated and realistic datasets, and benchmark the performance against existing state-of-the-art methods.
arXiv Detail & Related papers (2024-07-05T01:47:20Z)
- Efficient Visual State Space Model for Image Deblurring [83.57239834238035]
Convolutional neural networks (CNNs) and Vision Transformers (ViTs) have achieved excellent performance in image restoration.
We propose a simple yet effective visual state space model (EVSSM) for image deblurring.
arXiv Detail & Related papers (2024-05-23T09:13:36Z)
- Image Inpainting via Tractable Steering of Diffusion Models [54.13818673257381]
This paper proposes to exploit the ability of Tractable Probabilistic Models (TPMs) to exactly and efficiently compute the constrained posterior.
Specifically, this paper adopts a class of expressive TPMs termed Probabilistic Circuits (PCs)
We show that our approach can consistently improve the overall quality and semantic coherence of inpainted images with only 10% additional computational overhead.
arXiv Detail & Related papers (2023-11-28T21:14:02Z)
- TcGAN: Semantic-Aware and Structure-Preserved GANs with Individual Vision Transformer for Fast Arbitrary One-Shot Image Generation [11.207512995742999]
One-shot image generation (OSG) with generative adversarial networks that learn from the internal patches of a given image has attracted worldwide attention.
We propose a novel structure-preserving method, TcGAN, with an individual vision transformer to overcome the shortcomings of existing one-shot image generation methods.
arXiv Detail & Related papers (2023-02-16T03:05:59Z)
- Universal Generative Modeling in Dual-domain for Dynamic MR Imaging [22.915796840971396]
We propose a k-space and image Dual-Domain collaborative Universal Generative Model (DD-UGM) to reconstruct highly under-sampled measurements.
More precisely, we extract prior components from both image and k-space domains via a universal generative model and adaptively handle these prior components for faster processing.
arXiv Detail & Related papers (2022-12-15T03:04:48Z)
- HyperTime: Implicit Neural Representation for Time Series [131.57172578210256]
Implicit neural representations (INRs) have recently emerged as a powerful tool that provides an accurate and resolution-independent encoding of data.
In this paper, we analyze the representation of time series using INRs, comparing different activation functions in terms of reconstruction accuracy and training convergence speed.
We propose a hypernetwork architecture that leverages INRs to learn a compressed latent representation of an entire time series dataset.
arXiv Detail & Related papers (2022-08-11T14:05:51Z)
- Auto-regressive Image Synthesis with Integrated Quantization [55.51231796778219]
This paper presents a versatile framework for conditional image generation.
It incorporates the inductive bias of CNNs and powerful sequence modeling of auto-regression.
Our method achieves superior diverse image generation performance as compared with the state-of-the-art.
arXiv Detail & Related papers (2022-07-21T22:19:17Z)
- Global Context with Discrete Diffusion in Vector Quantised Modelling for Image Generation [19.156223720614186]
The integration of Vector Quantised Variational AutoEncoder with autoregressive models as generation part has yielded high-quality results on image generation.
We show that with the help of a content-rich discrete visual codebook from VQ-VAE, the discrete diffusion model can also generate high fidelity images with global context.
arXiv Detail & Related papers (2021-12-03T09:09:34Z)
- Moment evolution equations and moment matching for stochastic image EPDiff [68.97335984455059]
Models of image deformation allow study of time-continuous effects transforming images by deforming the image domain.
Applications include medical image analysis with both population trends and random subject specific variation.
We use moment approximations of the corresponding Ito diffusion to construct estimators for statistical inference on the parameters of the full model.
arXiv Detail & Related papers (2021-10-07T11:08:11Z)
- High-Fidelity Synthesis with Disentangled Representation [60.19657080953252]
We propose an Information-Distillation Generative Adversarial Network (ID-GAN) for disentanglement learning and high-fidelity synthesis.
Our method learns disentangled representation using VAE-based models, and distills the learned representation with an additional nuisance variable to the separate GAN-based generator for high-fidelity synthesis.
Despite its simplicity, we show that the proposed method is highly effective, achieving image generation quality comparable to state-of-the-art methods using the disentangled representation.
arXiv Detail & Related papers (2020-01-13T14:39:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.