Time Series (re)sampling using Generative Adversarial Networks
- URL: http://arxiv.org/abs/2102.00208v1
- Date: Sat, 30 Jan 2021 10:58:15 GMT
- Title: Time Series (re)sampling using Generative Adversarial Networks
- Authors: Christian M. Dahl, Emil N. Sørensen
- Abstract summary: We propose a novel bootstrap procedure for dependent data based on Generative Adversarial Networks (GANs).
We show that the dynamics of common stationary time series processes can be learned by GANs.
We find that temporal convolutional neural networks provide a suitable design for the generator and discriminator.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel bootstrap procedure for dependent data based on Generative Adversarial Networks (GANs). We show that the dynamics of common stationary time series processes can be learned by GANs and demonstrate that GANs trained on a single sample path can be used to generate additional samples from the process. We find that temporal convolutional neural networks provide a suitable design for the generator and discriminator, and that convincing samples can be generated on the basis of a vector of iid normal noise. We demonstrate the finite sample properties of GAN sampling and the suggested bootstrap using simulations in which we compare the performance to circular block bootstrapping in the case of resampling an AR(1) time series process. We find that resampling using the GAN can outperform circular block bootstrapping in terms of empirical coverage.
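The abstract states that temporal convolutional networks (TCNs) suit both the generator and discriminator, with a vector of iid normal noise as input. Below is a minimal, hypothetical PyTorch sketch of such a generator: a stack of dilated causal 1-D convolutions mapping a noise vector to a synthetic sample path. The layer count, channel width, and kernel size are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    """Conv1d with left-only padding, so output[t] depends only on input[<=t]."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):
        return self.conv(F.pad(x, (self.pad, 0)))  # pad the past, not the future

class TCNGenerator(nn.Module):
    """Maps iid N(0,1) noise of shape (batch, 1, T) to sample paths of the
    same shape. All hyperparameters here are illustrative assumptions."""
    def __init__(self, channels=32, n_layers=4, kernel_size=3):
        super().__init__()
        layers, in_ch = [], 1
        for i in range(n_layers):
            layers += [CausalConv1d(in_ch, channels, kernel_size, 2 ** i), nn.ReLU()]
            in_ch = channels
        self.body = nn.Sequential(*layers)
        self.head = nn.Conv1d(channels, 1, kernel_size=1)  # back to one series

    def forward(self, z):
        return self.head(self.body(z))

z = torch.randn(8, 1, 500)        # eight iid-normal noise vectors of length 500
fake_paths = TCNGenerator()(z)    # eight synthetic sample paths, shape (8, 1, 500)
```

A discriminator could reuse the same causal-convolution stack with a pooling and linear head, trained with a standard GAN objective; the paper trains on a single observed sample path, so real minibatches would come from that one series.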
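The baseline in the simulations, circular block bootstrapping, is a standard procedure: wrap the series into a circle, draw fixed-length blocks at uniform random start points, and concatenate them into a replicate of the original length. The NumPy sketch below implements that baseline and an empirical-coverage check for the AR(1) coefficient; the block length, sample size, and replication counts are illustrative choices, not the paper's simulation settings.

```python
import numpy as np

def simulate_ar1(n, phi=0.5, sigma=1.0, rng=None):
    """Simulate x_t = phi * x_{t-1} + eps_t, eps_t ~ N(0, sigma^2), x_0 = 0."""
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.normal(0.0, sigma, size=n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x

def circular_block_bootstrap(x, block_len, rng=None):
    """One replicate: wrap the series into a circle, draw fixed-length
    blocks at uniform random starts, concatenate, truncate to length n."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(x)
    wrapped = np.concatenate([x, x[:block_len - 1]])   # circular wrap
    starts = rng.integers(0, n, size=int(np.ceil(n / block_len)))
    return np.concatenate([wrapped[s:s + block_len] for s in starts])[:n]

def phi_hat(x):
    """OLS estimate of the AR(1) coefficient."""
    return np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])

# Empirical coverage of a 95% bootstrap percentile interval for phi.
rng = np.random.default_rng(0)
phi, n_mc, n_boot, hits = 0.5, 200, 199, 0
for _ in range(n_mc):
    x = simulate_ar1(500, phi=phi, rng=rng)
    est = [phi_hat(circular_block_bootstrap(x, 20, rng=rng)) for _ in range(n_boot)]
    lo, hi = np.percentile(est, [2.5, 97.5])
    hits += lo <= phi <= hi
print(f"empirical coverage: {hits / n_mc:.3f}")
```

The paper's GAN bootstrap would replace `circular_block_bootstrap` in this loop with draws from a generator trained on the observed path, and the claim is that the resulting intervals can attain better empirical coverage.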
Related papers
- Generative Modeling with Bayesian Sample Inference [50.07758840675341]
We derive a novel generative model from the simple act of Gaussian posterior inference.
Treating the generated sample as an unknown variable to infer lets us formulate the sampling process in the language of Bayesian probability.
Our model uses a sequence of prediction and posterior update steps to narrow down the unknown sample from a broad initial belief.
arXiv Detail & Related papers (2025-02-11T14:27:10Z) - Self-Guided Generation of Minority Samples Using Diffusion Models [57.319845580050924]
We present a novel approach for generating minority samples that live on low-density regions of a data manifold.
Our framework is built upon diffusion models, leveraging the principle of guided sampling.
Experiments on benchmark real datasets demonstrate that our approach can greatly improve the capability of creating realistic low-likelihood minority instances.
arXiv Detail & Related papers (2024-07-16T10:03:29Z) - Stable generative modeling using Schrödinger bridges [0.22499166814992438]
We propose a generative model combining Schrödinger bridges and Langevin dynamics.
Our framework can be naturally extended to generate conditional samples and to Bayesian inference problems.
arXiv Detail & Related papers (2024-01-09T06:15:45Z) - A Block Metropolis-Hastings Sampler for Controllable Energy-based Text
Generation [78.81021361497311]
We develop a novel Metropolis-Hastings (MH) sampler that proposes re-writes of the entire sequence in each step via iterative prompting of a large language model.
Our new sampler (a) allows for more efficient and accurate sampling from a target distribution and (b) allows generation length to be determined through the sampling procedure rather than fixed in advance.
arXiv Detail & Related papers (2023-12-07T18:30:15Z) - Joint Bayesian Inference of Graphical Structure and Parameters with a
Single Generative Flow Network [59.79008107609297]
We propose in this paper to approximate the joint posterior over both the structure of a Bayesian Network and its parameters.
We use a single GFlowNet whose sampling policy follows a two-phase process.
Since the parameters are included in the posterior distribution, this leaves more flexibility for the local probability models.
arXiv Detail & Related papers (2023-05-30T19:16:44Z) - Sample and Predict Your Latent: Modality-free Sequential Disentanglement
via Contrastive Estimation [2.7759072740347017]
We introduce a self-supervised sequential disentanglement framework based on contrastive estimation with no external signals.
In practice, we propose a unified, efficient, and easy-to-code sampling strategy for semantically similar and dissimilar views of the data.
Our method achieves state-of-the-art results in comparison to existing techniques.
arXiv Detail & Related papers (2023-05-25T10:50:30Z) - Generative modeling for time series via Schr{\"o}dinger bridge [0.0]
We propose a novel generative model for time series based on the Schrödinger bridge (SB) approach.
This consists of the entropic interpolation, via optimal transport, between a reference probability measure on path space and a target measure consistent with the joint data distribution of the time series.
arXiv Detail & Related papers (2023-04-11T09:45:06Z) - Using Intermediate Forward Iterates for Intermediate Generator
Optimization [14.987013151525368]
Intermediate Generator Optimization (IGO) can be incorporated into any standard autoencoder pipeline for the generative task.
We show applications of IGO on two dense predictive tasks: image extrapolation and point cloud denoising.
arXiv Detail & Related papers (2023-02-05T08:46:15Z) - Bayesian Structure Learning with Generative Flow Networks [85.84396514570373]
In Bayesian structure learning, we are interested in inferring a distribution over the directed acyclic graph (DAG) from data.
Recently, a class of probabilistic models, called Generative Flow Networks (GFlowNets), have been introduced as a general framework for generative modeling.
We show that our approach, called DAG-GFlowNet, provides an accurate approximation of the posterior over DAGs.
arXiv Detail & Related papers (2022-02-28T15:53:10Z) - Reparameterized Sampling for Generative Adversarial Networks [71.30132908130581]
We propose REP-GAN, a novel sampling method that allows general dependent proposals by reparameterizing the Markov chains into the latent space of the generator.
Empirically, extensive experiments on synthetic and real datasets demonstrate that our REP-GAN largely improves the sample efficiency and obtains better sample quality simultaneously.
arXiv Detail & Related papers (2021-07-01T10:34:55Z) - Continual Learning with Fully Probabilistic Models [70.3497683558609]
We present an approach for continual learning based on fully probabilistic (or generative) models of machine learning.
We propose a pseudo-rehearsal approach using a Gaussian Mixture Model (GMM) instance for both generator and classifier functionalities.
We show that the resulting Gaussian Mixture Replay (GMR) approach achieves state-of-the-art performance on common class-incremental learning problems at very competitive time and memory complexity (a sketch of GMM pseudo-rehearsal follows this list).
arXiv Detail & Related papers (2021-04-19T12:26:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.