Time-series Generation by Contrastive Imitation
- URL: http://arxiv.org/abs/2311.01388v1
- Date: Thu, 2 Nov 2023 16:45:25 GMT
- Title: Time-series Generation by Contrastive Imitation
- Authors: Daniel Jarrett, Ioana Bica, Mihaela van der Schaar
- Abstract summary: We study a generative framework that seeks to combine the strengths of both: Motivated by a moment-matching objective to mitigate compounding error, we optimize a local (but forward-looking) transition policy.
At inference, the learned policy serves as the generator for iterative sampling, and the learned energy serves as a trajectory-level measure for evaluating sample quality.
- Score: 87.51882102248395
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Consider learning a generative model for time-series data. The sequential
setting poses a unique challenge: Not only should the generator capture the
conditional dynamics of (stepwise) transitions, but its open-loop rollouts
should also preserve the joint distribution of (multi-step) trajectories. On
one hand, autoregressive models trained by MLE allow learning and computing
explicit transition distributions, but suffer from compounding error during
rollouts. On the other hand, adversarial models based on GAN training alleviate
such exposure bias, but transitions are implicit and hard to assess. In this
work, we study a generative framework that seeks to combine the strengths of
both: Motivated by a moment-matching objective to mitigate compounding error,
we optimize a local (but forward-looking) transition policy, where the
reinforcement signal is provided by a global (but stepwise-decomposable) energy
model trained by contrastive estimation. At training, the two components are
learned cooperatively, avoiding the instabilities typical of adversarial
objectives. At inference, the learned policy serves as the generator for
iterative sampling, and the learned energy serves as a trajectory-level measure
for evaluating sample quality. By expressly training a policy to imitate
sequential behavior of time-series features in a dataset, this approach
embodies "generation by imitation". Theoretically, we illustrate the
correctness of this formulation and the consistency of the algorithm.
Empirically, we evaluate its ability to generate predictively useful samples
from real-world datasets, verifying that it performs at the standard of
existing benchmarks.
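The training/inference split described in the abstract can be illustrated with a deliberately toy sketch. This is not the paper's algorithm: it uses a one-parameter Gaussian transition policy, a stepwise-decomposable quadratic energy, a crude contrastive update on real versus rolled-out transitions, and synthetic AR(1) data. All class names, parameters, and update rules here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" data: trajectories from an AR(1) process x_{t+1} = 0.8*x_t + noise.
def sample_real_trajectories(n, T):
    x = rng.normal(size=n)
    traj = [x]
    for _ in range(T - 1):
        x = 0.8 * x + 0.1 * rng.normal(size=n)
        traj.append(x)
    return np.stack(traj, axis=1)          # shape (n, T)

class TransitionPolicy:
    """Local Gaussian transition policy: p(x' | x) = N(a*x, sigma^2)."""
    def __init__(self):
        self.a, self.sigma = 0.0, 0.5

    def step(self, x):
        return self.a * x + self.sigma * rng.normal(size=x.shape)

    def rollout(self, x0, T):              # open-loop iterative sampling
        xs = [x0]
        for _ in range(T - 1):
            xs.append(self.step(xs[-1]))
        return np.stack(xs, axis=1)

class StepwiseEnergy:
    """Global energy decomposed over transitions: E(traj) = sum_t e(x_t, x_{t+1})."""
    def __init__(self):
        self.c = 0.0                       # stepwise energy is (x' - c*x)^2

    def stepwise(self, x, x_next):
        return (x_next - self.c * x) ** 2

    def trajectory(self, traj):
        return self.stepwise(traj[:, :-1], traj[:, 1:]).sum(axis=1)

def train(policy, energy, iters=200, n=256, T=10, lr=0.05):
    for _ in range(iters):
        real = sample_real_trajectories(n, T)
        fake = policy.rollout(rng.normal(size=n), T)

        # Contrastive step for the energy: push real transitions toward
        # low energy and generated ones toward high energy.
        def grad_c(traj):
            x, xn = traj[:, :-1], traj[:, 1:]
            return (-2 * x * (xn - energy.c * x)).mean()
        energy.c -= lr * (grad_c(real) - grad_c(fake))

        # Policy step: move the transition mean toward data-consistent
        # next states -- a crude stand-in for the moment-matching objective.
        x, xn = real[:, :-1], real[:, 1:]
        policy.a -= lr * (-2 * x * (xn - policy.a * x)).mean()
        policy.sigma = max(0.05, np.std(xn - policy.a * x))

policy, energy = TransitionPolicy(), StepwiseEnergy()
train(policy, energy)

# Inference: the policy generates by iterative sampling; the energy
# scores sample quality at the trajectory level.
samples = policy.rollout(rng.normal(size=5), T=10)
print("learned transition coefficient:", round(policy.a, 2))
print("trajectory energies:", energy.trajectory(samples).round(2))
```

In this toy setting the policy's coefficient recovers the data's AR(1) coefficient of 0.8, and the energy provides a single number per sampled trajectory; the real framework replaces both hand-written gradient steps with learned networks and a proper contrastive-estimation objective.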
Related papers
- Parallelly Tempered Generative Adversarial Networks [7.94957965474334]
A generative adversarial network (GAN) has been a representative backbone model in generative artificial intelligence (AI).
This work analyzes the training instability and inefficiency in the presence of mode collapse by linking it to multimodality in the target distribution.
With our newly developed GAN objective function, the generator can learn all the tempered distributions simultaneously.
arXiv Detail & Related papers (2024-11-18T18:01:13Z)
- Diffusing States and Matching Scores: A New Framework for Imitation Learning [16.941612670582522]
Adversarial Imitation Learning is traditionally framed as a two-player zero-sum game between a learner and an adversarially chosen cost function.
In recent years, diffusion models have emerged as a non-adversarial alternative to GANs.
We show our approach outperforms GAN-style imitation learning baselines across various continuous control problems.
arXiv Detail & Related papers (2024-10-17T17:59:25Z)
- TransFusion: Covariate-Shift Robust Transfer Learning for High-Dimensional Regression [11.040033344386366]
We propose a two-step method with a novel fused-regularizer to improve the learning performance on a target task with limited samples.
A non-asymptotic bound is provided for the estimation error of the target model.
We extend the method to a distributed setting, allowing for a pretraining-finetuning strategy.
arXiv Detail & Related papers (2024-04-01T14:58:16Z)
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched, large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- Provable Guarantees for Generative Behavior Cloning: Bridging Low-Level Stability and High-Level Behavior [51.60683890503293]
We propose a theoretical framework for studying behavior cloning of complex expert demonstrations using generative modeling.
We show that pure supervised cloning can generate trajectories matching the per-time step distribution of arbitrary expert trajectories.
arXiv Detail & Related papers (2023-07-27T04:27:26Z)
- Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with DRO using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
- Gradual Domain Adaptation in the Wild: When Intermediate Distributions are Absent [32.906658998929394]
We focus on the problem of domain adaptation when the goal is to gradually shift the model towards the target distribution.
We propose GIFT, a method that creates virtual samples from intermediate distributions by interpolating representations of examples from source and target domains.
arXiv Detail & Related papers (2021-06-10T22:47:06Z)
- Goal-directed Generation of Discrete Structures with Conditional Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions which evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z)
- Unsupervised Controllable Generation with Self-Training [90.04287577605723]
Controllable generation with GANs remains a challenging research problem.
We propose an unsupervised framework to learn a distribution of latent codes that control the generator through self-training.
Our framework exhibits better disentanglement compared to other variants such as the variational autoencoder.
arXiv Detail & Related papers (2020-07-17T21:50:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.