Conditional Hybrid GAN for Sequence Generation
- URL: http://arxiv.org/abs/2009.08616v1
- Date: Fri, 18 Sep 2020 03:52:55 GMT
- Title: Conditional Hybrid GAN for Sequence Generation
- Authors: Yi Yu, Abhishek Srivastava, Rajiv Ratn Shah
- Abstract summary: We propose a novel conditional hybrid GAN (C-Hybrid-GAN) for context-conditioned sequence generation with multiple attributes.
We exploit the Gumbel-Softmax technique to approximate the distribution of discrete-valued sequences.
We demonstrate that the proposed C-Hybrid-GAN outperforms the existing methods in context-conditioned discrete-valued sequence generation.
- Score: 56.67961004064029
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conditional sequence generation aims to instruct the generation procedure by
conditioning the model with additional context information, which is a
self-supervised learning issue (a form of unsupervised learning with
supervision information from data itself). Unfortunately, the current
state-of-the-art generative models have limitations in sequence generation with
multiple attributes. In this paper, we propose a novel conditional hybrid GAN
(C-Hybrid-GAN) to solve this issue. Discrete sequences with triplet attributes
are generated separately when conditioned on the same context. Most
importantly, a relational reasoning technique is exploited to model not only
the dependency inside each attribute sequence during the training of the
generator but also the consistency among the attribute sequences during the
training of the discriminator. To avoid the non-differentiability problem
encountered in GANs during discrete data generation, we exploit the
Gumbel-Softmax technique to approximate the distribution of discrete-valued
sequences. Through
evaluating the task of generating melody (associated with note, duration, and
rest) from lyrics, we demonstrate that the proposed C-Hybrid-GAN outperforms
the existing methods in context-conditioned discrete-valued sequence
generation.
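The Gumbel-Softmax trick mentioned in the abstract replaces non-differentiable categorical sampling with a differentiable relaxation: Gumbel(0, 1) noise is added to the log-probabilities, and a temperature-scaled softmax produces a sample that approaches one-hot as the temperature drops. A minimal self-contained sketch (not the authors' implementation; the seed, logits, and temperatures are illustrative):

```python
import math
import random

def gumbel_softmax(logits, temperature=1.0, seed=0):
    """Draw a differentiable, approximately one-hot sample from a
    categorical distribution given by `logits` (Gumbel-Softmax trick)."""
    rng = random.Random(seed)
    # Gumbel(0, 1) noise: g = -log(-log(u)), u ~ Uniform(0, 1)
    perturbed = [
        (l - math.log(-math.log(rng.uniform(1e-12, 1.0)))) / temperature
        for l in logits
    ]
    # Numerically stable softmax over the perturbed, temperature-scaled logits
    m = max(perturbed)
    exps = [math.exp(p - m) for p in perturbed]
    total = sum(exps)
    return [e / total for e in exps]

# Example: a 4-way categorical (e.g. one note token) at two temperatures
logits = [math.log(p) for p in (0.1, 0.2, 0.3, 0.4)]
soft = gumbel_softmax(logits, temperature=1.0)  # smooth, fully differentiable
hard = gumbel_softmax(logits, temperature=0.1)  # nearly one-hot
```

Lowering the temperature sharpens the sample toward a one-hot vector while keeping gradients defined everywhere, which is what lets the generator's discrete token choices be trained adversarially.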
Related papers
- Generating Multi-Modal and Multi-Attribute Single-Cell Counts with CFGen [76.02070962797794]
We present Cell Flow for Generation, a flow-based conditional generative model for multi-modal single-cell counts.
Our results suggest improved recovery of crucial biological data characteristics while accounting for novel generative tasks.
arXiv Detail & Related papers (2024-07-16T14:05:03Z) - A New Paradigm for Generative Adversarial Networks based on Randomized Decision Rules [8.36840154574354]
The Generative Adversarial Network (GAN) was recently introduced in the literature as a novel machine learning method for training generative models.
It has many applications in statistics such as nonparametric clustering and nonparametric conditional independence tests.
In this paper, we identify the reasons why the GAN suffers from this issue, and to address it, we propose a new formulation for the GAN based on randomized decision rules.
arXiv Detail & Related papers (2023-06-23T17:50:34Z) - SequenceMatch: Imitation Learning for Autoregressive Sequence Modelling with Backtracking [60.109453252858806]
A maximum-likelihood (MLE) objective does not match a downstream use-case of autoregressively generating high-quality sequences.
We formulate sequence generation as an imitation learning (IL) problem.
This allows us to minimize a variety of divergences between the distribution of sequences generated by an autoregressive model and sequences from a dataset.
Our resulting method, SequenceMatch, can be implemented without adversarial training or architectural changes.
arXiv Detail & Related papers (2023-06-08T17:59:58Z) - Instructed Diffuser with Temporal Condition Guidance for Offline Reinforcement Learning [71.24316734338501]
We propose an effective temporally-conditional diffusion model coined Temporally-Composable diffuser (TCD)
TCD extracts temporal information from interaction sequences and explicitly guides generation with temporal conditions.
Our method reaches or matches the best performance compared with prior SOTA baselines.
arXiv Detail & Related papers (2023-06-08T02:12:26Z) - Conditional Denoising Diffusion for Sequential Recommendation [62.127862728308045]
Among prominent generative models, Generative Adversarial Networks (GANs) suffer from unstable optimization, while Variational AutoEncoders (VAEs) are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
arXiv Detail & Related papers (2023-04-22T15:32:59Z) - Seq-HyGAN: Sequence Classification via Hypergraph Attention Network [0.0]
Sequence classification has a wide range of real-world applications in different domains, such as genome classification in health and anomaly detection in business.
The lack of explicit features in sequence data makes it difficult for machine learning models to learn from.
We propose a novel Hypergraph Attention Network model, namely Seq-HyGAN.
arXiv Detail & Related papers (2023-03-04T11:53:33Z) - Mutual Exclusivity Training and Primitive Augmentation to Induce Compositionality [84.94877848357896]
Recent datasets expose the lack of the systematic generalization ability in standard sequence-to-sequence models.
We analyze this behavior of seq2seq models and identify two contributing factors: a lack of mutual exclusivity bias and the tendency to memorize whole examples.
We show substantial empirical improvements using standard sequence-to-sequence models on two widely-used compositionality datasets.
arXiv Detail & Related papers (2022-11-28T17:36:41Z) - Information-theoretic stochastic contrastive conditional GAN: InfoSCC-GAN [6.201770337181472]
We present a contrastive conditional generative adversarial network (InfoSCC-GAN) with an explorable latent space.
InfoSCC-GAN is derived based on an information-theoretic formulation of mutual information between input data and latent space representation.
Experiments show that InfoSCC-GAN outperforms the "vanilla" EigenGAN in image generation on the AFHQ and CelebA datasets.
arXiv Detail & Related papers (2021-12-17T17:56:30Z) - Symbolic Music Generation with Diffusion Models [4.817429789586127]
We present a technique for training diffusion models on sequential data by parameterizing the discrete domain in the continuous latent space of a pre-trained variational autoencoder.
We show strong unconditional generation and post-hoc conditional infilling results compared to autoregressive language models operating over the same continuous embeddings.
arXiv Detail & Related papers (2021-03-30T05:48:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.