$\textit{latent}$-GLAT: Glancing at Latent Variables for Parallel Text
Generation
- URL: http://arxiv.org/abs/2204.02030v1
- Date: Tue, 5 Apr 2022 07:34:12 GMT
- Title: $\textit{latent}$-GLAT: Glancing at Latent Variables for Parallel Text
Generation
- Authors: Yu Bao, Hao Zhou, Shujian Huang, Dongqi Wang, Lihua Qian, Xinyu Dai,
Jiajun Chen and Lei Li
- Abstract summary: Parallel text generation has received widespread attention for its high generation efficiency.
In this paper, we propose $\textit{latent}$-GLAT, which employs discrete latent variables to capture word categorical information.
Experimental results show that our method outperforms strong baselines without the help of an autoregressive model.
- Score: 65.29170569821093
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recently, parallel text generation has received widespread attention
for its high generation efficiency. Although many advanced techniques have been
proposed to improve its generation quality, they still need the help of an
autoregressive model for training to overcome the one-to-many multi-modality in
the dataset, which limits their applications. In this paper, we propose
$\textit{latent}$-GLAT, which employs discrete latent variables to capture word
categorical information and invokes an advanced curriculum learning technique,
alleviating the multi-modality problem. Experimental results show that our
method outperforms strong baselines without the help of an autoregressive
model, which further broadens the application scenarios of the parallel
decoding paradigm.
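The central mechanism is glancing training applied to sequences of discrete latent variables: the worse the model's current prediction matches the reference, the larger the fraction of reference codes revealed to the decoder, which yields a curriculum from easy to hard. The sketch below illustrates only that sampling step, as a minimal example under assumptions of my own (random reveal positions sized by the Hamming distance, a hypothetical `[MASK]` code id, made-up tensor names); it is not the authors' released implementation.

```python
# Minimal sketch of the glancing-sampling step, applied to discrete latent
# codes instead of target words. The ratio `lam`, the MASK_ID value, and all
# tensor names are illustrative assumptions, not the authors' implementation.
import torch

def glancing_mask(pred_codes: torch.Tensor,
                  ref_codes: torch.Tensor,
                  lam: float = 0.5) -> torch.Tensor:
    """Boolean mask of positions whose reference latent code is revealed to
    the decoder; the model is trained to predict the remaining positions.

    pred_codes, ref_codes: LongTensor of shape (batch, seq_len).
    lam: fraction of current prediction errors to reveal (curriculum strength).
    """
    # Hamming distance between predicted and reference code sequences.
    n_errors = (pred_codes != ref_codes).sum(dim=-1)        # (batch,)
    n_reveal = (n_errors.float() * lam).long()              # (batch,)

    # Reveal exactly n_reveal[i] uniformly random positions in row i.
    scores = torch.rand_like(ref_codes, dtype=torch.float)
    ranks = scores.argsort(dim=-1).argsort(dim=-1)          # rank of each position
    return ranks < n_reveal.unsqueeze(-1)                   # (batch, seq_len) bool

# Usage: mix revealed reference codes into an otherwise fully masked input.
MASK_ID = 0                                                 # assumed [MASK] code id
pred = torch.randint(1, 64, (2, 8))
ref = torch.randint(1, 64, (2, 8))
reveal = glancing_mask(pred, ref, lam=0.5)
decoder_input = torch.where(reveal, ref, torch.full_like(ref, MASK_ID))
```

As training progresses and predictions improve, the revealed fraction shrinks, which is the curriculum effect the abstract refers to; at inference time nothing is revealed and all positions are predicted in parallel.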
Related papers
- Unified Generative and Discriminative Training for Multi-modal Large Language Models [88.84491005030316]
Generative training has enabled Vision-Language Models (VLMs) to tackle various complex tasks.
Discriminative training, exemplified by models like CLIP, excels in zero-shot image-text classification and retrieval.
This paper proposes a unified approach that integrates the strengths of both paradigms.
arXiv Detail & Related papers (2024-11-01T01:51:31Z)
- Meta-Task Prompting Elicits Embeddings from Large Language Models [54.757445048329735]
We introduce a new unsupervised text embedding method, Meta-Task Prompting with Explicit One-Word Limitation.
We generate high-quality sentence embeddings from Large Language Models without the need for model fine-tuning.
Our findings suggest a new scaling law, offering a versatile and resource-efficient approach for embedding generation across diverse scenarios.
arXiv Detail & Related papers (2024-02-28T16:35:52Z)
- Vector-Quantized Prompt Learning for Paraphrase Generation [18.40940464497253]
This paper proposes to generate diverse and high-quality paraphrases by exploiting the pre-trained models with instance-dependent prompts.
Extensive experiments demonstrate that the proposed method achieves new state-of-the-art results on three benchmark datasets.
arXiv Detail & Related papers (2023-11-25T07:13:06Z)
- Learning from Bootstrapping and Stepwise Reinforcement Reward: A Semi-Supervised Framework for Text Style Transfer [30.622772801446132]
We propose a semi-supervised framework for text style transfer.
First, the learning process is bootstrapped with supervision guided by automatically constructed pseudo-parallel pairs.
Then the model learns from unlabeled data via reinforcement rewards.
arXiv Detail & Related papers (2022-05-19T05:18:06Z)
- On Adversarial Robustness of Synthetic Code Generation [1.2559148369195197]
This paper showcases the existence of significant dataset bias through different classes of adversarial examples.
We propose several dataset augmentation techniques to reduce bias and showcase their efficacy.
arXiv Detail & Related papers (2021-06-22T09:37:48Z)
- Text Generation with Efficient (Soft) Q-Learning [91.47743595382758]
Reinforcement learning (RL) offers a more flexible solution by allowing users to plug in arbitrary task metrics as rewards.
We introduce a new RL formulation for text generation from the soft Q-learning perspective.
We apply the approach to a wide range of tasks, including learning from noisy/negative examples, adversarial attacks, and prompt generation.
arXiv Detail & Related papers (2021-06-14T18:48:40Z)
- POINTER: Constrained Progressive Text Generation via Insertion-based Generative Pre-training [93.79766670391618]
We present POINTER, a novel insertion-based approach for hard-constrained text generation.
The proposed method operates by progressively inserting new tokens between existing tokens in a parallel manner.
The resulting coarse-to-fine hierarchy makes the generation process intuitive and interpretable.
arXiv Detail & Related papers (2020-05-01T18:11:54Z)
- Improve Variational Autoencoder for Text Generation with Discrete Latent Bottleneck [52.08901549360262]
Variational autoencoders (VAEs) are essential tools in end-to-end representation learning.
When paired with a strong auto-regressive decoder, VAEs tend to ignore their latent variables.
We propose a principled approach to enforce an implicit latent feature matching in a more compact latent space (see the generic sketch after this list).
arXiv Detail & Related papers (2020-04-22T14:41:37Z)
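The last entry, like $\textit{latent}$-GLAT itself, relies on pushing information through compact discrete latent codes. As a point of reference, the sketch below shows a generic vector-quantization bottleneck with a straight-through gradient. It is only an illustration of the general idea, not the specific "implicit latent feature matching" objective of that paper; the codebook size, dimensions, and loss weight are arbitrary assumptions.

```python
# Generic discrete-bottleneck layer (vector quantization with a
# straight-through gradient). All hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiscreteBottleneck(nn.Module):
    """Maps continuous states to their nearest codebook entries."""

    def __init__(self, num_codes: int = 64, dim: int = 128, beta: float = 0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.beta = beta

    def forward(self, h: torch.Tensor):
        # h: (batch, seq_len, dim) continuous encoder states.
        # Squared Euclidean distance to every codebook entry.
        dists = (h.unsqueeze(-2) - self.codebook.weight).pow(2).sum(-1)  # (B, T, num_codes)
        codes = dists.argmin(dim=-1)                                     # discrete latent ids
        quantized = self.codebook(codes)                                 # (B, T, dim)
        # Straight-through: the backward pass treats quantization as identity.
        quantized_st = h + (quantized - h).detach()
        # Codebook / commitment losses pull codes and encoder states together.
        vq_loss = (F.mse_loss(quantized, h.detach())
                   + self.beta * F.mse_loss(h, quantized.detach()))
        return quantized_st, codes, vq_loss

# Usage on random features.
layer = DiscreteBottleneck()
z, codes, loss = layer(torch.randn(2, 8, 128))
```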
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.