Learning Implicit Text Generation via Feature Matching
- URL: http://arxiv.org/abs/2005.03588v2
- Date: Sat, 9 May 2020 00:17:49 GMT
- Title: Learning Implicit Text Generation via Feature Matching
- Authors: Inkit Padhi, Pierre Dognin, Ke Bai, Cicero Nogueira dos Santos, Vijil
Chenthamarakshan, Youssef Mroueh, Payel Das
- Abstract summary: Generative feature matching network (GFMN) is an approach for training implicit generative models for images.
We present new GFMN formulations that are effective for sequential data.
- Score: 31.782724169557703
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative feature matching network (GFMN) is an approach for training
implicit generative models for images by performing moment matching on features
from pre-trained neural networks. In this paper, we present new GFMN
formulations that are effective for sequential data. Our experimental results
show the effectiveness of the proposed method, SeqGFMN, for three distinct
generation tasks in English: unconditional text generation, class-conditional
text generation, and unsupervised text style transfer. SeqGFMN is stable to
train and outperforms various adversarial approaches for text generation and
text style transfer.
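The abstract describes training by matching moments of features from a fixed, pre-trained network between real and generated data. A minimal sketch of that objective, with a toy hand-written feature extractor standing in for the pre-trained network (the function names and the two-moment choice here are illustrative assumptions, not the paper's exact formulation):

```python
def features(x):
    # Toy fixed "feature extractor": a hypothetical stand-in for a
    # pre-trained network; maps a sample (list of floats) to features.
    return [sum(x), sum(v * v for v in x)]

def moments(batch):
    # Per-dimension mean and variance of features over a batch.
    feats = [features(x) for x in batch]
    n, d = len(feats), len(feats[0])
    mean = [sum(f[j] for f in feats) / n for j in range(d)]
    var = [sum((f[j] - mean[j]) ** 2 for f in feats) / n for j in range(d)]
    return mean, var

def feature_matching_loss(real_batch, gen_batch):
    # Squared L2 distance between first and second moments of the
    # features of real vs. generated batches; the generator is trained
    # to drive this to zero, with no discriminator involved.
    mr, vr = moments(real_batch)
    mg, vg = moments(gen_batch)
    return sum((a - b) ** 2 for a, b in zip(mr + vr, mg + vg))

real = [[1.0, 2.0], [2.0, 1.0]]
fake = [[1.0, 2.0], [2.0, 1.0]]
print(feature_matching_loss(real, fake))  # identical batches -> 0.0
```

Because the feature extractor is fixed, there is no adversarial game, which is consistent with the claimed training stability.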
Related papers
- Enhancing Text Generation in Joint NLG/NLU Learning Through Curriculum Learning, Semi-Supervised Training, and Advanced Optimization Techniques [0.0]
This research paper developed a novel approach to improve text generation in the context of joint Natural Language Generation (NLG) and Natural Language Understanding (NLU) learning.
The data is prepared by gathering and preprocessing annotated datasets, including cleaning, tokenization, stemming, and stop-word removal.
Transformer-based encoders and decoders are used, capturing long-range dependencies and improving source-target sequence modelling.
Reinforcement learning with policy gradient techniques, semi-supervised training, improved attention mechanisms, and differentiable approximations are employed to fine-tune the models and handle complex linguistic tasks effectively.
arXiv Detail & Related papers (2024-10-17T12:43:49Z)
- Text-to-Image Generation via Implicit Visual Guidance and Hypernetwork [38.55086153299993]
We develop an approach for text-to-image generation that embraces additional retrieval images, driven by a combination of implicit visual guidance loss and generative objectives.
We propose a novel hypernetwork modulated visual-text encoding scheme to predict the weight update of the encoding layer.
Experimental results show that our model guided with additional retrieval visual data outperforms existing GAN-based models.
arXiv Detail & Related papers (2022-08-17T19:25:00Z)
- Event Transition Planning for Open-ended Text Generation [55.729259805477376]
Open-ended text generation tasks require models to generate a coherent continuation given limited preceding context.
We propose a novel two-stage method which explicitly arranges the ensuing events in open-ended text generation.
Our approach can be understood as a specially-trained coarse-to-fine algorithm.
arXiv Detail & Related papers (2022-04-20T13:37:51Z)
- GTAE: Graph-Transformer based Auto-Encoders for Linguistic-Constrained Text Style Transfer [119.70961704127157]
Non-parallel text style transfer has attracted increasing research interests in recent years.
Current approaches still lack the ability to preserve the content and even logic of original sentences.
We propose Graph-Transformer based Auto-Encoders (GTAE), which model a sentence as a linguistic graph and perform feature extraction and style transfer at the graph level.
arXiv Detail & Related papers (2021-02-01T11:08:45Z)
- Few-Shot Text Generation with Pattern-Exploiting Training [12.919486518128734]
In this paper, we show that the underlying idea can also be applied to text generation tasks.
We adapt Pattern-Exploiting Training (PET), a recently proposed few-shot approach, for finetuning generative language models on text generation tasks.
arXiv Detail & Related papers (2020-12-22T10:53:07Z)
- Improving Text Generation with Student-Forcing Optimal Transport [122.11881937642401]
We propose using optimal transport (OT) to match the sequences generated in training and testing modes.
An extension is also proposed to improve the OT learning, based on the structural and contextual information of the text sequences.
The effectiveness of the proposed method is validated on machine translation, text summarization, and text generation tasks.
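The summary above proposes matching training-mode and test-mode sequences with optimal transport. A hedged toy sketch of that matching step, using a few Sinkhorn iterations on a small cost matrix (the cost values and regularization here are illustrative assumptions; the paper's actual costs come from token-level representations):

```python
import math

def sinkhorn(C, reg=0.1, iters=100):
    # Entropy-regularized OT between two uniform distributions,
    # via Sinkhorn-Knopp scaling of the kernel K = exp(-C / reg).
    n, m = len(C), len(C[0])
    K = [[math.exp(-c / reg) for c in row] for row in C]
    u = [1.0] * n
    v = [1.0] * m
    for _ in range(iters):
        u = [(1.0 / n) / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [(1.0 / m) / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    # Transport plan P and its OT cost <P, C>.
    P = [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]
    cost = sum(P[i][j] * C[i][j] for i in range(n) for j in range(m))
    return P, cost

# Hypothetical pairwise costs between tokens of a teacher-forced
# sequence and a free-running (student-forced) sequence.
C = [[0.0, 1.0], [1.0, 0.0]]
P, cost = sinkhorn(C)
```

The resulting plan concentrates mass on the cheap (diagonal) pairings, and the OT cost is what a training objective of this kind would minimize.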
arXiv Detail & Related papers (2020-10-12T19:42:25Z)
- Controllable Text Generation with Focused Variation [71.07811310799664]
Focused-Variation Network (FVN) is a novel model to control language generation.
FVN learns disjoint discrete latent spaces for each attribute inside codebooks, which allows for both controllability and diversity.
We evaluate FVN on two text generation datasets with annotated content and style, and show state-of-the-art performance as assessed by automatic and human evaluations.
arXiv Detail & Related papers (2020-09-25T06:31:06Z)
- Unsupervised Text Generation by Learning from Search [86.51619839836331]
TGLS is a novel framework for unsupervised Text Generation by Learning from Search.
We demonstrate the effectiveness of TGLS on two real-world natural language generation tasks, paraphrase generation and text formalization.
arXiv Detail & Related papers (2020-07-09T04:34:48Z)
- POINTER: Constrained Progressive Text Generation via Insertion-based Generative Pre-training [93.79766670391618]
We present POINTER, a novel insertion-based approach for hard-constrained text generation.
The proposed method operates by progressively inserting new tokens between existing tokens in a parallel manner.
The resulting coarse-to-fine hierarchy makes the generation process intuitive and interpretable.
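The insertion procedure described above can be sketched in a few lines: start from the constraint tokens and repeatedly insert new tokens between adjacent pairs until nothing more is inserted. The "model" below is a hypothetical rule table, not a trained network; it only illustrates the coarse-to-fine control flow:

```python
def progressive_insert(tokens, insert_fn, max_rounds=5):
    # Repeatedly scan adjacent token pairs; insert_fn proposes a token
    # to place between a pair, or None. All gaps in a round are filled
    # in parallel, yielding a coarse-to-fine refinement.
    for _ in range(max_rounds):
        new, inserted = [], False
        for a, b in zip(tokens, tokens[1:]):
            new.append(a)
            t = insert_fn(a, b)
            if t is not None:
                new.append(t)
                inserted = True
        new.append(tokens[-1])
        tokens = new
        if not inserted:  # fixed point reached: generation is complete
            break
    return tokens

# Hypothetical insertion rules standing in for the learned model.
rules = {("the", "sat"): "cat"}
seq = progressive_insert(["the", "sat"], lambda a, b: rules.get((a, b)))
print(" ".join(seq))  # "the cat sat"
```

Each round's output is a readable partial sequence, which is what makes the process interpretable.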
arXiv Detail & Related papers (2020-05-01T18:11:54Z)
- Syntax-driven Iterative Expansion Language Models for Controllable Text Generation [2.578242050187029]
We propose a new paradigm for introducing a syntactic inductive bias into neural text generation.
Our experiments show that this paradigm is effective at text generation, with quality between that of LSTMs and Transformers, and comparable diversity.
arXiv Detail & Related papers (2020-04-05T14:29:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.