Adapting a Language Model for Controlled Affective Text Generation
- URL: http://arxiv.org/abs/2011.04000v1
- Date: Sun, 8 Nov 2020 15:24:39 GMT
- Title: Adapting a Language Model for Controlled Affective Text Generation
- Authors: Ishika Singh and Ahsan Barkati and Tushar Goswamy and Ashutosh Modi
- Abstract summary: We adapt state-of-the-art language generation models to generate affective (emotional) text.
We propose to incorporate emotion as a prior for probabilistic state-of-the-art text generation models such as GPT-2.
The model gives a user the flexibility to control the category and intensity of emotion as well as the topic of the generated text.
- Score: 2.9267797650223653
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Humans use language not just to convey information but also to express their
inner feelings and mental states. In this work, we adapt state-of-the-art
language generation models to generate affective (emotional) text. We posit a
model capable of generating affect-driven and topic-focused sentences without
losing grammatical correctness as the affect intensity increases. We propose to
incorporate emotion as a prior for probabilistic state-of-the-art text
generation models such as GPT-2. The model gives the user the flexibility to
control the category and intensity of emotion as well as the topic of the
generated text. Previous attempts at modelling fine-grained emotions fall short
on grammatical correctness at extreme intensities, but our model is resilient
to this and delivers robust results at all intensities. We conduct automated
evaluations and human studies to test the performance of our model and provide
a detailed comparison of the results with other models. In all evaluations, our
model outperforms existing affective text generation models.
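To make the core idea concrete, below is a minimal sketch of the general technique the abstract describes: adding an emotion prior to GPT-2's next-token logits, with a scalar knob for intensity. The toy affect lexicon, the beta parameter, and all function names are illustrative assumptions, not the authors' exact formulation.

```python
# Sketch: biasing GPT-2 decoding with an additive emotion prior.
# The lexicon, the beta knob, and greedy decoding are simplifying assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Toy affect lexicon for the target emotion ("joy"): token string -> intensity in [0, 1].
affect_lexicon = {" happy": 0.9, " delighted": 0.8, " wonderful": 0.7, " smile": 0.6}

def affect_prior(vocab_size, beta):
    """Additive log-prior over the vocabulary that boosts emotion-bearing tokens."""
    prior = torch.zeros(vocab_size)
    for word, intensity in affect_lexicon.items():
        for tok_id in tokenizer.encode(word):
            prior[tok_id] = beta * intensity
    return prior

@torch.no_grad()
def generate_affective(prompt, beta=3.0, max_new_tokens=30):
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    prior = affect_prior(model.config.vocab_size, beta)
    for _ in range(max_new_tokens):
        logits = model(input_ids).logits[0, -1, :]   # next-token logits
        logits = logits + prior                      # emotion prior shifts the distribution
        next_id = torch.argmax(logits).view(1, 1)    # greedy decoding for brevity
        input_ids = torch.cat([input_ids, next_id], dim=-1)
    return tokenizer.decode(input_ids[0])

# Higher beta pushes the text toward the target emotion; beta=0 recovers plain GPT-2.
print(generate_affective("The weather today is", beta=3.0))
```

In this sketch, beta plays the role of the user-controlled emotion intensity and swapping the lexicon changes the emotion category; topic control, which the abstract also mentions, is not sketched here.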
Related papers
- Personalized Text Generation with Fine-Grained Linguistic Control [9.668216418094316]
We focus on controlling fine-grained attributes spanning multiple linguistic dimensions.
We introduce a novel benchmark to train generative models and evaluate their ability to generate personalized text.
arXiv Detail & Related papers (2024-02-07T14:41:08Z)
- Pre-trained Language Models Do Not Help Auto-regressive Text-to-Image Generation [82.5217996570387]
We adapt a pre-trained language model for auto-regressive text-to-image generation.
We find that pre-trained language models offer limited help.
arXiv Detail & Related papers (2023-11-27T07:19:26Z)
- BatGPT: A Bidirectional Autoregessive Talker from Generative Pre-trained Transformer [77.28871523946418]
BatGPT is a large-scale language model designed and trained jointly by Wuhan University and Shanghai Jiao Tong University.
It is capable of generating highly natural and fluent text in response to various types of input, including text prompts, images, and audio.
arXiv Detail & Related papers (2023-07-01T15:10:01Z)
- Click: Controllable Text Generation with Sequence Likelihood Contrastive Learning [69.35360098882606]
We introduce Click for controllable text generation, which needs no modification to the model architecture.
It employs a contrastive loss on sequence likelihood, which fundamentally decreases the generation probability of negative samples.
It also adopts a novel likelihood ranking-based strategy to construct contrastive samples from model generations.
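As a rough illustration of the mechanism described above, the following sketch shows what a max-margin contrastive loss on sequence likelihood could look like; the margin value and helper names are illustrative assumptions rather than Click's exact objective.

```python
# Sketch: contrastive loss that pushes a positive sequence's likelihood above a negative's.
import torch
import torch.nn.functional as F

def sequence_log_likelihood(logits, target_ids):
    """Sum of log-probabilities the model assigns to a target token sequence."""
    log_probs = F.log_softmax(logits, dim=-1)                   # [seq_len, vocab]
    token_ll = log_probs.gather(-1, target_ids.unsqueeze(-1))   # [seq_len, 1]
    return token_ll.sum()

def contrastive_sequence_loss(pos_logits, pos_ids, neg_logits, neg_ids, margin=1.0):
    """Hinge loss: zero once the positive beats the negative by at least the margin."""
    pos_ll = sequence_log_likelihood(pos_logits, pos_ids)
    neg_ll = sequence_log_likelihood(neg_logits, neg_ids)
    return torch.clamp(margin - (pos_ll - neg_ll), min=0.0)

# Toy usage with random logits over a 10-token vocabulary and 5-token sequences.
pos_logits, neg_logits = torch.randn(5, 10), torch.randn(5, 10)
pos_ids, neg_ids = torch.randint(0, 10, (5,)), torch.randint(0, 10, (5,))
print(contrastive_sequence_loss(pos_logits, pos_ids, neg_logits, neg_ids))
```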
arXiv Detail & Related papers (2023-06-06T01:56:44Z)
- Estimating the Personality of White-Box Language Models [0.589889361990138]
Large-scale language models, which are trained on large corpora of text, are being used in a wide range of applications.
Existing research shows that these models can and do capture human biases.
Many of these biases, especially those that could potentially cause harm, are well investigated.
However, studies that infer and change human personality traits inherited by these models have been scarce or non-existent.
arXiv Detail & Related papers (2022-04-25T23:53:53Z)
- DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generation Models [73.12069620086311]
We investigate the visual reasoning capabilities and social biases of text-to-image models.
First, we measure three visual reasoning skills: object recognition, object counting, and spatial relation understanding.
Second, we assess the gender and skin tone biases by measuring the gender/skin tone distribution of generated images.
arXiv Detail & Related papers (2022-02-08T18:36:52Z)
- Fine-Grained Emotion Prediction by Modeling Emotion Definitions [26.098917459551167]
We propose a new framework for fine-grained emotion prediction in text through emotion definition modeling.
Our models outperform the existing state-of-the-art on the fine-grained emotion dataset GoEmotions.
arXiv Detail & Related papers (2021-07-26T12:11:18Z)
- EMOVIE: A Mandarin Emotion Speech Dataset with a Simple Emotional Text-to-Speech Model [56.75775793011719]
We introduce and publicly release a Mandarin emotion speech dataset including 9,724 samples with audio files and human-labeled emotion annotations.
Unlike models that need additional reference audio as input, our model can predict emotion labels from the input text alone and generate more expressive speech conditioned on the emotion embedding.
In the experiments, we first validate the effectiveness of our dataset with an emotion classification task, then train our model on the proposed dataset and conduct a series of subjective evaluations.
arXiv Detail & Related papers (2021-06-17T08:34:21Z)
- Residual Energy-Based Models for Text [46.22375671394882]
We show that the generations of auto-regressive language models can be reliably distinguished from real text by statistical discriminators.
This suggests that the auto-regressive models can be improved by incorporating the (globally normalized) discriminators into the generative process.
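As one hedged reading of how such a discriminator could enter the generative process, the sketch below reranks sampled continuations by combining the language model's log-likelihood with a discriminator energy; the scoring interface and the alpha weight are assumptions for illustration, not the paper's exact setup.

```python
# Sketch: rerank autoregressive samples with a residual energy term.
def rerank_with_energy(candidates, lm_log_probs, energies, alpha=1.0):
    """Pick the candidate maximizing log p_LM(x) - alpha * E(x) (lower energy = judged more 'real')."""
    scores = [ll - alpha * e for ll, e in zip(lm_log_probs, energies)]
    best = max(range(len(candidates)), key=lambda i: scores[i])
    return candidates[best]

# Toy usage: three sampled continuations with LM log-likelihoods and discriminator energies.
candidates = ["continuation a", "continuation b", "continuation c"]
lm_log_probs = [-12.3, -10.8, -11.5]
energies = [0.9, 1.4, 0.2]
print(rerank_with_energy(candidates, lm_log_probs, energies))
```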
arXiv Detail & Related papers (2020-04-06T13:44:03Z)
- Learning to Compare for Better Training and Evaluation of Open Domain Natural Language Generation Models [23.62054164511058]
We propose to evaluate natural language generation models by learning to compare a pair of generated sentences by fine-tuning BERT.
While it can be trained in a fully self-supervised fashion, our model can be further fine-tuned with a small amount of human preference annotation.
arXiv Detail & Related papers (2020-02-12T15:52:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.