Unsupervised Neural Stylistic Text Generation using Transfer learning and Adapters
- URL: http://arxiv.org/abs/2210.03264v1
- Date: Fri, 7 Oct 2022 00:09:22 GMT
- Title: Unsupervised Neural Stylistic Text Generation using Transfer learning and Adapters
- Authors: Vinayshekhar Bannihatti Kumar, Rashmi Gangadharaiah, Dan Roth
- Abstract summary: We propose a novel transfer learning framework which updates only $0.3\%$ of model parameters to learn style-specific attributes for response generation.
We learn style-specific attributes from the PERSONALITY-CAPTIONS dataset.
- Score: 66.17039929803933
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Research has shown that personality is a key driver of engagement and
user experience in conversational systems. Conversational agents should also
maintain a consistent persona to have an engaging conversation with a user.
However, text generation datasets are often crowd-sourced and thereby have an
averaging effect, where the style of the generation model is an average of the
styles of all the crowd workers who contributed to the dataset. While one could
collect a persona-specific dataset for each task, it would be an expensive and
time-consuming annotation effort. In this work, we propose a novel transfer
learning framework which updates only $0.3\%$ of model parameters to learn
style-specific attributes for response generation. For the purpose of this
study, we tackle the problem of stylistic story ending generation using the
ROCStories corpus. We learn style-specific attributes from the
PERSONALITY-CAPTIONS dataset. Through extensive experiments and evaluation
metrics we show that our novel training procedure can improve the style
generation by $200\%$ over Encoder-Decoder baselines while maintaining on-par
content relevance metrics with the baselines.
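The abstract does not spell out the adapter mechanics, but updating roughly $0.3\%$ of parameters is characteristic of bottleneck adapters trained on top of a frozen pretrained encoder-decoder. A minimal sketch of that standard recipe (module names and sizes are illustrative assumptions, not the authors' code):

```python
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Small residual bottleneck inserted after a transformer sub-layer."""
    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)  # project down
        self.up = nn.Linear(bottleneck, hidden_size)    # project back up
        self.act = nn.ReLU()

    def forward(self, x):
        # The residual connection preserves the frozen model's behaviour
        # when the adapter output is small.
        return x + self.up(self.act(self.down(x)))

def freeze_all_but_adapters(model: nn.Module) -> float:
    """Freeze pretrained weights; train only parameters named 'adapter'."""
    trainable = total = 0
    for name, p in model.named_parameters():
        p.requires_grad = "adapter" in name
        trainable += p.numel() * p.requires_grad
        total += p.numel()
    return trainable / total  # fraction of parameters actually updated
```

Because only the adapters receive gradients, one adapter set per target style can be swapped in and out of the same frozen backbone, which is what makes the per-style training cost so small.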
Related papers
- Exploring Precision and Recall to assess the quality and diversity of LLMs [82.21278402856079]
We introduce a novel evaluation framework for Large Language Models (LLMs) such as Llama-2 and Mistral.
This approach allows for a nuanced assessment of the quality and diversity of generated text without the need for aligned corpora.
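The summary does not detail the estimator, but distributional precision and recall are commonly computed over embeddings with k-NN support estimates. The sketch below illustrates that general construction (not necessarily the paper's exact formulation):

```python
import numpy as np

def knn_radii(X: np.ndarray, k: int = 3) -> np.ndarray:
    """Distance from each point to its k-th nearest neighbour in X."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d.sort(axis=1)
    return d[:, k]  # column 0 is the zero self-distance

def support_coverage(A: np.ndarray, B: np.ndarray, radii_B: np.ndarray) -> float:
    """Fraction of points in A that fall inside some k-NN ball around B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return float(np.mean((d <= radii_B[None, :]).any(axis=1)))

# Real and generated text embeddings, e.g. from a sentence encoder.
real = np.random.randn(200, 32)
gen = np.random.randn(200, 32)
precision = support_coverage(gen, real, knn_radii(real))  # quality
recall = support_coverage(real, gen, knn_radii(gen))      # diversity
```

Precision measures how much generated text lies on the support of real text (quality); recall measures how much of the real distribution the model covers (diversity); neither requires aligned corpora.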
arXiv Detail & Related papers (2024-02-16T13:53:26Z)
- Specializing Small Language Models towards Complex Style Transfer via Latent Attribute Pre-Training [29.143887057933327]
We introduce the concept of complex text style transfer tasks and construct complex text datasets based on two widely applicable scenarios.
Our dataset is the first large-scale dataset of its kind, with 700 rephrased sentences and 1,000 sentences from the game Genshin Impact.
arXiv Detail & Related papers (2023-09-19T21:01:40Z)
- An Overview on Controllable Text Generation via Variational Auto-Encoders [15.97186478109836]
Recent advances in neural-based generative modeling have reignited the hopes of having computer systems capable of conversing with humans.
Latent variable models (LVM) such as variational auto-encoders (VAEs) are designed to characterize the distributional pattern of textual data.
This overview introduces existing generation schemes, discusses problems associated with text variational auto-encoders, and reviews several applications of controllable generation.
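For orientation, the objective these text VAEs optimize is the evidence lower bound (ELBO), which trades reconstruction quality against keeping the approximate posterior close to the prior:

```latex
\log p_\theta(x) \;\ge\;
\underbrace{\mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big]}_{\text{reconstruction}}
\;-\;
\underbrace{\mathrm{KL}\big(q_\phi(z \mid x) \,\|\, p(z)\big)}_{\text{regularization}}
```

Controllable generation schemes typically intervene on the latent $z$, e.g. by attaching attribute codes or constraining latent subspaces, before decoding.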
arXiv Detail & Related papers (2022-11-15T07:36:11Z)
- Leveraging Natural Supervision for Language Representation Learning and Generation [8.083109555490475]
We describe three lines of work that seek to improve the training and evaluation of neural models using naturally-occurring supervision.
We first investigate self-supervised training losses to help enhance the performance of pretrained language models for various NLP tasks.
We propose a framework that uses paraphrase pairs to disentangle semantics and syntax in sentence representations.
arXiv Detail & Related papers (2022-07-21T19:00:05Z)
- Generating More Pertinent Captions by Leveraging Semantics and Style on Multi-Source Datasets [56.018551958004814]
This paper addresses the task of generating fluent descriptions by training on a non-uniform combination of data sources.
Large-scale datasets with noisy image-text pairs provide a sub-optimal source of supervision.
We propose to leverage and separate semantics and descriptive style through the incorporation of a style token and keywords extracted through a retrieval component.
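The paper's exact input format is not reproduced in this summary; a plausible minimal sketch of conditioning a caption decoder on a style token plus retrieved keywords (all names here are hypothetical) might look like:

```python
def build_decoder_prefix(style: str, keywords: list[str]) -> str:
    """Prepend a style token and retrieved keywords to the decoder input.

    The style token selects the target descriptive style; the keywords are
    assumed to come from a retrieval component over the training sources.
    """
    style_token = f"<style:{style}>"
    return " ".join([style_token, *keywords, "<sep>"])

prefix = build_decoder_prefix("humorous", ["dog", "skateboard"])
# -> "<style:humorous> dog skateboard <sep>"
# The captioner then generates text conditioned on this prefix, so that
# semantics (keywords) and style (token) are supplied as separate signals.
```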
arXiv Detail & Related papers (2021-11-24T19:00:05Z)
- Deep Learning for Text Style Transfer: A Survey [71.8870854396927]
Text style transfer is an important task in natural language generation, which aims to control certain attributes in the generated text.
We present a systematic survey of the research on neural text style transfer, spanning over 100 representative articles since the first neural text style transfer work in 2017.
We discuss the task formulation, existing datasets and subtasks, evaluation, as well as the rich methodologies in the presence of parallel and non-parallel data.
arXiv Detail & Related papers (2020-11-01T04:04:43Z)
- STORIUM: A Dataset and Evaluation Platform for Machine-in-the-Loop Story Generation [48.56586847883825]
We introduce a dataset and evaluation platform built from STORIUM, an online collaborative storytelling community.
Our dataset contains 6K lengthy stories with fine-grained natural language annotations interspersed throughout each narrative.
We evaluate language models fine-tuned on our dataset by integrating them onto STORIUM, where real authors can query a model for suggested story continuations and then edit them.
arXiv Detail & Related papers (2020-10-04T23:26:09Z)
- Topic Adaptation and Prototype Encoding for Few-Shot Visual Storytelling [81.33107307509718]
We propose a topic adaptive storyteller to model inter-topic generalization.
We also propose a prototype encoding structure to model intra-topic derivation.
Experimental results show that topic adaptation and prototype encoding are mutually beneficial in the few-shot setting.
arXiv Detail & Related papers (2020-08-11T03:55:11Z)