Interactive Text Generation
- URL: http://arxiv.org/abs/2303.00908v3
- Date: Sat, 11 Nov 2023 20:43:13 GMT
- Title: Interactive Text Generation
- Authors: Felix Faltings and Michel Galley and Baolin Peng and Kianté Brantley and Weixin Cai and Yizhe Zhang and Jianfeng Gao and Bill Dolan
- Abstract summary: We introduce a new Interactive Text Generation task that allows training generation models interactively without the costs of involving real users.
We train our interactive models using Imitation Learning, and our experiments against competitive non-interactive generation models show that models trained interactively are superior to their non-interactive counterparts.
- Score: 75.23894005664533
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Users interact with text, image, code, or other editors on a daily basis.
However, machine learning models are rarely trained in the settings that
reflect the interactivity between users and their editor. This is
understandable as training AI models with real users is not only slow and
costly, but what these models learn may be specific to user interface design
choices. Unfortunately, this means most of the research on text, code, and
image generation has focused on non-interactive settings, whereby the model is
expected to get everything right without accounting for any input from a user
who may be willing to help.
We introduce a new Interactive Text Generation task that allows training
generation models interactively without the costs of involving real users, by
using user simulators that provide edits that guide the model towards a given
target text. We train our interactive models using Imitation Learning, and our
experiments against competitive non-interactive generation models show that
models trained interactively are superior to their non-interactive
counterparts, even when all models are given the same budget of user inputs or
edits.
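To make the task concrete, the following is a minimal, self-contained sketch of the kind of interactive training loop the abstract describes: a user simulator compares the current draft with a target text, supplies one edit per turn within a fixed budget, and the generator is trained by imitation (behavior cloning) on those edits. The word-level edits, the difflib-based simulator, and the tabular stand-in for the learned editor are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of interactive text generation with a user simulator and
# imitation learning; names and data structures are illustrative assumptions.
import difflib
from collections import defaultdict

def oracle_edit(draft, target):
    """User simulator: return the first word-level edit moving the draft toward the target."""
    matcher = difflib.SequenceMatcher(a=draft, b=target)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != "equal":
            return (op, i1, i2, tuple(target[j1:j2]))  # e.g. ('insert', 1, 1, ('black',))
    return None  # draft already matches the target

def apply_edit(draft, edit):
    """Apply a single (op, start, end, new_words) edit to the draft."""
    _, i1, i2, new = edit
    return draft[:i1] + list(new) + draft[i2:]

# Tabular stand-in for a neural editor: it simply memorizes which edit the
# simulator issued in each state (imitation learning as behavior cloning).
policy = defaultdict(list)

def train_interactively(pairs, budget=5):
    for source, target in pairs:
        draft = list(source)
        for _ in range(budget):                # fixed budget of user edits
            edit = oracle_edit(draft, target)  # expert action from the simulator
            if edit is None:
                break
            policy[tuple(draft)].append(edit)  # clone the expert edit for this state
            draft = apply_edit(draft, edit)    # roll the interaction forward

if __name__ == "__main__":
    data = [("the cat sat".split(), "the black cat sat down".split())]
    train_interactively(data)
    for state, edits in policy.items():
        print(list(state), "->", edits)
```

In the paper's setting the tabular policy would be a neural generation model and the cloning step a cross-entropy loss on its predictions; the fixed budget mirrors the shared budget of user edits used when comparing interactive and non-interactive models.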
Related papers
- User-Specific Dialogue Generation with User Profile-Aware Pre-Training Model and Parameter-Efficient Fine-Tuning [2.2859366462875794]
User-specific dialogue generation aims to reproduce a real user's dialogue, going beyond persona-based dialogue.
Fine-tuning using the target user's dialogue history is an efficient learning method for a user-specific model.
We propose a learning method for user-specific models by combining parameter-efficient fine-tuning with a pre-trained dialogue model.
arXiv Detail & Related papers (2024-09-02T01:30:40Z)
- InteractiveVideo: User-Centric Controllable Video Generation with Synergistic Multimodal Instructions [23.536645072596656]
InteractiveVideo is a user-centric framework for video generation.
We propose a Synergistic Multimodal Instruction mechanism to seamlessly integrate users' multimodal instructions into generative models.
With InteractiveVideo, users are given the flexibility to meticulously tailor key aspects of a video.
arXiv Detail & Related papers (2024-02-05T14:24:46Z)
- Customization Assistant for Text-to-image Generation [40.76198867803018]
We propose a new framework consisting of a new model design and a novel training strategy.
The resulting assistant can perform customized generation in 2-5 seconds without any test time fine-tuning.
arXiv Detail & Related papers (2023-12-05T16:54:42Z)
- Labeled Interactive Topic Models [10.555664965166232]
We introduce a user-friendly interaction for neural topic models.
This interaction permits users to assign a word label to a topic.
We evaluate our method through a human study, where users can relabel topics to find relevant documents.
arXiv Detail & Related papers (2023-11-15T23:18:01Z)
- Unsupervised Neural Stylistic Text Generation using Transfer learning and Adapters [66.17039929803933]
We propose a novel transfer learning framework that updates only 0.3% of model parameters to learn style-specific attributes for response generation (see the hedged adapter sketch after this list).
We learn style-specific attributes from the PERSONALITY-CAPTIONS dataset.
arXiv Detail & Related papers (2022-10-07T00:09:22Z)
- X2T: Training an X-to-Text Typing Interface with Online Learning from User Feedback [83.95599156217945]
We focus on assistive typing applications in which a user cannot operate a keyboard, but can supply other inputs.
Standard methods train a model on a fixed dataset of user inputs, then deploy a static interface that does not learn from its mistakes.
We investigate a simple idea that would enable such interfaces to improve over time, with minimal additional effort from the user.
arXiv Detail & Related papers (2022-03-04T00:07:20Z)
- GenNI: Human-AI Collaboration for Data-Backed Text Generation [102.08127062293111]
Table2Text systems use machine learning to generate textual output from structured data.
GenNI (Generation Negotiation Interface) is an interactive visual system for high-level human-AI collaboration in producing descriptive text.
arXiv Detail & Related papers (2021-10-19T18:07:07Z)
- Federated Learning of User Verification Models Without Sharing Embeddings [73.27015469166166]
Federated User Verification (FedUV) is a framework in which users jointly learn a set of vectors and maximize the correlation of their instance embeddings with a secret linear combination of those vectors.
We show that choosing the linear combinations from the codewords of an error-correcting code allows users to collaboratively train the model without revealing their embedding vectors (a toy sketch of this correlation-based scoring appears after this list).
arXiv Detail & Related papers (2021-04-18T08:51:39Z)
- SOLOIST: Building Task Bots at Scale with Transfer Learning and Machine Teaching [81.45928589522032]
We parameterize modular task-oriented dialog systems using a Transformer-based auto-regressive language model.
We pre-train, on heterogeneous dialog corpora, a task-grounded response generation model.
Experiments show that SOLOIST creates new state-of-the-art on well-studied task-oriented dialog benchmarks.
arXiv Detail & Related papers (2020-05-11T17:58:34Z)
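For the adapter-based stylistic generation entry above, here is a hedged sketch (not that paper's code) of how bottleneck adapters let fine-tuning touch only a small fraction of parameters: the pre-trained weights stay frozen and only the inserted adapter modules are trained. The bottleneck width, hidden size, and the single-layer stand-in for the base model are assumptions.

```python
# Illustrative adapter-based parameter-efficient fine-tuning; the base model,
# sizes, and placement are assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, residual."""
    def __init__(self, hidden_size: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))  # residual preserves base behavior

hidden = 768
base_layer = nn.Linear(hidden, hidden)   # stand-in for one frozen pre-trained layer
for p in base_layer.parameters():
    p.requires_grad = False              # pre-trained weights are not updated

adapter = Adapter(hidden)                # only these weights are trained

trainable = sum(p.numel() for p in adapter.parameters())
frozen = sum(p.numel() for p in base_layer.parameters())
print(f"trainable: {trainable} of {trainable + frozen} parameters "
      f"({trainable / (trainable + frozen):.1%} in this toy setup)")

x = torch.randn(2, hidden)
y = adapter(base_layer(x))               # frozen base followed by trainable adapter
```

In a full Transformer the adapters sit inside every layer of a much larger frozen model, so the trainable share drops far below the toy figure printed here, in the spirit of the 0.3% reported above.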
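For the Federated User Verification entry, the following toy illustration (assumptions throughout, not the FedUV code) shows the mechanism the summary describes: a user keeps a secret ±1 codeword from an error-correcting code, forms a secret linear combination of a shared vector set, and training would push the user's instance embeddings to correlate with that combination, so neither the combination nor the embeddings need to be shared. The repetition code, dimensions, and noise model are stand-ins.

```python
# Toy sketch of correlation-based verification with a secret codeword;
# the code, sizes, and scoring below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
k, d = 8, 16                              # number of shared vectors / embedding size
V = rng.standard_normal((k, d))           # shared vector set, known to every user

def repetition_codeword(bit, length=k):
    """Toy error-correcting code: repeat one +/-1 symbol `length` times."""
    return np.full(length, 1.0 if bit else -1.0)

c_user = repetition_codeword(bit=True)    # secret codeword, kept by the user
w_user = c_user @ V                       # secret linear combination of shared vectors

def correlation(embedding, w):
    """Cosine correlation the training objective would maximize for genuine inputs."""
    return float(embedding @ w / (np.linalg.norm(embedding) * np.linalg.norm(w) + 1e-9))

# An embedding trained toward w_user scores high; an unrelated one does not.
genuine = w_user + 0.1 * rng.standard_normal(d)
impostor = rng.standard_normal(d)
print("genuine:", round(correlation(genuine, w_user), 3))
print("impostor:", round(correlation(impostor, w_user), 3))
```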