GenNI: Human-AI Collaboration for Data-Backed Text Generation
- URL: http://arxiv.org/abs/2110.10185v1
- Date: Tue, 19 Oct 2021 18:07:07 GMT
- Title: GenNI: Human-AI Collaboration for Data-Backed Text Generation
- Authors: Hendrik Strobelt, Jambay Kinley, Robert Krueger, Johanna Beyer,
Hanspeter Pfister, Alexander M. Rush
- Abstract summary: Table2Text systems generate textual output based on structured data utilizing machine learning.
GenNI (Generation Negotiation Interface) is an interactive visual system for high-level human-AI collaboration in producing descriptive text.
- Score: 102.08127062293111
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Table2Text systems generate textual output based on structured data utilizing
machine learning. These systems are essential for fluent natural language
interfaces in tools such as virtual assistants; however, left to generate
freely these ML systems often produce misleading or unexpected outputs. GenNI
(Generation Negotiation Interface) is an interactive visual system for
high-level human-AI collaboration in producing descriptive text. The tool
utilizes a deep learning model designed with explicit control states. These
controls allow users to globally constrain model generations, without
sacrificing the representation power of the deep learning models. The visual
interface makes it possible for users to interact with AI systems following a
Refine-Forecast paradigm to ensure that the generation system acts in a manner
human users find suitable. We report multiple use cases on two experiments that
improve over uncontrolled generation approaches, while at the same time
providing fine-grained control. A demo and source code are available at
https://genni.vizhub.ai .
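The abstract's key mechanism is decoding under explicit control states so that user constraints hold globally. As a rough illustration only (this is not GenNI's actual model or API; the plan format, token sets, and function names below are all invented for the sketch), one can picture each control state as a whitelist that masks the model's scores at every step:

```python
# Illustrative sketch, NOT GenNI's implementation: greedy decoding where each
# step is masked by an explicit control state, so user constraints hold
# globally regardless of the model's raw scores.

# Hypothetical control-state plan a user might author in the interface:
# emit the record's name, then its value, then a closing phrase.
CONTROL_PLAN = ["NAME", "VALUE", "CLOSE"]

# Tokens each control state permits (toy vocabulary).
ALLOWED = {
    "NAME": {"Berlin", "Paris"},
    "VALUE": {"3.6M", "2.1M"},
    "CLOSE": {"residents.", "people."},
}

def toy_scores(prefix, vocab):
    """Stand-in for a learned model: score every vocabulary token."""
    return {tok: -len(tok) for tok in vocab}  # toy heuristic: prefer short tokens

def constrained_generate(record, plan=CONTROL_PLAN):
    """Greedy decoding; the control state filters candidates before argmax."""
    out = []
    vocab = set().union(*ALLOWED.values())
    for state in plan:
        scores = toy_scores(out, vocab)
        # Mask: keep only tokens the control state allows and, if the input
        # record pins this field, only tokens backed by the record.
        candidates = {t: s for t, s in scores.items()
                      if t in ALLOWED[state] and t in record.get(state, ALLOWED[state])}
        out.append(max(candidates, key=candidates.get))
    return " ".join(out)

print(constrained_generate({"NAME": {"Berlin"}, "VALUE": {"3.6M"}}))
```

The point of the sketch is the masking step: because filtering happens outside the scoring function, the constraint is enforced no matter what the underlying model prefers, which mirrors the abstract's claim of global control without retraining the model.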
Related papers
- Generative AI Systems: A Systems-based Perspective on Generative AI [12.400966570867322]
Large Language Models (LLMs) have revolutionized AI systems by enabling communication with machines using natural language.
Recent developments in Generative AI (GenAI) have shown great promise in using LLMs as multimodal systems.
This paper aims to explore and state new research directions in Generative AI Systems.
arXiv Detail & Related papers (2024-06-25T12:51:47Z)
- Text2Data: Low-Resource Data Generation with Textual Control [104.38011760992637]
Natural language serves as a common and straightforward control signal for humans to interact seamlessly with machines.
We propose Text2Data, a novel approach that utilizes unlabeled data to understand the underlying data distribution through an unsupervised diffusion model.
It undergoes controllable finetuning via a novel constraint optimization-based learning objective that ensures controllability and effectively counteracts catastrophic forgetting.
arXiv Detail & Related papers (2024-02-08T03:41:39Z)
- Language-Driven Representation Learning for Robotics [115.93273609767145]
Recent work in visual representation learning for robotics demonstrates the viability of learning from large video datasets of humans performing everyday tasks.
We introduce a framework for language-driven representation learning from human videos and captions.
We find that Voltron's language-driven learning outperforms the prior state-of-the-art, especially on targeted problems requiring higher-level control.
arXiv Detail & Related papers (2023-02-24T17:29:31Z)
- An Overview on Controllable Text Generation via Variational Auto-Encoders [15.97186478109836]
Recent advances in neural-based generative modeling have reignited the hopes of having computer systems capable of conversing with humans.
Latent variable models (LVM) such as variational auto-encoders (VAEs) are designed to characterize the distributional pattern of textual data.
This overview gives an introduction to existing generation schemes, problems associated with text variational auto-encoders, and a review of several applications of controllable generation.
arXiv Detail & Related papers (2022-11-15T07:36:11Z)
- Robust Preference Learning for Storytelling via Contrastive Reinforcement Learning [53.92465205531759]
Controlled automated story generation seeks to generate natural language stories satisfying constraints from natural language critiques or preferences.
We train a contrastive bi-encoder model to align stories with human critiques, building a general-purpose preference model.
We further fine-tune the contrastive reward model using a prompt-learning technique to increase story generation robustness.
arXiv Detail & Related papers (2022-10-14T13:21:33Z)
- Reshaping Robot Trajectories Using Natural Language Commands: A Study of Multi-Modal Data Alignment Using Transformers [33.7939079214046]
We provide a flexible language-based interface for human-robot collaboration.
We take advantage of recent advancements in the field of large language models to encode the user command.
We train the model using imitation learning over a dataset containing robot trajectories modified by language commands.
arXiv Detail & Related papers (2022-03-25T01:36:56Z)
- Plug-and-Blend: A Framework for Controllable Story Generation with Blended Control Codes [11.053902512072813]
We describe a controllable language generation framework, Plug-and-Blend, that allows a human user to input multiple control codes (topics).
In the context of automated story generation, this gives a human user loose or fine-grained control over the topics and the transitions between them.
A human participant evaluation shows that the generated stories are observably transitioning between two topics.
arXiv Detail & Related papers (2021-03-23T03:15:14Z)
- Learning Adaptive Language Interfaces through Decomposition [89.21937539950966]
We introduce a neural semantic parsing system that learns new high-level abstractions through decomposition.
Users interactively teach the system by breaking down high-level utterances describing novel behavior into low-level steps.
arXiv Detail & Related papers (2020-10-11T08:27:07Z)
- On the interaction between supervision and self-play in emergent communication [82.290338507106]
We investigate the relationship between two categories of learning signals with the ultimate goal of improving sample efficiency.
We find that first training agents via supervised learning on human data followed by self-play outperforms the converse.
arXiv Detail & Related papers (2020-02-04T02:35:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.