generAItor: Tree-in-the-Loop Text Generation for Language Model
Explainability and Adaptation
- URL: http://arxiv.org/abs/2403.07627v1
- Date: Tue, 12 Mar 2024 13:09:15 GMT
- Title: generAItor: Tree-in-the-Loop Text Generation for Language Model
Explainability and Adaptation
- Authors: Thilo Spinner, Rebecca Kehlbeck, Rita Sevastjanova, Tobias Stähle,
Daniel A. Keim, Oliver Deussen, Mennatallah El-Assady
- Abstract summary: Large language models (LLMs) are widely deployed in various downstream tasks, e.g., auto-completion, aided writing, or chat-based text generation, yet the output candidates considered by the underlying search algorithm remain under-explored and under-explained.
We tackle this shortcoming by proposing a tree-in-the-loop approach, where a visual representation of the beam search tree is the central component for analyzing, explaining, and adapting the generated outputs.
We present generAItor, a visual analytics technique, augmenting the central beam search tree with various task-specific widgets, providing targeted visualizations and interaction possibilities.
- Score: 28.715001906405362
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) are widely deployed in various downstream tasks,
e.g., auto-completion, aided writing, or chat-based text generation. However,
the considered output candidates of the underlying search algorithm are
under-explored and under-explained. We tackle this shortcoming by proposing a
tree-in-the-loop approach, where a visual representation of the beam search
tree is the central component for analyzing, explaining, and adapting the
generated outputs. To support these tasks, we present generAItor, a visual
analytics technique, augmenting the central beam search tree with various
task-specific widgets, providing targeted visualizations and interaction
possibilities. Our approach allows interactions on multiple levels and offers
an iterative pipeline that encompasses generating, exploring, and comparing
output candidates, as well as fine-tuning the model based on adapted data. Our
case study shows that our tool generates new insights in gender bias analysis
beyond state-of-the-art template-based methods. Additionally, we demonstrate
the applicability of our approach in a qualitative user study. Finally, we
quantitatively evaluate the adaptability of the model to the few samples
typically available in text-generation use cases.
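To make the paper's central object concrete, the following is a minimal sketch of surfacing beam search candidates and their scores, assuming the Hugging Face transformers API; the model, prompt, and generation parameters are illustrative choices, not those used in the paper.

```python
# A minimal sketch, assuming the Hugging Face `transformers` API.
# Model, prompt, and generation parameters are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The nurse said that", return_tensors="pt")

# Ask beam search to return every surviving beam with its score,
# so the runner-up candidates are not silently discarded.
outputs = model.generate(
    **inputs,
    max_new_tokens=10,
    num_beams=5,
    num_return_sequences=5,  # keep all beams, not just the best one
    output_scores=True,
    return_dict_in_generate=True,
    early_stopping=True,
)

# `sequences_scores` holds each beam's length-normalized log-probability;
# the decoded texts are root-to-leaf paths of the beam search tree.
for seq, score in zip(outputs.sequences, outputs.sequences_scores):
    print(f"{score.item():+.3f}  {tokenizer.decode(seq, skip_special_tokens=True)}")
```

Each returned sequence is one root-to-leaf path of the beam search tree; the generAItor approach visualizes these paths and the branching probabilities between them rather than discarding the runner-up candidates.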
Related papers
- Long-Span Question-Answering: Automatic Question Generation and QA-System Ranking via Side-by-Side Evaluation [65.16137964758612]
We explore the use of long-context capabilities in large language models to create synthetic reading comprehension data from entire books.
Our objective is to test the capabilities of LLMs to analyze, understand, and reason over problems that require a detailed comprehension of long spans of text.
arXiv Detail & Related papers (2024-05-31T20:15:10Z)
- Multi-Level Explanations for Generative Language Models [45.82956216020136]
Perturbation-based explanation methods such as LIME and SHAP are commonly applied to text classification.
This work focuses on their extension to generative language models.
We propose a general framework called MExGen that can be instantiated with different attribution algorithms (a minimal occlusion-style sketch follows this entry).
arXiv Detail & Related papers (2024-03-21T15:06:14Z)
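As a concrete illustration of the idea above, here is a minimal occlusion-style sketch of perturbation-based attribution for a generative model: drop one prompt token at a time and measure how much the continuation's log-probability falls. This is an illustrative baseline in the spirit of LIME/SHAP-style methods, not the MExGen algorithm itself; the model and texts are arbitrary stand-ins.

```python
# A minimal occlusion-style sketch of perturbation-based attribution for
# a generative LM. Illustrative baseline only, not the MExGen algorithm;
# model and texts are arbitrary stand-ins.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def continuation_logprob(prompt_ids, cont_ids):
    """Log-probability the model assigns to `cont_ids` after `prompt_ids`."""
    ids = torch.cat([prompt_ids, cont_ids]).unsqueeze(0)
    with torch.no_grad():
        logprobs = torch.log_softmax(model(ids).logits[0], dim=-1)
    start = prompt_ids.numel() - 1  # logits[i] predict token i + 1
    return sum(logprobs[start + i, tok].item() for i, tok in enumerate(cont_ids))

prompt_ids = tokenizer("The doctor picked up", return_tensors="pt").input_ids[0]
cont_ids = tokenizer(" her bag", return_tensors="pt").input_ids[0]
base = continuation_logprob(prompt_ids, cont_ids)

# Occlude one prompt token at a time; a large drop in the continuation's
# log-probability marks that token as influential for this output.
for i in range(prompt_ids.numel()):
    kept = torch.cat([prompt_ids[:i], prompt_ids[i + 1:]])
    drop = base - continuation_logprob(kept, cont_ids)
    token = tokenizer.decode([int(prompt_ids[i])])
    print(f"{token!r:>12}  attribution={drop:+.3f}")
```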
- Revealing the Unwritten: Visual Investigation of Beam Search Trees to Address Language Model Prompting Challenges [29.856694782121448]
We identify several challenges associated with prompting large language models, categorized into data- and model-specific, linguistic, and socio-linguistic challenges.
A comprehensive examination of model outputs, including runner-up candidates and their corresponding probabilities, is needed to address these issues.
We introduce an interactive visual method for investigating the beam search tree, facilitating analysis of the decisions made by the model during generation (a prefix-tree sketch follows this entry).
arXiv Detail & Related papers (2023-10-17T13:20:16Z)
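The sketch below approximates the entry's beam-search-tree view in plain text: merge the token sequences of several beams into a prefix tree and print it with indentation. The hard-coded beams stand in for tokenized model outputs; the tool described in the entry renders a far richer interactive visualization.

```python
# A minimal sketch of merging beams into a printable prefix tree. The
# hard-coded beams stand in for tokenized model outputs; the entry's
# tool renders a far richer interactive view.
def build_tree(sequences):
    """Merge token sequences into a nested dict keyed by token."""
    tree = {}
    for seq in sequences:
        node = tree
        for tok in seq:
            node = node.setdefault(tok, {})
    return tree

def print_tree(node, indent=0):
    """Print the tree depth-first; shared prefixes appear only once."""
    for tok, child in node.items():
        print("  " * indent + repr(tok))
        print_tree(child, indent + 1)

beams = [
    ["The", " nurse", " said", " that", " she"],
    ["The", " nurse", " said", " that", " he"],
    ["The", " nurse", " said", " the"],
]
print_tree(build_tree(beams))
```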
- Generative Judge for Evaluating Alignment [84.09815387884753]
We propose a generative judge with 13B parameters, Auto-J, designed to address these challenges.
Our model is trained on user queries and LLM-generated responses from a massive set of real-world scenarios.
Experimentally, Auto-J outperforms a series of strong competitors, including both open-source and closed-source models.
arXiv Detail & Related papers (2023-10-09T07:27:15Z)
- An Overview on Controllable Text Generation via Variational Auto-Encoders [15.97186478109836]
Recent advances in neural-based generative modeling have reignited the hopes of having computer systems capable of conversing with humans.
Latent variable models (LVM) such as variational auto-encoders (VAEs) are designed to characterize the distributional pattern of textual data.
This overview introduces existing generation schemes, problems associated with text variational auto-encoders, and several applications of controllable generation (a minimal VAE sketch follows this entry).
arXiv Detail & Related papers (2022-11-15T07:36:11Z)
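For readers unfamiliar with the machinery this entry surveys, here is a minimal text-VAE sketch in PyTorch showing the two pieces every such model shares: the reparameterization step that samples a latent code differentiably, and the closed-form KL term that regularizes it toward a standard Gaussian. Architecture and dimensions are illustrative assumptions; real text VAEs add target shifting, KL annealing, and more.

```python
# A minimal text-VAE sketch in PyTorch. Architecture and dimensions are
# illustrative; real text VAEs add target shifting, KL annealing, etc.
import torch
import torch.nn as nn

class TinyTextVAE(nn.Module):
    def __init__(self, vocab=1000, emb=64, latent=16):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder = nn.GRU(emb, emb, batch_first=True)
        self.to_mu = nn.Linear(emb, latent)
        self.to_logvar = nn.Linear(emb, latent)
        self.from_z = nn.Linear(latent, emb)
        self.decoder = nn.GRU(emb, emb, batch_first=True)
        self.out = nn.Linear(emb, vocab)

    def forward(self, tokens):
        _, h = self.encoder(self.embed(tokens))      # h: (1, batch, emb)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        # Reparameterization: sample z from N(mu, sigma^2) differentiably
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # Condition the decoder on z through its initial hidden state
        dec, _ = self.decoder(self.embed(tokens), self.from_z(z).unsqueeze(0))
        # Closed-form KL(q(z|x) || N(0, I)) for diagonal Gaussians
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
        return self.out(dec), kl.mean()

vae = TinyTextVAE()
logits, kl = vae(torch.randint(0, 1000, (2, 7)))     # (batch, seq_len)
print(logits.shape, kl.item())                        # torch.Size([2, 7, 1000])
```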
- Generalization Properties of Retrieval-based Models [50.35325326050263]
Retrieval-based machine learning methods have enjoyed success on a wide range of problems.
Despite growing literature showcasing the promise of these models, the theoretical underpinning for such models remains underexplored.
We present a formal treatment of retrieval-based models to characterize their generalization ability (a nearest-neighbor sketch of this model family follows this entry).
arXiv Detail & Related papers (2022-10-06T00:33:01Z)
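As a concrete instance of the model family analyzed above, here is a minimal retrieval-based predictor: it stores the training set verbatim and labels a query by the majority vote of its k nearest stored examples. This is an illustrative nearest-neighbor baseline under assumed toy data, not the paper's formal setup.

```python
# A minimal retrieval-based predictor: nearest-neighbor majority vote.
# Illustrative of the model family only, not the paper's formal setup.
from collections import Counter
import numpy as np

class KNNRetrievalModel:
    def __init__(self, k=3):
        self.k = k

    def fit(self, X, y):
        # "Training" just stores the examples for later retrieval
        self.X, self.y = np.asarray(X, dtype=float), np.asarray(y)
        return self

    def predict(self, query):
        # Retrieve the k stored examples closest to the query ...
        dists = np.linalg.norm(self.X - np.asarray(query, dtype=float), axis=1)
        neighbors = self.y[np.argsort(dists)[: self.k]]
        # ... and aggregate their labels locally, with no global parameters
        return Counter(neighbors.tolist()).most_common(1)[0][0]

model = KNNRetrievalModel(k=3).fit(
    [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]],
    [0, 0, 0, 1, 1, 1],
)
print(model.predict([0.5, 0.5]), model.predict([5.5, 5.5]))  # 0 1
```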
- CorpusBrain: Pre-train a Generative Retrieval Model for Knowledge-Intensive Language Tasks [62.22920673080208]
A single-step generative model can dramatically simplify the search process and be optimized in an end-to-end manner.
We name the pre-trained generative retrieval model CorpusBrain, since all information about the corpus is encoded in its parameters without the need to construct an additional index.
arXiv Detail & Related papers (2022-08-16T10:22:49Z)
- A Unified Understanding of Deep NLP Models for Text Classification [88.35418976241057]
We have developed a visual analysis tool, DeepNLPVis, to enable a unified understanding of NLP models for text classification.
The key idea is a mutual information-based measure, which provides quantitative explanations on how each layer of a model maintains the information of input words in a sample.
A multi-level visualization, which consists of a corpus-level, a sample-level, and a word-level visualization, supports the analysis from the overall training set to individual samples.
arXiv Detail & Related papers (2022-06-19T08:55:07Z)
- Generating More Pertinent Captions by Leveraging Semantics and Style on Multi-Source Datasets [56.018551958004814]
This paper addresses the task of generating fluent descriptions by training on a non-uniform combination of data sources.
Large-scale datasets with noisy image-text pairs provide a sub-optimal source of supervision.
We propose to leverage and separate semantics and descriptive style through the incorporation of a style token and keywords extracted through a retrieval component.
arXiv Detail & Related papers (2021-11-24T19:00:05Z)
- Detection and Captioning with Unseen Object Classes [12.894104422808242]
Test images may contain visual objects with no corresponding visual or textual training examples.
We propose a detection-driven approach based on a generalized zero-shot detection model and a template-based sentence generation model (a template-filling sketch follows this entry).
Our experiments show that the proposed zero-shot detection model obtains state-of-the-art performance on the MS-COCO dataset.
arXiv Detail & Related papers (2021-08-13T10:43:20Z)
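To illustrate the second stage of the entry's pipeline, here is a minimal template-filling sketch that turns detected object classes into a sentence. The detections are hard-coded stand-ins for the output of a generalized zero-shot detector; the template and field names are hypothetical.

```python
# A minimal template-filling sketch: detected classes in, sentence out.
# The detections are hard-coded stand-ins for a zero-shot detector's
# output; template and field names are hypothetical.
def caption_from_detections(detections):
    """Fill a fixed template with class names, most confident first."""
    names = [d["class"] for d in sorted(detections, key=lambda d: -d["score"])]
    if not names:
        return "An image."
    if len(names) == 1:
        return f"A photo of a {names[0]}."
    return f"A photo of a {', a '.join(names[:-1])} and a {names[-1]}."

detections = [
    {"class": "zebra", "score": 0.91, "box": [10, 20, 200, 180]},
    {"class": "tree", "score": 0.62, "box": [150, 0, 320, 240]},
]
print(caption_from_detections(detections))  # A photo of a zebra and a tree.
```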