NL2INTERFACE: Interactive Visualization Interface Generation from
Natural Language Queries
- URL: http://arxiv.org/abs/2209.08834v1
- Date: Mon, 19 Sep 2022 08:31:50 GMT
- Title: NL2INTERFACE: Interactive Visualization Interface Generation from
Natural Language Queries
- Authors: Yiru Chen and Ryan Li and Austin Mac and Tianbao Xie and Tao Yu and
Eugene Wu
- Abstract summary: NL2INTERFACE generates interactive multi-visualization interfaces from natural language queries.
Users can interact with the interfaces to easily transform the data and quickly see the results in the visualizations.
- Score: 19.355412315639462
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop NL2INTERFACE to explore the potential of generating usable
interactive multi-visualization interfaces from natural language queries. With
NL2INTERFACE, users can directly write natural language queries to
automatically generate a fully interactive multi-visualization interface
without any extra effort of learning a tool or programming language. Further,
users can interact with the interfaces to easily transform the data and quickly
see the results in the visualizations.
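The abstract describes mapping natural language queries directly to interactive multi-visualization interfaces. As a purely illustrative sketch (not the authors' implementation; the function, keyword rules, and spec fields below are all hypothetical), a toy mapping from an NL query to a declarative chart spec with one interactive widget might look like:

```python
# Toy illustration of the NL-query -> interface idea: parse a natural
# language query into a hypothetical declarative chart spec, the kind of
# structured intermediate an NL-to-interface system might target.
import re

MARKS = {"bar": "bar", "line": "line", "scatter": "point"}

def query_to_spec(nl_query: str) -> dict:
    """Very naive keyword-based parse of an NL visualization query."""
    q = nl_query.lower()
    # Pick a chart mark from keywords; default to bar.
    mark = next((m for k, m in MARKS.items() if k in q), "bar")
    # "X by Y" reads as: plot field X aggregated over field Y.
    m = re.search(r"(\w+) by (\w+)", q)
    y, x = (m.group(1), m.group(2)) if m else ("value", "category")
    spec = {"mark": mark, "x": x, "y": y}
    # A "for each ..." phrase becomes an interactive widget parameter,
    # so the generated interface stays manipulable, not a static chart.
    f = re.search(r"for each (\w+)", q)
    if f:
        spec["widget"] = {"type": "dropdown", "field": f.group(1)}
    return spec

spec = query_to_spec("show average sales by month for each region as a line chart")
print(spec)
# → {'mark': 'line', 'x': 'month', 'y': 'sales',
#    'widget': {'type': 'dropdown', 'field': 'region'}}
```

A real system would target a richer interface grammar and handle far more varied phrasings; the point of the sketch is only the shape of the pipeline: free-form text in, structured interactive spec out.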
Related papers
- Athanor: Authoring Action Modification-based Interactions on Static Visualizations via Natural Language [9.92682960014568]
Athanor is a novel approach that transforms existing static visualizations into interactive ones using multimodal large language models and natural language instructions.
It allows users to author interactions through natural language alone, eliminating the need for programming.
arXiv Detail & Related papers (2026-01-25T08:08:42Z)
- YAC: Bridging Natural Language and Interactive Visual Exploration with Generative AI for Biomedical Data Discovery [27.577426841656788]
We bridge the gap between natural language and interactive visualizations by generating structured declarative output with a multi-agent system.
We include widgets, which allow users to adjust the values of that structured output through user interface elements.
arXiv Detail & Related papers (2025-09-23T15:57:42Z)
- Generative Interfaces for Language Models [70.25765232527762]
We propose a paradigm in which large language models (LLMs) respond to user queries by proactively generating user interfaces (UIs).
Our framework leverages structured interface-specific representations and iterative refinements to translate user queries into task-specific UIs.
Results show that generative interfaces consistently outperform conversational ones, with up to a 72% improvement in human preference.
arXiv Detail & Related papers (2025-08-26T17:43:20Z)
- InterChat: Enhancing Generative Visual Analytics using Multimodal Interactions [22.007942964950217]
We develop InterChat, a generative visual analytics system that combines direct manipulation of visual elements with natural language inputs.
This integration enables precise intent communication and supports progressive, visually driven exploratory data analyses.
arXiv Detail & Related papers (2025-03-06T05:35:19Z)
- Large Language User Interfaces: Voice Interactive User Interfaces powered by LLMs [5.06113628525842]
We present a framework that can serve as an intermediary between a user and their user interface (UI).
We employ a system that stands upon textual semantic mappings of UI components, in the form of annotations.
Our engine can classify the most appropriate application, extract relevant parameters, and subsequently execute precise predictions of the user's expected actions.
arXiv Detail & Related papers (2024-02-07T21:08:49Z)
- Natural Language Interfaces for Tabular Data Querying and Visualization: A Survey [30.836162812277085]
The rise of large language models (LLMs) has further advanced this field, opening new avenues for natural language processing techniques.
We introduce the fundamental concepts and techniques underlying these interfaces with a particular emphasis on semantic parsing.
This includes a deep dive into the influence of LLMs, highlighting their strengths, limitations, and potential for future improvements.
arXiv Detail & Related papers (2023-10-27T05:01:20Z)
- AmadeusGPT: a natural language interface for interactive animal behavioral analysis [65.55906175884748]
We introduce AmadeusGPT: a natural language interface that turns natural language descriptions of behaviors into machine-executable code.
We show we can produce state-of-the-art performance on the MABE 2022 behavior challenge tasks.
AmadeusGPT presents a novel way to merge deep biological knowledge, large-language models, and core computer vision modules into a more naturally intelligent system.
arXiv Detail & Related papers (2023-07-10T19:15:17Z)
- Interactive Natural Language Processing [67.87925315773924]
Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within the field of NLP.
This paper offers a comprehensive survey of iNLP, starting by proposing a unified definition and framework of the concept.
arXiv Detail & Related papers (2023-05-22T17:18:29Z)
- SHINE: Syntax-augmented Hierarchical Interactive Encoder for Zero-shot Cross-lingual Information Extraction [47.88887327545667]
In this study, a syntax-augmented hierarchical interactive encoder (SHINE) is proposed to transfer cross-lingual IE knowledge.
SHINE is capable of interactively capturing complementary information between features and contextual information.
Experiments across seven languages on three IE tasks and four benchmarks verify the effectiveness and generalization ability of the proposed method.
arXiv Detail & Related papers (2023-05-21T08:02:06Z)
- Enabling Conversational Interaction with Mobile UI using Large Language Models [15.907868408556885]
To perform diverse UI tasks with natural language, developers typically need to create separate datasets and models for each specific task.
This paper investigates the feasibility of enabling versatile conversational interactions with mobile UIs using a single language model.
arXiv Detail & Related papers (2022-09-18T20:58:39Z)
- CLEAR: Improving Vision-Language Navigation with Cross-Lingual, Environment-Agnostic Representations [98.30038910061894]
Vision-and-Language Navigation (VLN) tasks require an agent to navigate through the environment based on language instructions.
We propose CLEAR: Cross-Lingual and Environment-Agnostic Representations.
Our language and visual representations can be successfully transferred to the Room-to-Room and Cooperative Vision-and-Dialogue Navigation task.
arXiv Detail & Related papers (2022-07-05T17:38:59Z)
- Mobile App Tasks with Iterative Feedback (MoTIF): Addressing Task Feasibility in Interactive Visual Environments [54.405920619915655]
We introduce Mobile app Tasks with Iterative Feedback (MoTIF), a dataset with natural language commands for the greatest number of interactive environments to date.
MoTIF is the first to contain natural language requests for interactive environments that are not satisfiable.
We perform initial feasibility classification experiments and only reach an F1 score of 37.3, verifying the need for richer vision-language representations.
arXiv Detail & Related papers (2021-04-17T14:48:02Z)
- VECO: Variable and Flexible Cross-lingual Pre-training for Language Understanding and Generation [77.82373082024934]
We plug a cross-attention module into the Transformer encoder to explicitly build the interdependence between languages.
It can effectively avoid the degeneration of predicting masked words only conditioned on the context in its own language.
The proposed cross-lingual model delivers new state-of-the-art results on various cross-lingual understanding tasks of the XTREME benchmark.
arXiv Detail & Related papers (2020-10-30T03:41:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.