Natural Language Commanding via Program Synthesis
 - URL: http://arxiv.org/abs/2306.03460v1
 - Date: Tue, 6 Jun 2023 07:28:49 GMT
 - Title: Natural Language Commanding via Program Synthesis
 - Authors: Apurva Gandhi, Thong Q. Nguyen, Huitian Jiao, Robert Steen, Ameya Bhatawdekar
 - Abstract summary: We present Semantic Interpreter, a natural language-friendly AI system for productivity software such as Microsoft Office.
LLMs are excellent at understanding user intent expressed as natural language, but they are not sufficient for fulfilling application-specific user intent.
We introduce the Office Domain Specific Language (ODSL), a concise, high-level language specialized for performing actions in and interacting with entities in Office applications.
 - Score: 0.29360071145551064
 - License: http://creativecommons.org/licenses/by-nc-nd/4.0/
 - Abstract:   We present Semantic Interpreter, a natural language-friendly AI system for
productivity software such as Microsoft Office that leverages large language
models (LLMs) to execute user intent across application features. While LLMs
are excellent at understanding user intent expressed as natural language, they
are not sufficient for fulfilling application-specific user intent that
requires more than text-to-text transformations. We therefore introduce the
Office Domain Specific Language (ODSL), a concise, high-level language
specialized for performing actions in and interacting with entities in Office
applications. Semantic Interpreter leverages an Analysis-Retrieval prompt
construction method with LLMs for program synthesis, translating natural
language user utterances to ODSL programs that can be transpiled to application
APIs and then executed. We focus our discussion primarily on a research
exploration for Microsoft PowerPoint.
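
The abstract describes a pipeline (analyze the utterance, retrieve few-shot ODSL examples, synthesize an ODSL program with an LLM, then transpile to application APIs and execute) but gives neither ODSL's syntax nor the prompt format. The following is therefore only a minimal Python sketch of that flow under stated assumptions: the capability categories, ODSL statement names, prompt template, and PowerPoint API methods are all illustrative, not the paper's implementation.

    # Minimal sketch of an Analysis-Retrieval -> synthesis -> transpile flow.
    # All ODSL statements, categories, and API method names are hypothetical.
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class ExampleBank:
        """Few-shot ODSL example programs indexed by capability category."""
        examples: Dict[str, List[str]]

        def retrieve(self, categories: List[str]) -> List[str]:
            # Retrieval step: keep only the examples relevant to the analysis.
            return [ex for c in categories for ex in self.examples.get(c, [])]

    def analyze(utterance: str) -> List[str]:
        # Analysis step: decide which capabilities the request touches.
        # (The paper's analysis is LLM-driven; keyword matching is a stand-in.)
        text = utterance.lower()
        keyword_map = [("insert_slide", "slide"), ("format_text", "bold")]
        categories = [cat for cat, kw in keyword_map if kw in text]
        return categories or ["general"]

    def synthesize(utterance: str, bank: ExampleBank, llm: Callable[[str], str]) -> str:
        # Program synthesis: prompt the LLM with retrieved examples and the
        # request, expecting an ODSL-like program back as plain text.
        shots = "\n\n".join(bank.retrieve(analyze(utterance)))
        prompt = f"{shots}\n\n# Request: {utterance}\n# Program:\n"
        return llm(prompt)

    def transpile_and_execute(program: str, api) -> None:
        # Transpile each statement to an application API call, then run it.
        for line in program.splitlines():
            if line.startswith("insert_slide"):
                api.add_slide()
            elif line.startswith("format_text"):
                api.set_font_bold(True)

Here `llm` is any text-completion callable and `api` stands in for PowerPoint object-model bindings; in the paper the synthesized ODSL is transpiled to real Office APIs rather than dispatched with string checks as above.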
 
       
      
        Related papers
- Skill Discovery for Software Scripting Automation via Offline Simulations with LLMs [63.10710876536337]
We propose an offline simulation framework to curate a software-specific skillset, a collection of verified scripts.
Our framework comprises two components; the first, task creation, uses top-down functionality and bottom-up API synergy exploration to generate helpful tasks.
 Experiments with Adobe Illustrator demonstrate that our framework significantly improves automation success rates, reduces response time, and saves runtime token costs.
arXiv Detail & Related papers (2025-04-29T04:03:37Z)
- Statically Contextualizing Large Language Models with Typed Holes [4.180458188910334]
Large language models (LLMs) have reshaped the landscape of program synthesis.
LLMs often hallucinate broken code because they lack appropriate context.
This paper demonstrates that tight integration with the type and binding structure of a language can address this contextualization problem.
arXiv Detail & Related papers (2024-09-02T03:29:00Z)
- Large Language Models are Interpretable Learners [53.56735770834617]
In this paper, we show a combination of Large Language Models (LLMs) and symbolic programs can bridge the gap between expressiveness and interpretability.
The pretrained LLM with natural language prompts provides a massive set of interpretable modules that can transform raw input into natural language concepts.
As the knowledge learned by an LSP (LLM-based Symbolic Program) is a combination of natural language descriptions and symbolic rules, it is easily transferable to humans (interpretable) and other LLMs.
arXiv Detail & Related papers (2024-06-25T02:18:15Z)
- Synthetic Programming Elicitation for Text-to-Code in Very Low-Resource Programming and Formal Languages [21.18996339478024]
We introduce synthetic programming elicitation and compilation (SPEAC).
SPEAC produces syntactically correct programs more frequently and without sacrificing semantic correctness.
We empirically evaluate the performance of SPEAC in a case study for the UCLID5 formal verification language.
arXiv Detail & Related papers (2024-06-05T22:16:19Z)
- AIOS Compiler: LLM as Interpreter for Natural Language Programming and Flow Programming of AI Agents [38.580779075892636]
We develop a novel system for Code Representation and Execution (CoRE).
The proposed system unifies natural language programming, pseudo-code programming, and flow programming under the same representation for constructing language agents.
During the execution, we incorporate external memory to minimize redundancy.
arXiv Detail & Related papers (2024-05-11T04:29:03Z)
- Natural Language as Policies: Reasoning for Coordinate-Level Embodied Control with LLMs [7.746160514029531]
We demonstrate experimental results with LLMs that address robotics task planning problems.
Our approach acquires text descriptions of the task and scene objects, then formulates task planning through natural language reasoning.
Our approach is evaluated on a multi-modal prompt simulation benchmark.
arXiv Detail & Related papers (2024-03-20T17:58:12Z)
- kNN-ICL: Compositional Task-Oriented Parsing Generalization with Nearest Neighbor In-Context Learning [50.40636157214161]
Task-Oriented Parsing (TOP) enables conversational assistants to interpret user commands expressed in natural language.
LLMs have achieved impressive performance in generating computer programs from natural language prompts.
This paper focuses on harnessing the capabilities of LLMs for semantic parsing tasks.
arXiv Detail & Related papers (2023-12-17T17:26:50Z)
- Interpreting User Requests in the Context of Natural Language Standing Instructions [89.12540932734476]
We develop NLSI, a language-to-program dataset consisting of over 2.4K dialogues spanning 17 domains.
A key challenge in NLSI is to identify which subset of the standing instructions is applicable to a given dialogue.
arXiv Detail & Related papers (2023-11-16T11:19:26Z)
- PADL: Language-Directed Physics-Based Character Control [66.517142635815]
We present PADL, which allows users to issue natural language commands for specifying high-level tasks and low-level skills that a character should perform.
We show that our framework can be applied to effectively direct a simulated humanoid character to perform a diverse array of complex motor skills.
arXiv Detail & Related papers (2023-01-31T18:59:22Z)
- Prompting Is Programming: A Query Language for Large Language Models [5.8010446129208155]
We present the novel idea of Language Model Programming (LMP).
LMP generalizes language model prompting from pure text prompts to an intuitive combination of text prompting and scripting.
We show that LMQL, the query language implementing LMP, can capture a wide range of state-of-the-art prompting methods in an intuitive way.
arXiv Detail & Related papers (2022-12-12T18:09:09Z)
- Zero-shot Cross-lingual Transfer of Prompt-based Tuning with a Unified Multilingual Prompt [98.26682501616024]
We propose a novel model that uses a unified prompt for all languages, called UniPrompt.
The unified prompt is computed by a multilingual PLM to produce a language-independent representation.
Our proposed methods can significantly outperform the strong baselines across different languages.
arXiv Detail & Related papers (2022-02-23T11:57:52Z)
- Leveraging Language to Learn Program Abstractions and Search Heuristics [66.28391181268645]
We introduce LAPS (Language for Abstraction and Program Search), a technique for using natural language annotations to guide joint learning of libraries and neurally-guided search models for synthesis.
When integrated into a state-of-the-art library learning system (DreamCoder), LAPS produces higher-quality libraries and improves search efficiency and generalization.
arXiv Detail & Related papers (2021-06-18T15:08:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
       
     