PwR: Exploring the Role of Representations in Conversational Programming
- URL: http://arxiv.org/abs/2309.09495v1
- Date: Mon, 18 Sep 2023 05:38:23 GMT
- Title: PwR: Exploring the Role of Representations in Conversational Programming
- Authors: Pradyumna YM, Vinod Ganesan, Dinesh Kumar Arumugam, Meghna Gupta,
Nischith Shadagopan, Tanay Dixit, Sameer Segal, Pratyush Kumar, Mohit Jain,
Sriram Rajamani
- Abstract summary: We introduce Programming with Representations (PwR), an approach that uses representations to convey the system's understanding back to the user in natural language.
We find that representations significantly improve understandability and instill a sense of agency among our participants.
- Score: 17.838776812138626
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Models (LLMs) have revolutionized programming and software
engineering. AI programming assistants such as GitHub Copilot X enable
conversational programming, narrowing the gap between human intent and code
generation. However, prior literature has identified a key challenge--there is
a gap between the user's mental model of the system's understanding after a
sequence of natural language utterances, and the AI system's actual
understanding. To address this, we introduce Programming with Representations
(PwR), an approach that uses representations to convey the system's
understanding back to the user in natural language. We conducted an in-lab
task-centered study with 14 users of varying programming proficiency and found
that representations significantly improved understandability and instilled a
sense of agency among our participants. Expert programmers use them for
verification, while intermediate programmers benefit from confirmation. Natural
language-based development with LLMs, coupled with representations, promises to
transform software development, making it more accessible and efficient.
Related papers
- Programming with AI: Evaluating ChatGPT, Gemini, AlphaCode, and GitHub Copilot for Programmers [0.0]
This study presents a thorough evaluation of leading programming assistants, including ChatGPT, Gemini (Bard AI), AlphaCode, and GitHub Copilot.
It emphasizes the need for ethical developmental practices to actualize AI models' full potential.
arXiv Detail & Related papers (2024-11-14T06:40:55Z)
- Symbolic Learning Enables Self-Evolving Agents [55.625275970720374]
We introduce agent symbolic learning, a systematic framework that enables language agents to optimize themselves on their own.
Agent symbolic learning is designed to optimize the symbolic network within language agents by mimicking two fundamental algorithms in connectionist learning.
We conduct proof-of-concept experiments on both standard benchmarks and complex real-world tasks.
arXiv Detail & Related papers (2024-06-26T17:59:18Z)
- AIOS Compiler: LLM as Interpreter for Natural Language Programming and Flow Programming of AI Agents [38.580779075892636]
We develop a novel system for Code Representation and Execution (CoRE)
The proposed system unifies natural language programming, pseudo-code programming, and flow programming under the same representation for constructing language agents.
During the execution, we incorporate external memory to minimize redundancy.
arXiv Detail & Related papers (2024-05-11T04:29:03Z)
- CodeGRAG: Bridging the Gap between Natural Language and Programming Language via Graphical Retrieval Augmented Generation [58.84212778960507]
We propose CodeGRAG, a Graphical Retrieval Augmented Code Generation framework to enhance the performance of LLMs.
CodeGRAG builds a graphical view of code blocks based on their control flow and data flow to fill the gap between programming languages and natural language.
Various experiments and ablations are done on four datasets, covering both the C++ and Python languages, to validate the hard meta-graph prompt, the soft prompting technique, and the effectiveness of the objectives for the pretrained GNN expert.
arXiv Detail & Related papers (2024-05-03T02:48:55Z)
- Learning a Hierarchical Planner from Humans in Multiple Generations [21.045112705349222]
We present natural programming, a library learning system that combines programmatic learning with a hierarchical planner.
A user teaches the system via curriculum building, by identifying a challenging yet not impossible goal.
The system solves for the goal via hierarchical planning, using the linguistic hints to guide its probability distribution.
arXiv Detail & Related papers (2023-10-17T22:28:13Z)
- ChatDev: Communicative Agents for Software Development [84.90400377131962]
ChatDev is a chat-powered software development framework in which specialized agents are guided in what to communicate.
These agents actively contribute to the design, coding, and testing phases through unified language-based communication.
arXiv Detail & Related papers (2023-07-16T02:11:34Z)
- PADL: Language-Directed Physics-Based Character Control [66.517142635815]
We present PADL, which allows users to issue natural language commands for specifying high-level tasks and low-level skills that a character should perform.
We show that our framework can be applied to effectively direct a simulated humanoid character to perform a diverse array of complex motor skills.
arXiv Detail & Related papers (2023-01-31T18:59:22Z)
- What is it like to program with artificial intelligence? [10.343988028594612]
Large language models can generate code to solve a variety of problems expressed in natural language.
This technology has already been commercialised in at least one widely-used programming editor extension: GitHub Copilot.
We explore how programming with large language models (LLM-assisted programming) is similar to, and differs from, prior conceptualisations of programmer assistance.
arXiv Detail & Related papers (2022-08-12T10:48:46Z)
- A Conversational Paradigm for Program Synthesis [110.94409515865867]
We propose a conversational program synthesis approach via large language models.
We train a family of large language models, called CodeGen, on natural language and programming language data.
Our findings show the emergence of conversational capabilities and the effectiveness of the proposed conversational program synthesis paradigm.
arXiv Detail & Related papers (2022-03-25T06:55:15Z)
- How could Neural Networks understand Programs? [67.4217527949013]
It is difficult to build a model to better understand programs, either by directly applying off-the-shelf NLP pre-training techniques to the source code, or by adding features to the model through heuristics.
We propose a novel program semantics learning paradigm, that the model should learn from information composed of (1) the representations which align well with the fundamental operations in operational semantics, and (2) the information of environment transition.
arXiv Detail & Related papers (2021-05-10T12:21:42Z)
- Convo: What does conversational programming need? An exploration of machine learning interface design [8.831954614241232]
We compare different input methods to a conversational programming system we developed.
Participants completed novice and advanced tasks using voice-based, text-based, and voice-or-text-based systems.
Results show that future conversational programming tools should be tailored to users' programming experience.
arXiv Detail & Related papers (2020-03-03T03:39:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.