Using an LLM to Help With Code Understanding
- URL: http://arxiv.org/abs/2307.08177v3
- Date: Tue, 16 Jan 2024 19:56:18 GMT
- Title: Using an LLM to Help With Code Understanding
- Authors: Daye Nam and Andrew Macvean and Vincent Hellendoorn and Bogdan Vasilescu and Brad Myers
- Abstract summary: Large language models (LLMs) are revolutionizing the process of writing code.
Our plugin queries OpenAI's GPT-3.5-turbo model with four high-level requests without the user having to write explicit prompts.
We evaluate this system in a user study with 32 participants, which confirms that using our plugin can aid task completion more than web search.
- Score: 13.53616539787915
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Understanding code is challenging, especially when working in new and complex
development environments. Code comments and documentation can help, but are
typically scarce or hard to navigate. Large language models (LLMs) are
revolutionizing the process of writing code. Can they do the same for helping
understand it? In this study, we provide a first investigation of an LLM-based
conversational UI built directly in the IDE that is geared towards code
understanding. Our IDE plugin queries OpenAI's GPT-3.5-turbo model with four
high-level requests without the user having to write explicit prompts: to
explain a highlighted section of code, provide details of API calls used in the
code, explain key domain-specific terms, and provide usage examples for an API.
The plugin also allows for open-ended prompts, which are automatically
contextualized to the LLM with the program being edited. We evaluate this
system in a user study with 32 participants, which confirms that using our
plugin can aid task completion more than web search. We additionally provide a
thorough analysis of the ways developers use, and perceive the usefulness of,
our system, finding among other results that the usage and benefits differ between
students and professionals. We conclude that in-IDE prompt-less interaction
with LLMs is a promising future direction for tool builders.
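The abstract describes a prompt-less interaction pattern: the plugin slots a highlighted code region into one of four fixed request templates before calling the model, so the user never writes a prompt. The sketch below illustrates that pattern under stated assumptions; the template wording, function names, and use of the OpenAI Python client are ours, not the plugin's published code.

```python
# Minimal sketch of prompt-less, templated requests about highlighted code.
# Template text and names are illustrative assumptions, not the plugin's code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical stand-ins for the four high-level requests named in the abstract.
TEMPLATES = {
    "explain": "Explain what the following code does:\n\n{code}",
    "api_details": "Describe the API calls used in the following code:\n\n{code}",
    "concepts": "Explain the key domain-specific terms in the following code:\n\n{code}",
    "usage": "Provide usage examples for the APIs in the following code:\n\n{code}",
}

def ask(request_kind: str, highlighted_code: str) -> str:
    """Send a templated, prompt-less request about a highlighted code region."""
    prompt = TEMPLATES[request_kind].format(code=highlighted_code)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(ask("explain", "def fib(n):\n    return n if n < 2 else fib(n-1) + fib(n-2)"))
```

Open-ended prompts could be handled the same way by prepending the file being edited as context, matching the automatic contextualization the abstract describes.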
Related papers
- Codellm-Devkit: A Framework for Contextualizing Code LLMs with Program Analysis Insights [9.414198519543564]
We present codellm-devkit (hereafter, CLDK), an open-source library that significantly simplifies the process of performing program analysis.
CLDK offers developers an intuitive and user-friendly interface, making it incredibly easy to provide rich program analysis context to code LLMs.
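As a rough illustration of what "program analysis context" can mean here, the following hypothetical sketch (not CLDK's actual API) inlines lightweight static-analysis facts into a prompt using Python's ast module:

```python
# Hypothetical illustration of enriching an LLM prompt with static-analysis
# context; this is the general idea only, not CLDK's interface.
import ast

def call_names(source: str) -> list[str]:
    """Collect the names of functions called anywhere in a Python source file."""
    tree = ast.parse(source)
    return sorted({
        node.func.id
        for node in ast.walk(tree)
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
    })

def build_prompt(source: str, question: str) -> str:
    # Inline analysis results so the model sees program facts, not just raw text.
    context = ", ".join(call_names(source)) or "none"
    return f"Functions called in this file: {context}\n\n{source}\n\n{question}"
```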
arXiv Detail & Related papers (2024-10-16T20:05:59Z)
- What You Need is What You Get: Theory of Mind for an LLM-Based Code Understanding Assistant [0.0]
A growing number of tools have used Large Language Models (LLMs) to support developers' code understanding.
In this study, we designed an LLM-based conversational assistant that provides a personalized interaction based on inferred user mental state.
Our results provide insights for researchers and tool builders who want to create or improve LLM-based conversational assistants to support novices in code understanding.
arXiv Detail & Related papers (2024-08-08T14:08:15Z)
- BISCUIT: Scaffolding LLM-Generated Code with Ephemeral UIs in Computational Notebooks [14.640473990776691]
We introduce a novel workflow into computational notebooks that augments LLM-based code generation with an additional ephemeral UI step.
We present this workflow in BISCUIT, an extension for JupyterLab that provides users with ephemeral UIs generated by LLMs.
We found that BISCUIT offers users representations of code to aid their understanding, reduces the complexity of prompt engineering, and creates a playground for users to explore different variables.
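The ephemeral-UI idea can be pictured as follows: parameters that an LLM hard-coded into generated code are surfaced as short-lived notebook widgets, so users adjust values instead of re-prompting. The sketch below uses ipywidgets and is a hypothetical illustration, not BISCUIT's implementation.

```python
# Hypothetical sketch of ephemeral UIs: render a short-lived slider per
# parameter found in LLM-generated code. Not BISCUIT's code, just the pattern.
import ipywidgets as widgets
from IPython.display import display

def ephemeral_ui(params: dict[str, tuple[int, int, int]]):
    """Render a slider per (value, min, max) parameter and return the widgets."""
    sliders = {
        name: widgets.IntSlider(value=v, min=lo, max=hi, description=name)
        for name, (v, lo, hi) in params.items()
    }
    for slider in sliders.values():
        display(slider)
    return sliders  # read .value from each slider after the user adjusts it

# e.g. parameters an LLM might have hard-coded into generated plotting code
controls = ephemeral_ui({"bins": (10, 1, 100), "figsize": (6, 2, 20)})
```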
arXiv Detail & Related papers (2024-04-10T23:28:09Z)
- LLMCheckup: Conversational Examination of Large Language Models via Interpretability Tools and Self-Explanations [26.340786701393768]
Interpretability tools that offer explanations in the form of a dialogue have demonstrated their efficacy in enhancing users' understanding.
Current solutions for dialogue-based explanations, however, often require external tools and modules and are not easily transferable to tasks they were not designed for.
We present an easily accessible tool that allows users to chat with any state-of-the-art large language model (LLM) about its behavior.
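A minimal version of the self-explanation pattern underlying such tools asks the same model to justify its own prediction in a follow-up turn. The prompts and function below are illustrative assumptions, not LLMCheckup's pipeline:

```python
# Sketch of self-explanation: predict, then ask the model to explain itself.
from openai import OpenAI

client = OpenAI()

def predict_then_explain(text: str) -> tuple[str, str]:
    history = [{"role": "user",
                "content": f"Classify the sentiment of: {text!r}. Answer in one word."}]
    label = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=history).choices[0].message.content
    # Follow-up turn: the model explains its own prediction.
    history += [{"role": "assistant", "content": label},
                {"role": "user",
                 "content": "Which words most influenced your answer, and why?"}]
    rationale = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=history).choices[0].message.content
    return label, rationale
```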
arXiv Detail & Related papers (2024-01-23T09:11:07Z)
- Code Prompting Elicits Conditional Reasoning Abilities in Text+Code LLMs [65.2379940117181]
We introduce code prompting, a chain of prompts that transforms a natural language problem into code.
We find that code prompting yields a substantial performance boost for multiple LLMs.
Our analysis of GPT-3.5 reveals that the code formatting of the input problem is essential for the performance improvement.
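The core move of code prompting can be sketched as follows: facts and rules from the natural-language problem are rewritten as code whose structure mirrors the conditions, and the model is prompted with that code instead of prose. The template below is an illustrative assumption, not the paper's exact prompt:

```python
# Rough sketch of code prompting: encode a conditional-reasoning problem as
# code and use it as the model's input. Template wording is an assumption.
def to_code_prompt(facts: dict[str, bool], rule: str, question: str) -> str:
    fact_lines = "\n".join(f"{name} = {value}" for name, value in facts.items())
    return (
        "# Decide the answer by reasoning over this code.\n"
        f"{fact_lines}\n"
        f"# Rule: {rule}\n"
        f"# Question: {question}\n"
        "answer = ..."
    )

print(to_code_prompt(
    facts={"is_citizen": True, "is_over_18": False},
    rule="eligible_to_vote = is_citizen and is_over_18",
    question="Is the person eligible to vote?",
))
```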
arXiv Detail & Related papers (2024-01-18T15:32:24Z)
- INTERS: Unlocking the Power of Large Language Models in Search with Instruction Tuning [59.07490387145391]
Large language models (LLMs) have demonstrated impressive capabilities in various natural language processing tasks.
Their application to information retrieval (IR) tasks is still challenging due to the infrequent occurrence of many IR-specific concepts in natural language.
We introduce a novel instruction tuning dataset, INTERS, encompassing 20 tasks across three fundamental IR categories.
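An instruction-tuning sample for an IR task might look like the following illustration; the field names and wording are assumptions, not the released INTERS schema:

```python
# Illustrative sample in the style of an IR instruction-tuning dataset.
# Field names and content are assumptions, not INTERS's actual format.
sample = {
    "task": "query_rewriting",
    "instruction": "Rewrite the search query to be clearer and more specific.",
    "input": "jaguar speed",
    "output": "What is the top running speed of a jaguar (the animal)?",
}
```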
arXiv Detail & Related papers (2024-01-12T12:10:28Z)
- If LLM Is the Wizard, Then Code Is the Wand: A Survey on How Code Empowers Large Language Models to Serve as Intelligent Agents [81.60906807941188]
Large language models (LLMs) are trained on a combination of natural language and formal language (code).
Code translates high-level goals into executable steps, featuring standard syntax, logical consistency, abstraction, and modularity.
arXiv Detail & Related papers (2024-01-01T16:51:20Z)
- TaskWeaver: A Code-First Agent Framework [50.99683051759488]
TaskWeaver is a code-first framework for building LLM-powered autonomous agents.
It converts user requests into executable code and treats user-defined plugins as callable functions.
It provides support for rich data structures, flexible plugin usage, and dynamic plugin selection.
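A condensed sketch of the code-first pattern: user-defined plugins are exposed as plain callables, and the model's answer to a request is a code snippet that invokes them. The plugin, executor, and generated snippet below are illustrative, not TaskWeaver's implementation:

```python
# Hypothetical sketch of plugins-as-callable-functions: LLM-generated code is
# executed with only the registered plugins in scope.
def anomaly_detect(series: list[float]) -> list[int]:
    """Toy plugin: flag indices whose value exceeds twice the mean."""
    mean = sum(series) / len(series)
    return [i for i, x in enumerate(series) if x > 2 * mean]

PLUGINS = {"anomaly_detect": anomaly_detect}

def run_generated_code(code: str) -> dict:
    # Execute the snippet with an empty builtins table and only plugins visible.
    namespace = dict(PLUGINS)
    exec(code, {"__builtins__": {}}, namespace)
    return namespace

# Code an LLM might produce for "find anomalies in my readings":
generated = "result = anomaly_detect([1.0, 1.2, 9.5, 0.9])"
print(run_generated_code(generated)["result"])  # -> [2]
```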
arXiv Detail & Related papers (2023-11-29T11:23:42Z)
- ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs [104.37772295581088]
Open-source large language models (LLMs), e.g., LLaMA, remain significantly limited in tool-use capabilities.
We introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation.
We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT.
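The automatic-construction step can be sketched as follows: an LLM is shown an API description and asked to invent a realistic user request that the API could serve. This mirrors the idea only; the prompt and helper below are assumptions, not ToolBench's pipeline:

```python
# Illustrative sketch of constructing tool-use instruction data with an LLM.
from openai import OpenAI

client = OpenAI()

def synthesize_instruction(api_name: str, api_doc: str) -> dict:
    prompt = (f"API `{api_name}`: {api_doc}\n"
              "Write one realistic user request this API could fulfil.")
    request = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    return {"api": api_name, "instruction": request}

print(synthesize_instruction(
    "weather.lookup", "Returns current weather for a city name."))
```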
arXiv Detail & Related papers (2023-07-31T15:56:53Z)
- Low-code LLM: Graphical User Interface over Large Language Models [115.08718239772107]
This paper introduces a novel human-LLM interaction framework, Low-code LLM.
It incorporates six types of simple low-code visual programming interactions to achieve more controllable and stable responses.
We highlight three advantages of the low-code LLM: user-friendly interaction, controllable generation, and wide applicability.
arXiv Detail & Related papers (2023-04-17T09:27:40Z)
- Repository-Level Prompt Generation for Large Language Models of Code [28.98699307030983]
We propose a framework that learns to generate example-specific prompts using prompt proposals.
The prompt proposals take context from the entire repository.
We conduct experiments on the task of single-line code-autocompletion using code repositories taken from Google Code archives.
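A simplified sketch of the prompt-proposal idea: several candidate contexts drawn from the repository are considered, and one fills the prompt for the completion model. The proposal set and the greedy selection rule below are illustrative stand-ins for the paper's learned selector:

```python
# Hypothetical sketch of repository-level prompt proposals for code completion.
from pathlib import Path

def proposals(repo: Path, current_file: Path) -> dict[str, str]:
    """Candidate contexts drawn from different parts of the repository."""
    siblings = [p for p in current_file.parent.glob("*.py") if p != current_file]
    readme = repo / "README.md"
    return {
        "current_file": current_file.read_text(),
        "sibling_file": siblings[0].read_text() if siblings else "",
        "readme": readme.read_text() if readme.exists() else "",
    }

def build_prompt(repo: Path, current_file: Path, prefix: str) -> str:
    candidates = proposals(repo, current_file)
    # Toy selection rule: prefer the longest non-empty candidate; the paper
    # instead learns which proposal to use per example.
    chosen = max(candidates.values(), key=len)
    return f"{chosen}\n\n{prefix}"
```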
arXiv Detail & Related papers (2022-06-26T10:51:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.