Interactions with Prompt Problems: A New Way to Teach Programming with
Large Language Models
- URL: http://arxiv.org/abs/2401.10759v1
- Date: Fri, 19 Jan 2024 15:32:46 GMT
- Title: Interactions with Prompt Problems: A New Way to Teach Programming with
Large Language Models
- Authors: James Prather, Paul Denny, Juho Leinonen, David H. Smith IV, Brent N.
Reeves, Stephen MacNeil, Brett A. Becker, Andrew Luxton-Reilly, Thezyrie
Amarouche, Bailey Kimmel
- Abstract summary: We propose a new way to teach programming with Prompt Problems.
Students receive a problem visually, indicating how input should be transformed to output, and must translate that into a prompt for an LLM to decipher.
The problem is considered solved when the code generated from the student's prompt passes all test cases.
- Score: 4.1599514827277355
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) have upended decades of pedagogy in computing
education. Students previously learned to code by writing solutions to many
small problems, with less emphasis on code reading and comprehension. Recent
research has shown that free code generation tools powered by LLMs can solve
introductory programming problems presented in natural language with ease. In
this paper, we propose a new way to teach programming with Prompt Problems.
Students receive a problem visually, indicating how input should be transformed
to output, and must translate that into a prompt for an LLM to decipher. The
problem is considered solved when the code generated from the student's prompt
passes all test cases. We present the design of this tool, discuss student
interactions with it as they learn, and provide insights into this new class of
programming problems as well as the design of tools that integrate LLMs.
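The grading loop the abstract describes (student prompt, LLM, generated code, test cases) is simple to sketch. The following is a minimal illustration, not the paper's actual implementation: `generate_code` is a hypothetical wrapper around whatever LLM endpoint the tool uses, and the assumption that generated code defines a function named `solution` is ours.

```python
def generate_code(prompt: str) -> str:
    """Send the student's prompt to an LLM and return the generated code."""
    raise NotImplementedError("wire this to an LLM endpoint")


def grade_prompt(student_prompt: str, test_cases) -> bool:
    """A prompt is 'correct' when the code it elicits passes every test case."""
    code = generate_code(student_prompt)
    namespace = {}
    exec(code, namespace)  # run the generated code to define the function
    solution = namespace.get("solution")
    if solution is None:
        return False  # generated code did not define the expected function
    for args, expected in test_cases:
        try:
            if solution(*args) != expected:
                return False
        except Exception:
            return False
    return True


# Example: a visual problem showing [1, 2, 3] -> [2, 4, 6] might be tested as
# grade_prompt("Write a function solution(xs) that doubles every element.",
#              [(([1, 2, 3],), [2, 4, 6]), (([0],), [0])])
```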
Related papers
- Lean Workbook: A large-scale Lean problem set formalized from natural language math problems [50.22847430754973]
Large language models struggle to prove mathematical theorems in formal languages such as Lean.
A significant challenge in this area is the scarcity of training data available in these formal languages.
We propose a novel pipeline that iteratively generates and filters synthetic data to translate natural language mathematical problems into Lean 4 statements.
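As a toy illustration of the target format (our own example, not drawn from the Lean Workbook dataset), a natural language problem such as "show that the sum of two even natural numbers is even" might be formalized as the Lean 4 statement below; since the pipeline produces statements, the proof is left as `sorry`.

```lean
import Mathlib

-- Natural language problem: "Show that the sum of two even natural
-- numbers is even," rendered as a Lean 4 statement.
theorem even_add_even (a b : ℕ) (ha : ∃ k, a = 2 * k) (hb : ∃ k, b = 2 * k) :
    ∃ k, a + b = 2 * k := by
  sorry -- the pipeline formalizes statements; proving them is a separate task
```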
arXiv Detail & Related papers (2024-06-06T08:25:43Z)
- Let's Ask AI About Their Programs: Exploring ChatGPT's Answers To Program Comprehension Questions [2.377308748205625]
We explore the capability of state-of-the-art LLMs in answering questions about learners' code (QLCs) that are generated from code the LLMs themselves have created.
Our results show that although state-of-the-art LLMs can create programs and trace program execution when prompted, they easily succumb to errors similar to those previously recorded for novice programmers.
arXiv Detail & Related papers (2024-04-17T20:37:00Z)
- CS1-LLM: Integrating LLMs into CS1 Instruction [0.6282171844772422]
This experience report describes a CS1 course at a large research-intensive university that fully embraces the use of Large Language Models.
To incorporate the LLMs, the course was intentionally altered to reduce emphasis on syntax and writing code from scratch.
Students were given three large, open-ended projects in three separate domains that allowed them to showcase their creativity.
arXiv Detail & Related papers (2024-04-17T14:44:28Z)
- Do Language Models Exhibit the Same Cognitive Biases in Problem Solving as Human Learners? [140.9751389452011]
We study whether large language models (LLMs) exhibit the same biases as those known in children when solving arithmetic word problems.
We generate a novel set of word problems for each of these tests, using a neuro-symbolic approach that enables fine-grained control over the problem features.
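For intuition only, a controllable generator might look like the sketch below; the paper's neuro-symbolic pipeline is more sophisticated, and the templates and feature knobs here (operand size, operation) are our own illustration.

```python
import random

def make_word_problem(max_operand: int, op: str) -> tuple[str, int]:
    """Generate a word problem with controllable features: operand size and operation."""
    a, b = random.randint(1, max_operand), random.randint(1, max_operand)
    if op == "add":
        return f"Sam has {a} apples and gets {b} more. How many apples now?", a + b
    if op == "sub":
        a, b = max(a, b), min(a, b)  # keep the answer non-negative
        return f"Sam has {a} apples and gives {b} away. How many are left?", a - b
    raise ValueError(f"unsupported operation: {op}")

text, answer = make_word_problem(max_operand=20, op="add")
```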
arXiv Detail & Related papers (2024-01-31T18:48:20Z)
- Knowledge-Aware Code Generation with Large Language Models [34.806454393643236]
Large Language Models (LLMs) perform well on basic programming problems.
However, they encounter challenges when dealing with complex tasks involving the use of diverse algorithmic and data structure skills.
We develop a Knowledge Library tailored for Python programming contest problems and introduce the concept of Knowledge-Aware Code Generation.
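Under our reading of the abstract, the core move is to pair a problem with relevant algorithmic notes before asking the LLM for code. A minimal sketch follows; the library entries, tags, and prompt format are illustrative assumptions, not the paper's actual Knowledge Library.

```python
# Hypothetical "knowledge library" mapping technique tags to short notes.
KNOWLEDGE_LIBRARY = {
    "binary search": "Halve a sorted search range each step; O(log n).",
    "two pointers": "Walk two indices over a sequence to avoid nested loops.",
    "dynamic programming": "Cache subproblem answers; define state and transition.",
}

def build_prompt(problem: str, tags: list[str]) -> str:
    """Prepend relevant algorithmic notes to the problem before code generation."""
    notes = "\n".join(f"- {t}: {KNOWLEDGE_LIBRARY[t]}" for t in tags if t in KNOWLEDGE_LIBRARY)
    return f"Relevant techniques:\n{notes}\n\nProblem:\n{problem}\n\nWrite a Python solution."

print(build_prompt("Find the first element >= x in a sorted list.", ["binary search"]))
```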
arXiv Detail & Related papers (2024-01-29T08:01:22Z)
- Code Prompting Elicits Conditional Reasoning Abilities in Text+Code LLMs [65.2379940117181]
We introduce code prompting, a chain of prompts that transforms a natural language problem into code.
We find that code prompting yields a large performance boost for multiple LLMs.
Our analysis of GPT-3.5 reveals that the code formatting of the input problem is essential for the performance improvement.
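To make the idea concrete (our own toy rendering, under one reading of the abstract): a conditional reasoning question is rewritten as code whose branches mirror the stated conditions, and the model is prompted with that code as text rather than with the prose.

```python
# Natural language problem:
#   "You can claim the benefit if you are over 65 or you are a veteran.
#    Alex is 42 and a veteran. Can Alex claim the benefit?"
#
# The same problem rendered as code. The snippet is shown to the LLM as
# text for it to reason over; executing it is optional.

def can_claim_benefit(age: int, is_veteran: bool) -> bool:
    # Eligible if over 65 OR a veteran.
    if age > 65:
        return True
    if is_veteran:
        return True
    return False

alex_can_claim = can_claim_benefit(age=42, is_veteran=True)  # expected: True
```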
arXiv Detail & Related papers (2024-01-18T15:32:24Z)
- If LLM Is the Wizard, Then Code Is the Wand: A Survey on How Code Empowers Large Language Models to Serve as Intelligent Agents [81.60906807941188]
Large language models (LLMs) are trained on a combination of natural language and formal language (code).
Code translates high-level goals into executable steps, featuring standard syntax, logical consistency, abstraction, and modularity.
arXiv Detail & Related papers (2024-01-01T16:51:20Z)
- Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators [5.458849730200646]
This paper introduces a novel pedagogical concept known as a 'Prompt Problem'.
A Prompt Problem challenges a student to create a natural language prompt that leads an LLM to produce the correct code for a specific problem.
We report empirical findings from a field study in which Promptly was deployed in a first-year Python programming course.
arXiv Detail & Related papers (2023-07-31T01:46:42Z)
- Low-code LLM: Graphical User Interface over Large Language Models [115.08718239772107]
This paper introduces a novel human-LLM interaction framework, Low-code LLM.
It incorporates six types of simple low-code visual programming interactions to achieve more controllable and stable responses.
We highlight three advantages of the low-code LLM: user-friendly interaction, controllable generation, and wide applicability.
arXiv Detail & Related papers (2023-04-17T09:27:40Z)
- Automatically Generating CS Learning Materials with Large Language Models [4.526618922750769]
Large Language Models (LLMs) enable software developers to generate code based on a natural language prompt.
LLMs may enable students to interact with code in new ways while helping instructors scale their learning materials.
LLMs also introduce new implications for academic integrity, curriculum design, and software engineering careers.
arXiv Detail & Related papers (2022-12-09T20:37:44Z)
- PAL: Program-aided Language Models [112.94785609781503]
We present Program-Aided Language models (PAL), in which an LLM reads a natural language problem and generates a program as the intermediate reasoning steps.
PAL offloads the solution step to a programmatic runtime such as a Python interpreter.
We set new state-of-the-art results in all 12 benchmarks.
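A minimal sketch of the division of labor: the LLM emits its reasoning as Python, and the interpreter, not the model, computes the final answer. The word problem below is a standard illustrative example; the exact PAL prompting format is not reproduced here.

```python
# A typical PAL-style generation: the model writes its reasoning as code.
llm_generated_program = """
# Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each.
#    How many tennis balls does he have now?
tennis_balls = 5
bought_balls = 2 * 3
answer = tennis_balls + bought_balls
"""

namespace = {}
exec(llm_generated_program, namespace)  # the interpreter, not the LLM, does the arithmetic
print(namespace["answer"])              # -> 11
```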
arXiv Detail & Related papers (2022-11-18T18:56:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.