Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators
- URL: http://arxiv.org/abs/2307.16364v1
- Date: Mon, 31 Jul 2023 01:46:42 GMT
- Title: Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators
- Authors: Paul Denny and Juho Leinonen and James Prather and Andrew Luxton-Reilly and Thezyrie Amarouche and Brett A. Becker and Brent N. Reeves
- Abstract summary: This paper introduces a novel pedagogical concept known as a 'Prompt Problem'.
A Prompt Problem challenges a student to create a natural language prompt that leads an LLM to produce the correct code for a specific problem.
We report empirical findings from a field study in which Promptly was deployed in a first-year Python programming course.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a 'Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice.
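
The abstract does not describe how Promptly automates evaluation internally. Purely as a hedged sketch of the kind of evaluation loop such a tool might use (the generate_code stub, the test-case format, and the sandboxing shortcut below are all assumptions, not details from the paper):

```python
# Hypothetical sketch of an automated evaluation loop for Prompt Problems:
# a student's prompt is sent to an LLM, and the generated program is run
# against instructor-supplied (stdin, expected stdout) test cases.
import contextlib
import io


def generate_code(prompt: str) -> str:
    """Placeholder for a call to a code-generating LLM; assumed to
    return a string containing only Python source."""
    raise NotImplementedError


def evaluate_prompt(student_prompt: str,
                    test_cases: list[tuple[str, str]]) -> bool:
    """Return True if the generated program passes every test case."""
    source = generate_code(student_prompt)
    for stdin_text, expected in test_cases:
        fake_in = io.StringIO(stdin_text)
        fake_out = io.StringIO()
        # Shadow input() and capture print() so the generated program
        # can be exercised like a console script.
        sandbox = {"input": lambda _="": fake_in.readline().rstrip("\n")}
        with contextlib.redirect_stdout(fake_out):
            exec(source, sandbox)  # a real tool would isolate this properly
        if fake_out.getvalue().strip() != expected.strip():
            return False
    return True
```

In the deployed tool, a loop like this would sit behind an interactive front end that shows students the generated code and the test outcomes, so they can revise their prompt and try again.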
Related papers
- BloomWise: Enhancing Problem-Solving Capabilities of Large Language Models Using Bloom's-Taxonomy-Inspired Prompts (2024-10-05)
We introduce BloomWise, a new prompting technique inspired by Bloom's taxonomy, to improve the performance of large language models (LLMs).
The decision to employ more sophisticated cognitive skills is based on self-evaluation performed by the LLM.
In extensive experiments across four popular math reasoning datasets, we demonstrate the effectiveness of the proposed approach.
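The summary above only names the mechanism; a minimal sketch of what such a self-evaluating cascade could look like (the ask_llm stub, the level wording, and the YES/NO self-check are illustrative assumptions, not the authors' implementation):

```python
# Illustrative sketch of a Bloom's-taxonomy-inspired prompting cascade:
# try progressively more sophisticated cognitive skills, letting the
# model's own self-evaluation decide when to stop escalating.
BLOOM_LEVELS = ["remember", "understand", "apply",
                "analyze", "evaluate", "create"]


def ask_llm(prompt: str) -> str:
    """Placeholder for a chat-completion call to an LLM."""
    raise NotImplementedError


def solve_with_bloom(question: str) -> str:
    answer = ""
    for level in BLOOM_LEVELS:
        answer = ask_llm(
            f"Apply the '{level}' skill from Bloom's taxonomy to solve:\n"
            f"{question}"
        )
        # Self-evaluation step: escalate to the next level only if the
        # model does not judge its own answer to be correct.
        verdict = ask_llm(
            f"Question: {question}\nProposed answer: {answer}\n"
            "Is the proposed answer correct? Reply YES or NO."
        )
        if verdict.strip().upper().startswith("YES"):
            break
    return answer
```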
- Integrating Natural Language Prompting Tasks in Introductory Programming Courses (2024-10-04)
This report explores the inclusion of two prompt-focused activities in an introductory programming course.
The first requires students to solve computational problems by writing natural language prompts, emphasizing problem-solving over syntax.
The second involves students crafting prompts to generate code equivalent to provided fragments, to foster an understanding of the relationship between prompts and code.
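The report does not say how "equivalent" code is judged; one simple behavioural check (sketched below with hypothetical names) compares the provided fragment and the prompt-generated code on sample inputs:

```python
# Sketch: treat prompt-generated code as equivalent to a provided
# fragment when both produce the same output on every sample input.
from typing import Any, Callable, Iterable


def behaviourally_equivalent(reference: Callable[[Any], Any],
                             candidate: Callable[[Any], Any],
                             inputs: Iterable[Any]) -> bool:
    """Return True if the two callables agree on all sample inputs."""
    return all(reference(x) == candidate(x) for x in inputs)


# Example: the provided fragment doubles a number; the student's prompt
# should lead the LLM to code with the same behaviour.
reference = lambda x: 2 * x   # provided fragment
candidate = lambda x: x + x   # stand-in for LLM-generated code
assert behaviourally_equivalent(reference, candidate, range(-5, 6))
```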
- Knowledge Tagging System on Math Questions via LLMs with Flexible Demonstration Retriever (2024-06-19)
Large Language Models (LLMs) are used to automate the knowledge tagging task.
We show strong zero- and few-shot performance on knowledge tagging tasks for math questions.
By proposing a reinforcement-learning-based demonstration retriever, we exploit the potential of LLMs of different sizes.
- Let's Ask AI About Their Programs: Exploring ChatGPT's Answers to Program Comprehension Questions (2024-04-17)
We explore the capability of state-of-the-art LLMs to answer questions about learners' code (QLCs) that are generated from code the LLMs themselves have created.
Our results show that although state-of-the-art LLMs can create programs and trace program execution when prompted, they easily succumb to errors similar to those previously recorded for novice programmers.
- Efficient Prompting Methods for Large Language Models: A Survey (2024-04-01)
Prompting has become a mainstream paradigm for adapting large language models (LLMs) to specific natural language processing tasks.
However, this approach brings an additional computational burden from model inference, as well as human effort to guide and control the behavior of LLMs.
We present the basic concepts of prompting, review the advances for efficient prompting, and highlight future research directions.
- Explaining Code with a Purpose: An Integrated Approach for Developing Code Comprehension and Prompting Skills (2024-03-10)
We propose using an LLM to generate code based on students' responses to "Explain in Plain English" (EiPE) questions.
We report student success in creating effective prompts for solving EiPE questions.
- CodeTailor: LLM-Powered Personalized Parsons Puzzles for Engaging Support While Learning Programming (2024-01-22)
Generative AI can create a solution for most intro-level programming problems.
Students might use these tools simply to generate code for them, resulting in reduced engagement and limited learning.
We present CodeTailor, a system that leverages a large language model (LLM) to provide personalized help to students.
- Interactions with Prompt Problems: A New Way to Teach Programming with Large Language Models (2024-01-19)
We propose a new way to teach programming with Prompt Problems.
Students receive a problem visually, indicating how input should be transformed to output, and must translate that to a prompt for an LLM to decipher.
A solution is considered correct when the code generated from the student's prompt passes all test cases.
- LMRL Gym: Benchmarks for Multi-Turn Reinforcement Learning with Language Models (2023-11-30)
This paper introduces the LMRL-Gym benchmark for evaluating multi-turn RL for large language models (LLMs).
Our benchmark consists of 8 different language tasks, which require multiple rounds of language interaction and cover a range of tasks in open-ended dialogue and text games.
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback (2021-07-23)
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
- Dive into Deep Learning (2021-06-21)
The book is drafted in Jupyter notebooks, seamlessly integrating exposition, figures, math, and interactive examples with self-contained code.
Our goal is to offer a resource that could (i) be freely available for everyone; (ii) offer sufficient technical depth to provide a starting point on the path to becoming an applied machine learning scientist; (iii) include runnable code, showing readers how to solve problems in practice; (iv) allow for rapid updates, both by us and also by the community at large.