Explaining Code with a Purpose: An Integrated Approach for Developing
Code Comprehension and Prompting Skills
- URL: http://arxiv.org/abs/2403.06050v1
- Date: Sun, 10 Mar 2024 00:23:08 GMT
- Title: Explaining Code with a Purpose: An Integrated Approach for Developing
Code Comprehension and Prompting Skills
- Authors: Paul Denny and David H. Smith IV and Max Fowler and James Prather and
Brett A. Becker and Juho Leinonen
- Abstract summary: We propose using an LLM to generate code based on students' responses to EiPE questions.
We report student success in creating effective prompts for solving EiPE questions.
- Score: 4.776920192249936
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reading, understanding and explaining code have traditionally been important
skills for novices learning programming. As large language models (LLMs) become
prevalent, these foundational skills are more important than ever given the
increasing need to understand and evaluate model-generated code. Brand new
skills are also needed, such as the ability to formulate clear prompts that can
elicit intended code from an LLM. Thus, there is great interest in integrating
pedagogical approaches for the development of both traditional coding
competencies and the novel skills required to interact with LLMs. One effective
way to develop and assess code comprehension ability is with ``Explain in plain
English'' (EiPE) questions, where students succinctly explain the purpose of a
fragment of code. However, grading EiPE questions has always been difficult
given the subjective nature of evaluating written explanations and this has
stifled their uptake. In this paper, we explore a natural synergy between EiPE
questions and code-generating LLMs to overcome this limitation. We propose
using an LLM to generate code based on students' responses to EiPE questions --
not only enabling EiPE responses to be assessed automatically, but helping
students develop essential code comprehension and prompt crafting skills in
parallel. We investigate this idea in an introductory programming course and
report student success in creating effective prompts for solving EiPE
questions. We also examine student perceptions of this activity and how it
influences their views on the use of LLMs for aiding and assessing learning.
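The mechanism the abstract describes can be pictured in a few lines of Python. The following is a minimal sketch, not the authors' implementation: the `llm` callable stands in for any code-generating model, and the prompt wording, function names, and test format are assumptions invented for illustration (the CGBG paper in the related list below evaluates an auto-grading mechanism of this kind).

from typing import Callable, List, Tuple

def grade_eipe_response(
    explanation: str,
    test_cases: List[Tuple[tuple, object]],
    llm: Callable[[str], str],
    func_name: str = "solution",
) -> bool:
    """Return True if code generated from the student's EiPE explanation
    passes every test case derived from the original code fragment."""
    prompt = (
        f"Write a Python function named `{func_name}` that does the "
        f"following:\n{explanation}\nReturn only the code."
    )
    generated = llm(prompt)
    namespace: dict = {}
    try:
        # In a real deployment the generated code would be sandboxed.
        exec(generated, namespace)
        solution = namespace[func_name]
        return all(solution(*args) == expected
                   for args, expected in test_cases)
    except Exception:
        return False  # unrunnable or incorrect code fails the check

# Usage with a stand-in model: a good explanation of a summing fragment
# ("return the sum of the numbers in a list") should pass both tests.
def fake_llm(_prompt: str) -> str:
    return "def solution(nums):\n    return sum(nums)"

assert grade_eipe_response(
    "return the sum of the numbers in a list",
    [(([1, 2, 3],), 6), (([],), 0)],
    fake_llm,
)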
Related papers
- Exploring Knowledge Tracing in Tutor-Student Dialogues [53.52699766206808]
We present a first attempt at performing knowledge tracing (KT) in tutor-student dialogues.
We propose methods to identify the knowledge components/skills involved in each dialogue turn.
We then apply a range of KT methods on the resulting labeled data to track student knowledge levels over an entire dialogue.
arXiv Detail & Related papers (2024-09-24T22:31:39Z)
- What You Need is What You Get: Theory of Mind for an LLM-Based Code Understanding Assistant [0.0]
A growing number of tools have used Large Language Models (LLMs) to support developers' code understanding.
In this study, we designed an LLM-based conversational assistant that provides a personalized interaction based on inferred user mental state.
Our results provide insights for researchers and tool builders who want to create or improve LLM-based conversational assistants to support novices in code understanding.
arXiv Detail & Related papers (2024-08-08T14:08:15Z)
- Knowledge Tagging System on Math Questions via LLMs with Flexible Demonstration Retriever [48.5585921817745]
Large Language Models (LLMs) are used to automate the knowledge tagging task.
We show strong zero- and few-shot performance on math question knowledge tagging tasks.
By proposing a reinforcement learning-based demonstration retriever, we successfully exploit the potential of different-sized LLMs. (A minimal sketch of this few-shot tagging setup appears after this list.)
arXiv Detail & Related papers (2024-06-19T23:30:01Z)
- Automate Knowledge Concept Tagging on Math Questions with LLMs [48.5585921817745]
Knowledge concept tagging for questions plays a crucial role in contemporary intelligent educational applications.
Traditionally, these annotations have been conducted manually with help from pedagogical experts.
In this paper, we explore automating the tagging task using Large Language Models (LLMs).
arXiv Detail & Related papers (2024-03-26T00:09:38Z)
- A Knowledge-Injected Curriculum Pretraining Framework for Question Answering [70.13026036388794]
We propose a general Knowledge-Injected Curriculum Pretraining framework (KICP) to achieve comprehensive KG learning and exploitation for Knowledge-based question answering tasks.
The KI module first injects knowledge into the LM by generating a KG-centered pretraining corpus, and generalizes the process into three key steps.
The KA module learns knowledge from the generated corpus using an LM equipped with an adapter, while keeping the LM's original natural language understanding ability.
The CR module follows human reasoning patterns to construct three corpora with increasing difficulties of reasoning, and further trains the LM from easy to hard in a curriculum manner.
arXiv Detail & Related papers (2024-03-11T03:42:03Z)
- When LLMs Meet Cunning Texts: A Fallacy Understanding Benchmark for Large Language Models [59.84769254832941]
We propose a FaLlacy Understanding Benchmark (FLUB) containing cunning texts that are easy for humans to understand but difficult for models to grasp.
Specifically, the cunning texts that FLUB focuses on mainly consist of tricky, humorous, and misleading texts collected from real internet environments.
Based on FLUB, we investigate the performance of multiple representative and advanced LLMs.
arXiv Detail & Related papers (2024-02-16T22:12:53Z)
- Code Generation Based Grading: Evaluating an Auto-grading Mechanism for "Explain-in-Plain-English" Questions [0.0]
"Code Generation Based Grading" (CGBG) achieves moderate agreement with human graders.
CGBG achieves moderate agreement with human graders with respect to low-level and line-by-line descriptions of code.
arXiv Detail & Related papers (2023-11-25T02:45:00Z)
- Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators [5.458849730200646]
This paper introduces a novel pedagogical concept known as a ``Prompt Problem''.
A Prompt Problem challenges a student to create a natural language prompt that leads an LLM to produce the correct code for a specific problem.
We report empirical findings from a field study in which Promptly was deployed in a first-year Python programming course.
arXiv Detail & Related papers (2023-07-31T01:46:42Z)
- Knowledgeable Salient Span Mask for Enhancing Language Models as Knowledge Base [51.55027623439027]
We develop two solutions to help the model learn more knowledge from unstructured text in a fully self-supervised manner.
To the best of our knowledge, we are the first to explore fully self-supervised learning of knowledge in continual pre-training.
arXiv Detail & Related papers (2022-04-17T12:33:34Z)
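As referenced in the knowledge tagging entries above, the following is a minimal sketch of few-shot knowledge tagging with an LLM. The tag set, demonstrations, and `llm` callable are invented for illustration, and the fixed demonstrations stand in for the paper's reinforcement learning-based demonstration retriever.

from typing import Callable, List, Tuple

# Fixed demonstrations; the paper instead selects these per question
# with a learned (reinforcement learning-based) demonstration retriever.
DEMONSTRATIONS: List[Tuple[str, str]] = [
    ("Solve for x: 2x + 3 = 7.", "linear_equations"),
    ("What is the area of a circle with radius 5?", "circle_geometry"),
]

def tag_question(question: str, tags: List[str],
                 llm: Callable[[str], str]) -> str:
    """Prompt the LLM to choose one knowledge tag, guided by few-shot
    demonstrations; omit DEMONSTRATIONS for the zero-shot variant."""
    shots = "\n\n".join(f"Question: {q}\nTag: {t}"
                        for q, t in DEMONSTRATIONS)
    prompt = (
        f"Tag each math question with exactly one of: {', '.join(tags)}\n\n"
        f"{shots}\n\nQuestion: {question}\nTag:"
    )
    return llm(prompt).strip()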