Next-Step Hint Generation for Introductory Programming Using Large
Language Models
- URL: http://arxiv.org/abs/2312.10055v1
- Date: Sun, 3 Dec 2023 17:51:07 GMT
- Title: Next-Step Hint Generation for Introductory Programming Using Large
Language Models
- Authors: Lianne Roest, Hieke Keuning, Johan Jeuring
- Abstract summary: Large Language Models possess skills such as answering questions, writing essays or solving programming exercises.
This work explores how LLMs can contribute to programming education by supporting students with automated next-step hints.
- Score: 0.8002196839441036
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Large Language Models possess skills such as answering questions, writing
essays or solving programming exercises. Since these models are easily
accessible, researchers have investigated their capabilities and risks for
programming education. This work explores how LLMs can contribute to
programming education by supporting students with automated next-step hints. We
investigate prompt practices that lead to effective next-step hints and use
these insights to build our StAP-tutor. We evaluate this tutor by conducting an
experiment with students, and performing expert assessments. Our findings show
that most LLM-generated feedback messages describe one specific next step and
are personalised to the student's code and approach. However, the hints may
contain misleading information and lack sufficient detail when students
approach the end of the assignment. This work demonstrates the potential for
LLM-generated feedback, but further research is required to explore its
practical implementation.
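
As a rough illustration of the prompting approach the abstract describes, the sketch below asks a chat-based LLM for a single next-step hint given an exercise description and the student's current code. The model name, prompt wording, and helper function are assumptions made for illustration only; they are not the StAP-tutor's actual prompts or implementation.

    # Minimal sketch (not the paper's StAP-tutor): prompt a chat LLM for one
    # next-step hint, given an exercise and the student's current code.
    # Model name and prompt wording are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    def next_step_hint(exercise: str, student_code: str,
                       model: str = "gpt-3.5-turbo") -> str:
        """Request one short, concrete next-step hint without revealing the solution."""
        system = (
            "You are a programming tutor for novices. "
            "Give exactly one short next-step hint for the student's code. "
            "Do not reveal the full solution and do not rewrite the whole program."
        )
        user = (
            f"Exercise:\n{exercise}\n\n"
            f"Student's current code:\n{student_code}\n\n"
            "What should the student do next?"
        )
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "system", "content": system},
                      {"role": "user", "content": user}],
            temperature=0.3,  # keep hints focused rather than creative
        )
        return resp.choices[0].message.content.strip()

    if __name__ == "__main__":
        print(next_step_hint(
            "Write a function mean(xs) that returns the average of a list of numbers.",
            "def mean(xs):\n    total = 0\n    for x in xs:\n        total += x\n",
        ))

In the same spirit as the paper's findings, the system prompt constrains the model to a single step and forbids full solutions, since hints that rewrite the whole program stop being next-step hints.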
Related papers
- Exploring Knowledge Tracing in Tutor-Student Dialogues [53.52699766206808]
We present a first attempt at performing knowledge tracing (KT) in tutor-student dialogues.
We propose methods to identify the knowledge components/skills involved in each dialogue turn.
We then apply a range of KT methods on the resulting labeled data to track student knowledge levels over an entire dialogue.
arXiv Detail & Related papers (2024-09-24T22:31:39Z)
- Let's Ask AI About Their Programs: Exploring ChatGPT's Answers To Program Comprehension Questions [2.377308748205625]
We explore the capability of the state-of-the-art LLMs in answering QLCs that are generated from code that the LLMs have created.
Our results show that although the state-of-the-art LLMs can create programs and trace program execution when prompted, they easily succumb to similar errors that have previously been recorded for novice programmers.
arXiv Detail & Related papers (2024-04-17T20:37:00Z)
- Analyzing LLM Usage in an Advanced Computing Class in India [4.580708389528142]
This study examines the use of large language models (LLMs) by undergraduate and graduate students for programming assignments in advanced computing classes.
We conducted a comprehensive analysis involving 411 students from a Distributed Systems class at an Indian university.
arXiv Detail & Related papers (2024-04-06T12:06:56Z)
- Exploring How Multiple Levels of GPT-Generated Programming Hints Support or Disappoint Novices [0.0]
We investigated whether different levels of hints can support students' problem-solving and learning.
We conducted a think-aloud study with 12 novices using the LLM Hint Factory.
We found that high-level natural language hints alone can be unhelpful or even misleading.
arXiv Detail & Related papers (2024-04-02T18:05:26Z)
- INTERS: Unlocking the Power of Large Language Models in Search with Instruction Tuning [59.07490387145391]
Large language models (LLMs) have demonstrated impressive capabilities in various natural language processing tasks.
Their application to information retrieval (IR) tasks is still challenging due to the infrequent occurrence of many IR-specific concepts in natural language.
We introduce a novel instruction tuning dataset, INTERS, encompassing 20 tasks across three fundamental IR categories.
arXiv Detail & Related papers (2024-01-12T12:10:28Z)
- Exploring the Potential of Large Language Models in Generating Code-Tracing Questions for Introductory Programming Courses [6.43363776610849]
Large language models (LLMs) can be used to generate code-tracing questions in programming courses.
We present a dataset of human and LLM-generated tracing questions, serving as a valuable resource for both the education and NLP research communities.
arXiv Detail & Related papers (2023-10-23T19:35:01Z)
- Exploring the Potential of Large Language Models to Generate Formative Programming Feedback [0.5371337604556311]
We explore the potential of large language models (LLMs) for computing educators and learners.
To this end, we used students' programming sequences from a dataset gathered within a CS1 course as input for ChatGPT.
Results show that ChatGPT performs reasonably well for some of the introductory programming tasks and student errors.
However, educators should provide guidance on how to use the provided feedback, as it can contain misleading information for novices.
arXiv Detail & Related papers (2023-08-31T15:22:11Z)
- Instruction Tuning for Large Language Models: A Survey [52.86322823501338]
We make a systematic review of the literature, including the general methodology of IT, the construction of IT datasets, the training of IT models, and applications to different modalities, domains and applications.
We also review the potential pitfalls of IT and criticism against it, point out current deficiencies of existing strategies, and suggest avenues for fruitful research.
arXiv Detail & Related papers (2023-08-21T15:35:16Z)
- Aligning Large Language Models with Human: A Survey [53.6014921995006]
Large Language Models (LLMs) trained on extensive textual corpora have emerged as leading solutions for a broad array of Natural Language Processing (NLP) tasks.
Despite their notable performance, these models are prone to certain limitations such as misunderstanding human instructions, generating potentially biased content, or producing factually incorrect information.
This survey presents a comprehensive overview of technologies for aligning LLMs with human expectations.
arXiv Detail & Related papers (2023-07-24T17:44:58Z)
- Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations [63.19448893196642]
We present a novel framework that harnesses the rich contextual information and semantic representations provided by large language models to analyze behavior graphs.
By leveraging this capability, our framework enables personalized and accurate job recommendations for individual users.
arXiv Detail & Related papers (2023-07-10T11:29:41Z)
- Multi-Task Instruction Tuning of LLaMa for Specific Scenarios: A Preliminary Study on Writing Assistance [60.40541387785977]
Small foundational models can display remarkable proficiency in tackling diverse tasks when fine-tuned using instruction-driven data.
In this work, we investigate a practical problem setting where the primary focus is on one or a few particular tasks rather than general-purpose instruction following.
Experimental results show that fine-tuning LLaMA on writing instruction data significantly improves its ability on writing tasks.
arXiv Detail & Related papers (2023-05-22T16:56:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.