Explaining Code Examples in Introductory Programming Courses: LLM vs
Humans
- URL: http://arxiv.org/abs/2403.05538v2
- Date: Tue, 12 Mar 2024 02:06:25 GMT
- Title: Explaining Code Examples in Introductory Programming Courses: LLM vs
Humans
- Authors: Arun-Balajiee Lekshmi-Narayanan, Priti Oli, Jeevan Chapagain, Mohammad
Hassany, Rabin Banjade, Peter Brusilovsky, Vasile Rus
- Abstract summary: We assess the feasibility of using LLMs to generate code explanations for passive and active example exploration systems.
To achieve this goal, we compare the code explanations generated by ChatGPT with the explanations generated by both experts and students.
- Score: 1.6431142588286851
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Worked examples, which present explained code for solving typical
programming problems, are among the most popular types of learning content in
programming classes. Most approaches and tools for presenting these examples to
students are based on line-by-line explanations of the example code. However,
instructors rarely have time to provide explanations for many examples
typically used in a programming class. In this paper, we assess the feasibility
of using LLMs to generate code explanations for passive and active example
exploration systems. To achieve this goal, we compare the code explanations
generated by ChatGPT with the explanations generated by both experts and
students.
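As a rough sketch of the workflow the abstract describes, the snippet below asks an OpenAI chat model for line-by-line explanations of a short worked example. The model name, prompt wording, and example code are assumptions for illustration; they are not the prompts or examples evaluated in the paper.

```python
# Minimal sketch: asking an LLM for line-by-line explanations of a worked example.
# The model name, prompt wording, and code snippet are illustrative assumptions,
# not the exact setup evaluated in the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

worked_example = """\
total = 0
for grade in grades:
    total += grade
average = total / len(grades)
print(average)"""

prompt = (
    "You are helping novice programming students. "
    "Explain the following Python code line by line, one short sentence per line, "
    "in language suitable for an introductory course:\n\n" + worked_example
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; the paper reports results with ChatGPT
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,
)

print(response.choices[0].message.content)
```

The low temperature is a design choice for explanation generation, where consistent, factual line-by-line descriptions matter more than varied phrasing.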
Related papers
- Comments as Natural Logic Pivots: Improve Code Generation via Comment Perspective [85.48043537327258]
We propose MANGO (comMents As Natural loGic pivOts), including a comment contrastive training strategy and a corresponding logical comment decoding strategy.
Results indicate that MANGO significantly improves the code pass rate over strong baselines.
The robustness of the logical comment decoding strategy is notably higher than that of Chain-of-Thought prompting.
arXiv Detail & Related papers (2024-04-11T08:30:46Z) - Human-AI Co-Creation of Worked Examples for Programming Classes [1.5663705658818543]
We introduce an authoring system for creating Java worked examples that generates a starting version of code explanations.
We also present a study that assesses the quality of explanations created with this approach.
arXiv Detail & Related papers (2024-02-26T01:44:24Z) - Authoring Worked Examples for Java Programming with Human-AI
Collaboration [1.5663705658818543]
We introduce an authoring system for creating Java worked examples that generates a starting version of code explanations.
We also present a study that assesses the quality of explanations created with this approach.
arXiv Detail & Related papers (2023-12-04T18:32:55Z) - Code Generation Based Grading: Evaluating an Auto-grading Mechanism for
"Explain-in-Plain-English" Questions [0.0]
"Code Generation Based Grading" (CGBG) achieves moderate agreement with human graders.
CGBG achieves moderate agreement with human graders with respect to low-level and line-by-line descriptions of code.
arXiv Detail & Related papers (2023-11-25T02:45:00Z) - Enabling Large Language Models to Learn from Rules [99.16680531261987]
We are inspired by the fact that humans can also learn new tasks or knowledge by learning from rules.
We propose rule distillation, which first uses the strong in-context abilities of LLMs to extract knowledge from textual rules.
Our experiments show that making LLMs learn from rules with our method is much more efficient than example-based learning in terms of both sample size and generalization ability.
arXiv Detail & Related papers (2023-11-15T11:42:41Z) - Retrieval-Augmented Code Generation for Universal Information Extraction [66.68673051922497]
Information Extraction aims to extract structural knowledge from natural language texts.
We propose a universal retrieval-augmented code generation framework based on Large Language Models (LLMs).
Code4UIE adopts Python classes to define task-specific schemas of various structural knowledge in a universal way (a minimal schema-as-class sketch appears after this list).
arXiv Detail & Related papers (2023-11-06T09:03:21Z) - The Behavior of Large Language Models When Prompted to Generate Code
Explanations [0.3293989832773954]
This paper systematically investigates the generation of code explanations by Large Language Models (LLMs).
Our findings reveal significant variations in the nature of code explanations produced by LLMs, influenced by factors such as the wording of the prompt.
A consistent pattern emerges for Java and Python, where explanations exhibit a Flesch-Kincaid readability level of approximately grade 7-8 (see the readability sketch after this list).
arXiv Detail & Related papers (2023-11-02T17:14:38Z) - An In-Context Schema Understanding Method for Knowledge Base Question
Answering [70.87993081445127]
Large Language Models (LLMs) have shown strong capabilities in language understanding and can be used to solve knowledge base question answering.
Existing methods bypass the challenge of schema understanding by initially employing LLMs to generate drafts of logic forms without schema-specific details.
We propose a simple In-Context Understanding (ICSU) method that enables LLMs to directly understand schemas by leveraging in-context learning.
arXiv Detail & Related papers (2023-10-22T04:19:17Z) - From Language Modeling to Instruction Following: Understanding the Behavior Shift in LLMs after Instruction Tuning [63.63840740526497]
We investigate how instruction tuning adjusts pre-trained models with a focus on intrinsic changes.
The impact of instruction tuning is then studied by comparing the explanations derived from the pre-trained and instruction-tuned models.
Our findings reveal three significant impacts of instruction tuning.
arXiv Detail & Related papers (2023-09-30T21:16:05Z) - Comparing Code Explanations Created by Students and Large Language
Models [4.526618922750769]
Reasoning about code and explaining its purpose are fundamental skills for computer scientists.
The ability to describe, at a high level of abstraction, how code will behave over all possible inputs correlates strongly with code writing skills.
Existing pedagogical approaches that scaffold the ability to explain code, such as producing code explanations on demand, do not currently scale well to large classrooms.
arXiv Detail & Related papers (2023-04-08T06:52:54Z) - Evaluating Explanations: How much do explanations from the teacher aid
students? [103.05037537415811]
We formalize the value of explanations using a student-teacher paradigm that measures the extent to which explanations improve student models in learning.
Unlike many prior proposals to evaluate explanations, our approach cannot be easily gamed, enabling principled, scalable, and automatic evaluation of attributions.
arXiv Detail & Related papers (2020-12-01T23:40:21Z)