Human-AI Co-Creation of Worked Examples for Programming Classes
- URL: http://arxiv.org/abs/2402.16235v2
- Date: Thu, 29 Feb 2024 05:22:01 GMT
- Title: Human-AI Co-Creation of Worked Examples for Programming Classes
- Authors: Mohammad Hassany, Peter Brusilovsky, Jiaze Ke, Kamil Akhuseyinoglu and
Arun Balajiee Lekshmi Narayanan
- Abstract summary: We introduce an authoring system for creating Java worked examples that generates a starting version of code explanations.
We also present a study that assesses the quality of explanations created with this approach.
- Score: 1.5663705658818543
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Worked examples (solutions to typical programming problems, presented as
source code in a certain language and used to explain topics from a
programming class) are among the most popular types of learning content in
programming classes. Most approaches and tools for presenting these examples to
students are based on line-by-line explanations of the example code. However,
instructors rarely have time to provide line-by-line explanations for a large
number of examples typically used in a programming class. In this paper, we
explore and assess a human-AI collaboration approach to authoring worked
examples for Java programming. We introduce an authoring system for creating
Java worked examples that generates a starting version of code explanations and
presents it to the instructor to edit if necessary. We also present a study that
assesses the quality of explanations created with this approach.
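
The abstract does not describe the authoring pipeline in detail, but the workflow it outlines (an LLM drafts line-by-line explanations of a Java example, and the instructor edits the draft if necessary) can be sketched roughly as follows. This is a minimal illustration only: the OpenAI Python client, the model name, the prompt wording, and the sample Java program are assumptions, not the authors' implementation.

```python
# Minimal sketch of the "LLM drafts, instructor edits" workflow described above.
# Assumptions (not from the paper): the OpenAI Python client (openai>=1.0),
# the model name, the prompt wording, and the sample Java program.
from openai import OpenAI

JAVA_EXAMPLE = """\
public class Sum {
    public static void main(String[] args) {
        int total = 0;
        for (int i = 1; i <= 10; i++) {
            total += i;
        }
        System.out.println(total);
    }
}
"""

def draft_line_explanations(java_code: str) -> str:
    """Ask an LLM for a starting version of line-by-line explanations."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = (
        "Explain the following Java worked example to a novice programmer.\n"
        "Give one short, numbered explanation per line of code.\n\n" + java_code
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    draft = draft_line_explanations(JAVA_EXAMPLE)
    # The instructor reviews this starting version and edits lines as needed
    # before the worked example is released to students.
    print(draft)
```

The key design choice conveyed by the abstract is that the model output is only a starting version; the instructor stays in the loop to correct or refine each explanation before students see it.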
Related papers
- Exploring the Impact of Source Code Linearity on the Programmers Comprehension of API Code Examples [0.0]
We investigated whether the (a) linearity and (b) length of the source code in API code examples affect users' performance in terms of correctness and time spent.
We conducted an online controlled code comprehension experiment with 61 Java developers.
arXiv Detail & Related papers (2024-04-03T00:40:38Z)
- Explaining Code Examples in Introductory Programming Courses: LLM vs Humans [1.6431142588286851]
We assess the feasibility of using LLMs to generate code explanations for passive and active example exploration systems.
To achieve this goal, we compare the code explanations generated by ChatGPT with the explanations generated by both experts and students.
arXiv Detail & Related papers (2023-12-09T01:06:08Z)
- Authoring Worked Examples for Java Programming with Human-AI Collaboration [1.5663705658818543]
We introduce an authoring system for creating Java worked examples that generates a starting version of code explanations.
We also present a study that assesses the quality of explanations created with this approach.
arXiv Detail & Related papers (2023-12-04T18:32:55Z)
- Large Language Models as Analogical Reasoners [155.9617224350088]
Chain-of-thought (CoT) prompting for language models demonstrates impressive performance across reasoning tasks.
We introduce a new prompting approach, analogical prompting, designed to automatically guide the reasoning process of large language models.
arXiv Detail & Related papers (2023-10-03T00:57:26Z)
- The Integer Linear Programming Inference Cookbook [108.82092464025231]
This survey is meant to guide the reader through the process of framing a new inference problem as an instance of an integer linear program.
At the end, we will see two worked examples to illustrate the use of these recipes.
arXiv Detail & Related papers (2023-06-30T23:33:11Z)
- Python Code Generation by Asking Clarification Questions [57.63906360576212]
In this work, we introduce a novel and more realistic setup for this task.
We hypothesize that the under-specification of a natural language description can be resolved by asking clarification questions.
We collect and introduce a new dataset named CodeClarQA containing pairs of natural language descriptions and code, together with synthetically created clarification questions and answers.
arXiv Detail & Related papers (2022-12-19T22:08:36Z)
- Automatic Generation of Programming Exercises and Code Explanations with Large Language Models [4.947560475228859]
OpenAI Codex is a recent large language model from the GPT-3 family for translating code into natural language.
We explore the natural language generation capabilities of Codex in two different phases of the life of a programming exercise.
We find the majority of this automatically generated content both novel and sensible, and in many cases ready to use as is.
arXiv Detail & Related papers (2022-06-03T11:00:43Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples provided by instructors.
Our approach was successfully deployed to deliver feedback to 16,000 student exam-solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- Individual Explanations in Machine Learning Models: A Case Study on Poverty Estimation [63.18666008322476]
Machine learning methods are being increasingly applied in sensitive societal contexts.
The present case study has two main objectives. First, to expose these challenges and how they affect the use of relevant and novel explanation methods.
And second, to present a set of strategies that mitigate such challenges, as faced when implementing explanation methods in a relevant application domain.
arXiv Detail & Related papers (2021-04-09T01:54:58Z)
- Evaluating Explanations: How much do explanations from the teacher aid students? [103.05037537415811]
We formalize the value of explanations using a student-teacher paradigm that measures the extent to which explanations improve student models in learning.
Unlike many prior proposals to evaluate explanations, our approach cannot be easily gamed, enabling principled, scalable, and automatic evaluation of attributions.
arXiv Detail & Related papers (2020-12-01T23:40:21Z)
- Retrieve and Refine: Exemplar-based Neural Comment Generation [27.90756259321855]
Comments of similar code snippets are helpful for comment generation.
We design a novel seq2seq neural network that takes the given code, its AST, its similar code, and its exemplar as input.
We evaluate our approach on a large-scale Java corpus, which contains about 2M samples.
arXiv Detail & Related papers (2020-10-09T09:33:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.