Students Struggle to Explain Their Own Program Code
- URL: http://arxiv.org/abs/2104.06710v1
- Date: Wed, 14 Apr 2021 09:13:05 GMT
- Title: Students Struggle to Explain Their Own Program Code
- Authors: Teemu Lehtinen, Aleksi Lukkarinen, Lassi Haaranen
- Abstract summary: We ask students to explain the structure and execution of their small programs after they submit them to a programming exercise.
One third of the students struggled to explain their own program code.
Our results indicate that answering properly aligned QLCs correctly has stronger correlation with student success and retention than merely submitting a correct program.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We asked students to explain the structure and execution of their small
programs after they had submitted them to a programming exercise. These
questions about learners' code (QLCs) were delivered on three occasions in an
open online course in introductory programming as a part of the digital
learning material. We performed an inductive content analysis of the
open-ended text answers we collected. One third of the students struggled to
explain their own program code. This estimates how often fragile learning may
occur at the very moment a student seemingly succeeds in a program-writing
exercise. Furthermore, we examined correlations between the correctness of the
answers and other learning data. Our results indicate that answering properly
aligned QLCs correctly correlates more strongly with student success and
retention than merely submitting a correct program. Additionally, we present
observations on learning event-driven programming to explore the potential of
QLCs in revealing students' thinking processes.
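To make the QLC idea concrete, here is a minimal sketch of how a question might be derived automatically from a submitted program. The AST-based variable selection and the question template are illustrative assumptions, not the generator used in the paper.
```python
# Illustrative sketch only: auto-generating a question about a learner's
# code (QLC) from a submitted Python program. The selection logic and
# question template are assumptions, not the paper's implementation.
import ast
import random

def generate_qlc(source: str) -> str:
    """Pick a variable assigned in the student's code and ask about it."""
    tree = ast.parse(source)
    assigned = [
        node.targets[0].id
        for node in ast.walk(tree)
        if isinstance(node, ast.Assign) and isinstance(node.targets[0], ast.Name)
    ]
    if not assigned:
        return "Describe, step by step, what your program does when it runs."
    name = random.choice(assigned)
    return (
        f"Your program assigns a value to the variable '{name}'. "
        f"What is its purpose, and what value does it hold when the program ends?"
    )

student_code = """
total = 0
for price in [3, 5, 7]:
    total = total + price
print(total)
"""
print(generate_qlc(student_code))
```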
Related papers
- A Knowledge-Component-Based Methodology for Evaluating AI Assistants [9.412070852474313]
We evaluate an automatic hint generator for CS1 programming assignments powered by GPT-4.
This system provides natural language guidance about how students can improve their incorrect solutions to short programming exercises.
arXiv Detail & Related papers (2024-06-09T00:58:39Z)
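As a rough illustration of such a hint pipeline, the sketch below prompts an LLM with a student's incorrect solution and the exercise description. The prompt wording and model choice are assumptions; it does not reproduce the system evaluated in the paper.
```python
# Hypothetical sketch of an LLM-based hint generator for a CS1 exercise.
# The prompt, model name, and flow are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def generate_hint(exercise: str, student_code: str, error: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a tutor. Give a short hint that guides the "
                        "student toward fixing their code without revealing "
                        "the full solution."},
            {"role": "user",
             "content": f"Exercise: {exercise}\n\n"
                        f"Student code:\n{student_code}\n\n"
                        f"Observed problem: {error}"},
        ],
    )
    return response.choices[0].message.content

hint = generate_hint(
    "Sum all numbers in a list.",
    "def total(xs):\n    for x in xs:\n        s = s + x\n    return s",
    "NameError: name 's' is not defined",
)
print(hint)
```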
- Let's Ask AI About Their Programs: Exploring ChatGPT's Answers To Program Comprehension Questions [2.377308748205625]
We explore the capability of state-of-the-art LLMs to answer QLCs generated from code the LLMs themselves have created.
Our results show that although state-of-the-art LLMs can create programs and trace program execution when prompted, they easily succumb to errors similar to those previously recorded for novice programmers.
arXiv Detail & Related papers (2024-04-17T20:37:00Z)
- Exploring the Potential of Large Language Models to Generate Formative Programming Feedback [0.5371337604556311]
We explore the potential of large language models (LLMs) for computing educators and learners.
To do so, we used students' programming sequences from a dataset gathered in a CS1 course as input for ChatGPT.
Results show that ChatGPT performs reasonably well for some of the introductory programming tasks and student errors.
However, educators should provide guidance on how to use the provided feedback, as it can contain misleading information for novices.
arXiv Detail & Related papers (2023-08-31T15:22:11Z)
- Automated Questions About Learners' Own Code Help to Detect Fragile Knowledge [0.0]
Students are able to produce correctly functioning program code even though they have a fragile understanding of how it actually works.
Questions derived automatically from individual exercise submissions (QLCs) can probe whether and how well students understand the structure and logic of the code they just created.
arXiv Detail & Related papers (2023-06-28T14:49:16Z)
- Fact-Checking Complex Claims with Program-Guided Reasoning [99.7212240712869]
Program-Guided Fact-Checking (ProgramFC) is a novel fact-checking model that decomposes complex claims into simpler sub-tasks.
We first leverage the in-context learning ability of large language models to generate reasoning programs.
We execute the program by delegating each sub-task to the corresponding sub-task handler.
arXiv Detail & Related papers (2023-05-22T06:11:15Z)
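The delegation step can be pictured with a short sketch. The sub-task names (QUESTION, VERIFY) and the toy handlers below are assumptions for illustration, not ProgramFC's actual interface.
```python
# Hedged sketch of program-guided fact-checking execution: a generated
# "reasoning program" is a list of (sub-task, argument) steps, and each
# step is delegated to its matching handler. All names are illustrative.

def answer_question(question: str, context: dict) -> str:
    # Stand-in for a QA model over retrieved evidence.
    return context.get(question, "unknown")

def verify_claim(claim: str, context: dict) -> bool:
    # Stand-in for a simple claim-verification model.
    return context.get(claim, False)

HANDLERS = {
    "QUESTION": answer_question,
    "VERIFY": verify_claim,
}

def execute_program(program, context):
    """Run each (sub_task, argument) step through its handler."""
    return [HANDLERS[sub_task](argument, context)
            for sub_task, argument in program]

# A toy reasoning program for the claim
# "The director of film X was born in country Y."
program = [
    ("QUESTION", "Who directed film X?"),
    ("VERIFY", "The director was born in country Y."),
]
context = {
    "Who directed film X?": "Director Z",
    "The director was born in country Y.": True,
}
print(execute_program(program, context))  # ['Director Z', True]
```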
- Hierarchical Programmatic Reinforcement Learning via Learning to Compose Programs [58.94569213396991]
We propose a hierarchical programmatic reinforcement learning framework to produce program policies.
By learning to compose programs, the proposed framework can produce program policies that describe complex, out-of-distribution behaviors.
The experimental results in the Karel domain show that our proposed framework outperforms baselines.
arXiv Detail & Related papers (2023-01-30T14:50:46Z)
- Giving Feedback on Interactive Student Programs with Meta-Exploration [74.5597783609281]
Developing interactive software, such as websites or games, is a particularly engaging way to learn computer science.
Standard approaches require instructors to manually grade student-implemented interactive programs.
Online platforms that serve millions, like Code.org, are unable to provide any feedback on assignments for implementing interactive programs.
arXiv Detail & Related papers (2022-11-16T10:00:23Z)
- Learning from Self-Sampled Correct and Partially-Correct Programs [96.66452896657991]
We propose to let the model perform sampling during training and learn from both self-sampled fully-correct programs and partially-correct programs.
We show that our use of self-sampled correct and partially-correct programs can benefit learning and help guide the sampling process.
Our proposed method improves the pass@k performance by 3.1% to 12.3% compared to learning from a single reference program with MLE.
arXiv Detail & Related papers (2022-05-28T03:31:07Z)
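A minimal sketch of the self-sampling loop, assuming candidates are Python sources defining a hypothetical solve() function; the test harness is an illustrative assumption, not the paper's implementation.
```python
# Candidate programs sampled during training are executed against test
# cases; both fully and partially correct ones are kept as extra
# learning targets. The solve()-based harness is a stand-in.

def run_tests(program_src: str, tests) -> int:
    """Return how many test cases the candidate program passes."""
    passed = 0
    for inputs, expected in tests:
        env = {}
        try:
            exec(program_src, env)  # candidate is expected to define solve()
            if env["solve"](*inputs) == expected:
                passed += 1
        except Exception:
            pass  # crashing candidates simply fail this test
    return passed

def collect_training_programs(candidates, tests):
    """Split samples into fully correct (pass all tests) and
    partially correct (pass some but not all) programs."""
    fully, partially = [], []
    for src in candidates:
        n = run_tests(src, tests)
        if n == len(tests):
            fully.append(src)
        elif n > 0:
            partially.append(src)
    return fully, partially

tests = [((2, 3), 5), ((0, 0), 0)]
candidates = [
    "def solve(a, b):\n    return a + b",       # passes both tests
    "def solve(a, b):\n    return a + b or 1",  # passes only (2, 3)
]
fully, partially = collect_training_programs(candidates, tests)
print(len(fully), len(partially))  # 1 1
```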
- An Analysis of Programming Course Evaluations Before and After the Introduction of an Autograder [1.329950749508442]
This paper studies the answers to the standardized university evaluation questionnaires of foundational computer science courses which recently introduced autograding.
We hypothesize how the autograder might have contributed to the significant changes in the data, such as improved interactions between tutors and students, improved overall course quality, improved learning success, increased time spent, and reduced difficulty.
The findings support autograding as a teaching method that improves student satisfaction with programming courses.
arXiv Detail & Related papers (2021-10-28T14:09:44Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback on student code for a new programming question from just a few instructor-provided examples.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
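A rough sketch of the nearest-prototype idea behind few-shot feedback classification; the hash-based embedding is a toy stand-in for the paper's learned code encoder, and the labels are invented.
```python
# Each feedback label gets a prototype (the mean embedding of a few
# instructor-labeled examples); a new submission receives the label of
# its nearest prototype. All names here are illustrative.
import numpy as np

def embed(code: str, dim: int = 32) -> np.ndarray:
    # Deterministic within one process; a real system uses a trained model.
    rng = np.random.default_rng(abs(hash(code)) % (2**32))
    return rng.standard_normal(dim)

def build_prototypes(support: dict) -> dict:
    """support maps a feedback label to a few example code strings."""
    return {label: np.mean([embed(c) for c in codes], axis=0)
            for label, codes in support.items()}

def give_feedback(code: str, protos: dict) -> str:
    emb = embed(code)
    return min(protos, key=lambda label: np.linalg.norm(emb - protos[label]))

support = {
    "off-by-one loop bound": ["for i in range(n + 1):", "while i <= n:"],
    "missing return value": ["def f(x):\n    x * 2", "def g(a):\n    a + 1"],
}
protos = build_prototypes(support)
print(give_feedback("for j in range(m + 1):", protos))
```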
- BUSTLE: Bottom-Up Program Synthesis Through Learning-Guided Exploration [72.88493072196094]
We present a new synthesis approach that leverages learning to guide a bottom-up search over programs.
In particular, we train a model to prioritize compositions of intermediate values during search conditioned on a set of input-output examples.
We show that the combination of learning and bottom-up search is remarkably effective, even with simple supervised learning approaches.
arXiv Detail & Related papers (2020-07-28T17:46:18Z)
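The following toy sketch conveys the flavor of learning-guided bottom-up search: intermediate values are enumerated over a tiny integer DSL, and a hand-written distance heuristic stands in for BUSTLE's learned model when prioritizing which values to compose next. The DSL, constants, and heuristic are all assumptions for illustration.
```python
# Bottom-up synthesis over a two-operation integer DSL; values whose
# outputs are closer to the target examples are expanded first.
import heapq

OPS = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
}

def synthesize(examples, max_steps=10_000):
    """examples: list of (input, output) integer pairs."""
    inputs = tuple(x for x, _ in examples)
    target = tuple(y for _, y in examples)

    def score(values):  # stand-in for a learned prioritizer
        return sum(abs(v - t) for v, t in zip(values, target))

    # Seed the search with the input variable and a few constants.
    start = [(score(inputs), inputs, "x")] + [
        (score((c,) * len(examples)), (c,) * len(examples), str(c))
        for c in (1, 2, 3)
    ]
    heap = list(start)
    heapq.heapify(heap)
    seen = {v for _, v, _ in start}
    bank = [(v, e) for _, v, e in start]

    for _ in range(max_steps):
        if not heap:
            break
        _, values, expr = heapq.heappop(heap)
        if values == target:
            return expr
        for name, op in OPS.items():
            for other_vals, other_expr in list(bank):
                new_vals = tuple(op(a, b) for a, b in zip(values, other_vals))
                if new_vals in seen:
                    continue
                seen.add(new_vals)
                new_expr = f"{name}({expr}, {other_expr})"
                bank.append((new_vals, new_expr))
                heapq.heappush(heap, (score(new_vals), new_vals, new_expr))
    return None

# Find a program mapping x -> 2*x + 1 from examples.
print(synthesize([(1, 3), (2, 5), (4, 9)]))
```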
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.