Let's Ask Students About Their Programs, Automatically
- URL: http://arxiv.org/abs/2103.11138v1
- Date: Sat, 20 Mar 2021 09:15:37 GMT
- Title: Let's Ask Students About Their Programs, Automatically
- Authors: Teemu Lehtinen and André L. Santos and Juha Sorva
- Abstract summary: Students sometimes produce code that works but that they themselves do not comprehend.
One way to tackle these issues is to probe students' comprehension by asking them questions about their own programs.
We propose an approach to automatically generate questions about student-written program code.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Students sometimes produce code that works but that they themselves do
not comprehend. For example, a student may apply a poorly-understood code template,
stumble upon a working solution through trial and error, or plagiarize.
Similarly, passing an automated functional assessment does not guarantee that
the student understands their code. One way to tackle these issues is to probe
students' comprehension by asking them questions about their own programs. We
propose an approach to automatically generate questions about student-written
program code. We moreover propose a use case for such questions in the context
of automatic assessment systems: after a student's program passes unit tests,
the system poses questions to the student about the code. We suggest that these
questions can enhance assessment systems, deepen student learning by acting as
self-explanation prompts, and provide a window into students' program
comprehension. This discussion paper sets an agenda for future technical
development and empirical research on the topic.
Related papers
- A Knowledge-Component-Based Methodology for Evaluating AI Assistants (2024-06-09)
  We evaluate an automatic hint generator for CS1 programming assignments powered by GPT-4. This system provides natural language guidance about how students can improve their incorrect solutions to short programming exercises.
- Using Large Language Models for Student-Code Guided Test Case Generation in Computer Science Education (2024-02-11)
  Test cases are an integral part of programming assignments in computer science education. They can be used as assessment items to test students' programming knowledge and to provide personalized feedback on student-written code. We propose a large language model-based approach to automatically generate test cases.
- Automated Questions About Learners' Own Code Help to Detect Fragile Knowledge (2023-06-28)
  Students are able to produce correctly functioning program code even though they have a fragile understanding of how it actually works. Questions derived automatically from individual exercise submissions (QLCs) can probe if and how well students understand the structure and logic of the code they just created.
- Giving Feedback on Interactive Student Programs with Meta-Exploration (2022-11-16)
  Developing interactive software, such as websites or games, is a particularly engaging way to learn computer science. However, standard approaches require instructors to manually grade student-implemented interactive programs, and online platforms that serve millions, like Code.org, are unable to provide any feedback on assignments for implementing interactive programs.
- GPT-based Open-Ended Knowledge Tracing (2022-02-21)
  We study the new task of predicting students' exact open-ended responses to questions. Our work is grounded in the domain of computer science education with programming questions. We develop an initial solution to the open-ended knowledge tracing (OKT) problem: a student knowledge-guided code generation approach.
- Continuous Examination by Automatic Quiz Assessment Using Spiral Codes and Image Processing (2022-01-26)
  Paper quizzes are affordable and within reach of campus education in classrooms, but correcting them is a considerable obstacle. We suggest mitigating the issue with a novel image processing technique.
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback (2021-07-23)
  In this paper, we frame the problem of providing feedback as few-shot classification. A meta-learner adapts to give feedback on student code for a new programming question from just a few instructor-provided examples. Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
- Students Struggle to Explain Their Own Program Code (2021-04-14)
  We ask students to explain the structure and execution of their small programs after they submit them to a programming exercise. One third of the students struggled to explain their own program code. Our results indicate that correctly answering properly aligned QLCs correlates more strongly with student success and retention than merely submitting a correct program.
- Retrieve, Program, Repeat: Complex Knowledge Base Question Answering via Alternate Meta-learning (2020-10-29)
  We present a novel method that automatically learns a retrieval model alternately with the programmer from weak supervision. Our system achieves state-of-the-art performance on a large-scale task for complex question answering over knowledge bases.
- Neural Multi-Task Learning for Teacher Question Detection in Online Classrooms (2020-05-16)
  We build an end-to-end neural framework that automatically detects questions in teachers' audio recordings. By incorporating multi-task learning techniques, we strengthen the understanding of semantic relations among different types of questions.
- Code Review in the Classroom (2020-04-19)
  Young developers in a classroom setting provide a clear picture of the potentially favourable and problematic areas of the code review process. Their feedback suggests that the process was well received, with some suggestions for improvement. This paper can serve as a guideline for performing code reviews in the classroom.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.