Automated Questions About Learners' Own Code Help to Detect Fragile
Knowledge
- URL: http://arxiv.org/abs/2306.16267v1
- Date: Wed, 28 Jun 2023 14:49:16 GMT
- Authors: Teemu Lehtinen, Otto Seppälä, Ari Korhonen
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Students are able to produce correctly functioning program code even though
they have a fragile understanding of how it actually works. Questions derived
automatically from individual exercise submissions (QLC) can probe if and how
well the students understand the structure and logic of the code they just
created. Prior research studied this approach in the context of the first
programming course. We replicate the study in a follow-up programming course
for engineering students, which includes a recap of general CS1 concepts. The
task was the classic rainfall problem, which was solved by 90% of the students.
The QLCs generated from each passing submission were kept intentionally simple,
yet 27% of the students failed at least one of them. Students who struggled
with questions about their own program logic had a lower median of overall
course points than students who answered correctly.
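
To make the QLC idea concrete, the following minimal sketch shows how simple questions might be derived automatically from a student's rainfall submission by walking its abstract syntax tree. This is not the authors' implementation; the sample solution, question templates, and function names are illustrative assumptions.

```python
import ast

# A typical student solution to the rainfall problem: average the
# non-negative inputs, stopping at the sentinel value 99999.
# (Illustrative sample, not an actual submission from the study.)
student_code = """
def rainfall(values):
    total, count = 0, 0
    for value in values:
        if value == 99999:
            break
        if value >= 0:
            total += value
            count += 1
    return total / count if count else 0
"""

def generate_qlcs(source):
    """Derive simple structural questions about `source`.

    Real QLC generators use richer templates and also ask about
    execution (e.g. loop counts for a given input); this sketch
    only inspects the syntax tree.
    """
    tree = ast.parse(source)
    questions = []
    for node in ast.walk(tree):
        if isinstance(node, ast.For):
            questions.append(
                f"Line {node.lineno}: which variable takes a new "
                f"value on each iteration of this loop?")
        elif isinstance(node, ast.If):
            questions.append(
                f"Line {node.lineno}: which input values make this "
                f"condition true?")
    return questions

for question in generate_qlcs(student_code):
    print(question)
```

Because each question refers to line numbers and identifiers from the student's own submission, a wrong answer can reveal fragile understanding even when the program passes all tests.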
Related papers
- A Knowledge-Component-Based Methodology for Evaluating AI Assistants
We evaluate an automatic hint generator for CS1 programming assignments powered by GPT-4.
This system provides natural language guidance about how students can improve their incorrect solutions to short programming exercises.
arXiv Detail & Related papers (2024-06-09T00:58:39Z)
- Probeable Problems for Beginner-level Programming-with-AI Contests
We conduct a 2-hour programming contest for undergraduate Computer Science students from multiple institutions.
Students were permitted to work individually or in groups, and were free to use AI tools.
We analyze the extent to which the code submitted by these groups identifies missing details and identify ways in which Probeable Problems can support learning in formal and informal CS educational contexts.
arXiv Detail & Related papers (2024-05-24T00:39:32Z)
- Logic Query of Thoughts: Guiding Large Language Models to Answer Complex Logic Queries with Knowledge Graphs
'Logic-Query-of-Thoughts' (LGOT) is the first of its kind to combine knowledge graph reasoning and large language models.
Our experimental findings demonstrate substantial performance enhancements, with up to 20% improvement over ChatGPT.
arXiv Detail & Related papers (2024-03-17T17:01:45Z)
- Towards a Holistic Understanding of Mathematical Questions with Contrastive Pre-training
We propose a novel contrastive pre-training approach for mathematical question representations, namely QuesCo.
We first design two-level question augmentations, including content-level and structure-level, which generate literally diverse question pairs with similar purposes.
Then, to fully exploit hierarchical information of knowledge concepts, we propose a knowledge hierarchy-aware rank strategy.
arXiv Detail & Related papers (2023-01-18T14:23:29Z)
- JiuZhang: A Chinese Pre-trained Language Model for Mathematical Problem Understanding
This paper aims to advance the mathematical intelligence of machines by presenting the first Chinese mathematical pre-trained language model (PLM).
Unlike other standard NLP tasks, mathematical texts are difficult to understand, since they involve mathematical terminology, symbols and formulas in the problem statement.
We design a novel curriculum pre-training approach for improving the learning of mathematical PLMs, consisting of both basic and advanced courses.
arXiv Detail & Related papers (2022-06-13T17:03:52Z)
- Continuous Examination by Automatic Quiz Assessment Using Spiral Codes and Image Processing
Paper quizzes are affordable and within reach of campus education in classrooms, but correcting them is a considerable obstacle.
We suggest mitigating the issue with a novel image processing technique.
arXiv Detail & Related papers (2022-01-26T22:58:15Z)
- Solving Linear Algebra by Program Synthesis
We solve MIT's Linear Algebra 18.06 course and Columbia University's Computational Linear Algebra COMS3251 courses with perfect accuracy by interactive program synthesis.
This surprisingly strong result is achieved by turning the course questions into programming tasks and then running the programs to produce the correct answers (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2021-11-16T01:16:43Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- Students Struggle to Explain Their Own Program Code
We ask students to explain the structure and execution of their small programs after they submit them to a programming exercise.
One third of the students struggled to explain their own program code.
Our results indicate that answering properly aligned QLCs correctly has stronger correlation with student success and retention than merely submitting a correct program.
arXiv Detail & Related papers (2021-04-14T09:13:05Z)
- Let's Ask Students About Their Programs, Automatically
Students sometimes produce code that works even though its author does not comprehend it.
One way to tackle these issues is to probe students' comprehension by asking them questions about their own programs.
We propose an approach to automatically generate questions about student-written program code.
arXiv Detail & Related papers (2021-03-20T09:15:37Z)
- Few-Shot Complex Knowledge Base Question Answering via Meta Reinforcement Learning
Complex question-answering (CQA) involves answering complex natural-language questions on a knowledge base (KB).
The conventional neural program induction (NPI) approach exhibits uneven performance when the questions have different types.
This paper proposes a meta-reinforcement learning approach to program induction in CQA to tackle the potential distributional bias in questions.
arXiv Detail & Related papers (2020-10-29T18:34:55Z)
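
As a concrete illustration of the program-synthesis approach in the Solving Linear Algebra entry above, here is a hand-written sketch of one course-style question turned into a program whose execution produces the answer. The question and code are illustrative assumptions; the actual system synthesizes such programs interactively rather than by hand.

```python
import numpy as np

# Course-style question (illustrative): "Solve Ax = b, where
# A = [[2, 1], [1, 3]] and b = [3, 5]."
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

# The question becomes a programming task ...
x = np.linalg.solve(A, b)

# ... and running the program both produces and checks the answer.
assert np.allclose(A @ x, b)
print(x)  # [0.8 1.4]
```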