Write a Line: Tests with Answer Templates and String Completion Hints
for Self-Learning in a CS1 Course
- URL: http://arxiv.org/abs/2204.09036v1
- Date: Tue, 19 Apr 2022 17:53:35 GMT
- Title: Write a Line: Tests with Answer Templates and String Completion Hints
for Self-Learning in a CS1 Course
- Authors: Oleg Sychev
- Abstract summary: This paper reports the results of using regular-expression-based questions with string completion hints in a CS1 course for 4 years with 497 students.
The evaluation results show that Perl-compatible regular expressions provide good precision and recall (more than 99%) when used for questions requiring writing a single line of code.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: One of the important scaffolding tasks in learning programming is writing a line of code that performs a required action. This lets students practice skills in a playground with instant feedback before writing more complex programs, and it increases their proficiency in solving programming problems. However, answers in the form of program code are highly variable. Among the possible approaches to grading and providing feedback, we chose template matching. This paper reports the results of using regular-expression-based questions with string-completion hints in a CS1 course over 4 years with 497 students. The evaluation shows that Perl-compatible regular expressions provide good precision and recall (more than 99%) for questions that require writing a single line of code, while being able to provide string-completion feedback regardless of how wrong the student's initial answer is. After formative quizzes with string-completion hints were introduced to the course, the number of questions that teachers and teaching assistants received about the formative quizzes dropped considerably: most attempts at the training questions ended with the student finding the correct answer without help from the teaching staff. However, some students used the formative quizzes merely to learn the correct answers without actually trying to answer the questions.
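To make the grading mechanism concrete, below is a minimal Python sketch of regex-based template matching with a string-completion hint. It is not the paper's implementation: the answer template `TEMPLATE`, the helpers `grade` and `completion_hint`, and the probe alphabet are all illustrative, and the third-party `regex` package's partial matching (`partial=True`) stands in for whatever Perl-compatible machinery the authors use.

```python
import regex  # third-party PCRE-style engine: pip install regex

# Illustrative answer template (not from the paper) for the task
# "print the variable x"; it tolerates optional spaces inside the call.
TEMPLATE = r"print\( *x *\)"

def grade(answer: str) -> bool:
    """Accept the answer iff it matches the whole template."""
    return regex.fullmatch(TEMPLATE, answer) is not None

def completion_hint(answer: str,
                    alphabet: str = "abcdefghijklmnopqrstuvwxyz()_ ") -> str:
    """Build a string-completion hint for a wrong answer.

    Shrink the answer to its longest prefix that can still be extended
    to a full match (a "partial match" in the regex package), then
    probe one-character extensions of that prefix. Because the wrong
    tail is discarded, a hint exists no matter how wrong the answer is.
    """
    k = len(answer)
    while k > 0 and not regex.fullmatch(TEMPLATE, answer[:k], partial=True):
        k -= 1
    prefix = answer[:k]
    for ch in alphabet:
        if regex.fullmatch(TEMPLATE, prefix + ch, partial=True):
            return f"Keep {prefix!r} and continue with {ch!r}"
    return "No completion found"

if __name__ == "__main__":
    print(grade("print( x )"))          # True: matches the template
    print(grade("pront(x)"))            # False: typo in 'print'
    print(completion_hint("pront(x)"))  # Keep 'pr' and continue with 'i'
```

Probing the alphabet one character at a time keeps the sketch short; a production grader would more likely walk the regex's underlying automaton to emit a whole completion string at once.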
Related papers
- Code Interviews: Design and Evaluation of a More Authentic Assessment for Introductory Programming Assignments [15.295438618760164]
We describe code interviews as a more authentic assessment method for take-home programming assignments.
Code interviews pushed students to discuss their work, motivating more nuanced but sometimes repetitive insights.
We conclude by discussing the different decisions about the design of code interviews with implications for student experience, academic integrity, and teaching workload.
arXiv Detail & Related papers (2024-10-01T19:01:41Z)
- A Knowledge-Component-Based Methodology for Evaluating AI Assistants [9.412070852474313]
We evaluate an automatic hint generator for CS1 programming assignments powered by GPT-4.
This system provides natural language guidance about how students can improve their incorrect solutions to short programming exercises.
arXiv Detail & Related papers (2024-06-09T00:58:39Z)
- YODA: Teacher-Student Progressive Learning for Language Models [82.0172215948963]
This paper introduces YODA, a teacher-student progressive learning framework.
It emulates the teacher-student education process to improve the efficacy of model fine-tuning.
Experiments show that training LLaMA2 with data from YODA improves SFT with a significant performance gain.
arXiv Detail & Related papers (2024-01-28T14:32:15Z)
- Answering Ambiguous Questions with a Database of Questions, Answers, and Revisions [95.92276099234344]
We present a new state-of-the-art for answering ambiguous questions that exploits a database of unambiguous questions generated from Wikipedia.
Our method improves performance by 15% on recall measures and 10% on measures which evaluate disambiguating questions from predicted outputs.
arXiv Detail & Related papers (2023-08-16T20:23:16Z)
- CS1QA: A Dataset for Assisting Code-based Question Answering in an Introductory Programming Course [13.61096948994569]
CS1QA consists of 9,237 question-answer pairs gathered from chat logs in an introductory programming class using Python.
Each question is accompanied by the student's code and the portion of the code relevant to answering the question.
arXiv Detail & Related papers (2022-10-26T05:40:34Z)
- Solving Linear Algebra by Program Synthesis [1.0660480034605238]
We solve MIT's Linear Algebra 18.06 course and Columbia University's Computational Linear Algebra COMS3251 course with perfect accuracy by interactive program synthesis.
This surprisingly strong result is achieved by turning the course questions into programming tasks and then running the programs to produce the correct answers.
arXiv Detail & Related papers (2021-11-16T01:16:43Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier-1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- Few-Shot Complex Knowledge Base Question Answering via Meta Reinforcement Learning [55.08037694027792]
Complex question answering (CQA) involves answering complex natural-language questions on a knowledge base (KB).
The conventional neural program induction (NPI) approach exhibits uneven performance when the questions have different types.
This paper proposes a meta-reinforcement learning approach to program induction in CQA to tackle the potential distributional bias in questions.
arXiv Detail & Related papers (2020-10-29T18:34:55Z)
- Retrieve, Program, Repeat: Complex Knowledge Base Question Answering via Alternate Meta-learning [56.771557756836906]
We present a novel method that automatically learns a retrieval model alternately with the programmer from weak supervision.
Our system leads to state-of-the-art performance on a large-scale task for complex question answering over knowledge bases.
arXiv Detail & Related papers (2020-10-29T18:28:16Z)
- Inquisitive Question Generation for High Level Text Comprehension [60.21497846332531]
We introduce INQUISITIVE, a dataset of 19K questions that are elicited while a person is reading through a document.
We show that readers engage in a series of pragmatic strategies to seek information.
We evaluate question generation models based on GPT-2 and show that our model is able to generate reasonable questions.
arXiv Detail & Related papers (2020-10-04T19:03:39Z)