Improving Students With Rubric-Based Self-Assessment and Oral Feedback
- URL: http://arxiv.org/abs/2307.12849v1
- Date: Mon, 24 Jul 2023 14:48:28 GMT
- Title: Improving Students With Rubric-Based Self-Assessment and Oral Feedback
- Authors: Sebastian Barney, Mahvish Khurum, Kai Petersen, Michael
Unterkalmsteiner, Ronald Jabangwe
- Abstract summary: Rubrics and oral feedback are approaches to help students improve performance and meet learning outcomes.
This paper evaluates the effect of rubrics and oral feedback on student learning outcomes.
- Score: 2.808134646037882
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Rubrics and oral feedback are approaches to help students improve performance
and meet learning outcomes. However, their effect on the actual improvement
achieved is inconclusive. This paper evaluates the effect of rubrics and oral
feedback on student learning outcomes. An experiment was conducted in a
software engineering course on requirements engineering, using the two
approaches in course assignments. Both approaches led to statistically
significant improvements, though no material improvement (i.e., a change by
more than one grade) was achieved. The rubrics led to a significant decrease in
the number of complaints and questions regarding grades.
Related papers
- Enhancing Students' Learning Process Through Self-Generated Tests [0.0]
This paper describes an educational experiment aimed at the promotion of students' autonomous learning.
The main idea is to make the student feel part of the evaluation process by including students' questions in the evaluation exams.
Questions uploaded by students are visible to every enrolled student as well as to each involved teacher.
arXiv Detail & Related papers (2024-03-21T09:49:33Z)
- Evaluating and Optimizing Educational Content with Large Language Model Judgments [52.33701672559594]
We use Language Models (LMs) as educational experts to assess the impact of various instructions on learning outcomes.
We introduce an instruction optimization approach in which one LM generates instructional materials using the judgments of another LM as a reward function.
Human teachers' evaluations of these LM-generated worksheets show a significant alignment between the LM judgments and human teacher preferences.
arXiv Detail & Related papers (2024-03-05T09:09:15Z)
- Improving the Validity of Automatically Generated Feedback via Reinforcement Learning [50.067342343957876]
We propose a framework for feedback generation that optimizes both correctness and alignment using reinforcement learning (RL).
Specifically, we use GPT-4's annotations to create preferences over feedback pairs in an augmented dataset for training via direct preference optimization (DPO).
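The DPO objective the summary refers to can be sketched for a single preference pair; this is a generic illustration of the standard DPO loss, not code from the paper, and the variable names are assumptions:

```python
import math

def dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one preference pair of feedback texts.

    Inputs are sequence log-probabilities under the trainable policy and a
    frozen reference model; minimizing the loss pushes the policy to prefer
    the chosen feedback relative to the reference model's preferences.
    """
    margin = (policy_chosen - ref_chosen) - (policy_rejected - ref_rejected)
    # Negative log-sigmoid of the scaled margin (beta controls sharpness).
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

With a zero margin the loss is ln 2; as the policy assigns relatively more probability to the chosen feedback, the loss decreases.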
arXiv Detail & Related papers (2024-03-02T20:25:50Z)
- Scalable Two-Minute Feedback: Digital, Lecture-Accompanying Survey as a Continuous Feedback Instrument [0.0]
Detailed feedback on courses and lecture content is essential for their improvement and also serves as a tool for reflection.
The article used a digital survey format as formative feedback which attempts to measure student stress in a quantitative part and to address the participants' reflection in a qualitative part.
The results show a low, but constant rate of feedback. Responses mostly cover topics of the lecture content or organizational aspects and were intensively used to report issues within the lecture.
arXiv Detail & Related papers (2023-10-30T08:14:26Z)
- Does Starting Deep Learning Homework Earlier Improve Grades? [63.20583929886827]
Students who start a homework assignment earlier and spend more time on it should receive better grades on the assignment.
Existing literature on the impact of time spent on homework is not clear-cut and comes mostly from K-12 education.
We develop a hierarchical Bayesian model to help make principled conclusions about the impact on student success.
arXiv Detail & Related papers (2023-09-30T09:34:30Z)
- Adam: Dense Retrieval Distillation with Adaptive Dark Examples [104.01735794498767]
We propose ADAM, a knowledge distillation framework that can better transfer the dark knowledge held in the teacher with Adaptive Dark exAMples.
We conduct experiments on two widely-used benchmarks and verify the effectiveness of our method.
arXiv Detail & Related papers (2022-12-20T12:03:19Z)
- Distantly-Supervised Named Entity Recognition with Adaptive Teacher Learning and Fine-grained Student Ensemble [56.705249154629264]
Self-training teacher-student frameworks are proposed to improve the robustness of NER models.
In this paper, we propose an adaptive teacher learning comprised of two teacher-student networks.
Fine-grained student ensemble updates each fragment of the teacher model with a temporal moving average of the corresponding fragment of the student, which enhances consistent predictions on each model fragment against noise.
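The temporal moving average described above can be sketched as a per-fragment exponential moving average (EMA); this is a minimal illustration under assumed data structures (plain dicts of parameter lists), not the paper's implementation:

```python
def ema_update(teacher, student, decay=0.99):
    """Update each teacher fragment toward the student's with an EMA.

    teacher, student: dicts mapping fragment names to lists of parameters.
    Each teacher parameter becomes decay * teacher + (1 - decay) * student,
    smoothing the teacher against noisy single-step student updates.
    """
    for name, t_params in teacher.items():
        s_params = student[name]
        teacher[name] = [decay * t + (1.0 - decay) * s
                         for t, s in zip(t_params, s_params)]
    return teacher
```

A larger `decay` makes the teacher change more slowly, which is what yields the consistent predictions against label noise that the summary mentions.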
arXiv Detail & Related papers (2022-12-13T12:14:09Z)
- An Analysis of Programming Course Evaluations Before and After the Introduction of an Autograder [1.329950749508442]
This paper studies the answers to the standardized university evaluation questionnaires of foundational computer science courses which recently introduced autograding.
We hypothesize how the autograder might have contributed to the significant changes in the data, such as improved interactions between tutors and students, improved overall course quality, improved learning success, increased time spent, and reduced difficulty.
The autograder technology can be validated as a teaching method to improve student satisfaction with programming courses.
arXiv Detail & Related papers (2021-10-28T14:09:44Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback to 16,000 student exam-solutions in a programming course offered by a tier 1 university.
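The few-shot classification step can be sketched with a prototype-based classifier in the style of prototypical networks; this is an illustrative simplification (assumed plain-list embeddings, Euclidean distance), not ProtoTransformer itself:

```python
def classify_by_prototype(support, query):
    """Classify a query embedding against a few labeled support examples.

    support: dict mapping a feedback label to a list of embedding vectors
             (the instructor-provided examples).
    query:   one embedding vector for a new student solution.
    Returns the label whose prototype (mean support embedding) is nearest
    to the query in squared Euclidean distance.
    """
    def mean(vecs):
        return [sum(xs) / len(vecs) for xs in zip(*vecs)]

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    prototypes = {label: mean(vecs) for label, vecs in support.items()}
    return min(prototypes, key=lambda label: dist2(prototypes[label], query))
```

Because only a few instructor examples per feedback label are needed to form each prototype, the classifier adapts to a new programming question without retraining.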
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- Effects of Human vs. Automatic Feedback on Students' Understanding of AI Concepts and Programming Style [0.0]
The use of automatic grading tools has become nearly ubiquitous in large undergraduate programming courses.
There is a relative lack of data directly comparing student outcomes when receiving computer-generated feedback and human-written feedback.
This paper addresses this gap by splitting one 90-student class into two feedback groups and analyzing differences in the two cohorts' performance.
arXiv Detail & Related papers (2020-11-20T21:40:32Z)
- Leveraging Peer Feedback to Improve Visualization Education [4.679788938455095]
We discuss the construction and application of peer review in a computer science visualization course.
We evaluate student projects, peer review text, and a post-course questionnaire from 3 semesters of mixed undergraduate and graduate courses.
arXiv Detail & Related papers (2020-01-12T21:46:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.