Does Starting Deep Learning Homework Earlier Improve Grades?
- URL: http://arxiv.org/abs/2311.09228v1
- Date: Sat, 30 Sep 2023 09:34:30 GMT
- Title: Does Starting Deep Learning Homework Earlier Improve Grades?
- Authors: Edward Raff, Cynthia Matuszek
- Abstract summary: Students who start a homework assignment earlier and spend more time on it should receive better grades on the assignment.
Existing literature on the impact of time spent on homework is not clear-cut and comes mostly from K-12 education.
We develop a hierarchical Bayesian model to help make principled conclusions about the impact on student success.
- Score: 63.20583929886827
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Intuitively, students who start a homework assignment earlier and spend more
time on it should receive better grades on the assignment. However, existing
literature on the impact of time spent on homework is not clear-cut and comes
mostly from K-12 education. It is not clear that these prior studies can inform
coursework in deep learning due to differences in demographics, as well as the
computational time needed for assignments to be completed. We study this
problem in a post-hoc study of three semesters of a deep learning course at the
University of Maryland, Baltimore County (UMBC), and develop a hierarchical
Bayesian model to help make principled conclusions about the impact on student
success given an approximate measure of the total time spent on the homework,
and how early they submitted the assignment. Our results show that both
submitting early and spending more time are positively related to the final grade.
Surprisingly, the value of an additional day of work is apparently equal across
students, even when some require less total time to complete an assignment.
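The paper fits a hierarchical Bayesian model; as a rough, non-authoritative stand-in, the sketch below simulates grades with per-student intercepts but slopes shared across students (mirroring the finding that an extra day of work is worth roughly the same to everyone), then recovers the shared slopes with a fixed-effects least-squares fit. All quantities (student counts, slope values, noise scale) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulation: grade = a[i] + b_time*hours + b_early*days_early + noise,
# with per-student baselines a[i] but slopes shared across all students.
n_students, n_hw = 30, 5
a = rng.normal(80, 5, n_students)   # per-student intercepts
b_time, b_early = 0.8, 1.5          # true shared slopes

student = np.repeat(np.arange(n_students), n_hw)
hours = rng.uniform(2, 10, n_students * n_hw)
days_early = rng.uniform(0, 7, n_students * n_hw)
grade = (a[student] + b_time * hours + b_early * days_early
         + rng.normal(0, 1, n_students * n_hw))

# Fixed-effects fit: one dummy column per student plus the two shared slopes.
X = np.zeros((n_students * n_hw, n_students + 2))
X[np.arange(len(student)), student] = 1.0
X[:, -2] = hours
X[:, -1] = days_early
coef, *_ = np.linalg.lstsq(X, grade, rcond=None)

print(coef[-2], coef[-1])  # estimated shared slopes, close to 0.8 and 1.5
```

A full hierarchical Bayesian treatment would additionally place priors on the intercepts and slopes and report posterior uncertainty, which this least-squares sketch omits.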
Related papers
- GPT-4 as a Homework Tutor can Improve Student Engagement and Learning Outcomes [80.60912258178045]
We developed a prompting strategy that enables GPT-4 to conduct interactive homework sessions for high-school students learning English as a second language.
We carried out a Randomized Controlled Trial (RCT) in four high-school classes, replacing traditional homework with GPT-4 homework sessions for the treatment group.
We observed significant improvements in learning outcomes, notably a greater gain in grammar, and in student engagement.
arXiv Detail & Related papers (2024-09-24T11:22:55Z)
- YODA: Teacher-Student Progressive Learning for Language Models [82.0172215948963]
This paper introduces YODA, a teacher-student progressive learning framework.
It emulates the teacher-student education process to improve the efficacy of model fine-tuning.
Experiments show that training LLaMA2 with data generated by YODA yields a significant performance gain over standard SFT.
arXiv Detail & Related papers (2024-01-28T14:32:15Z)
- Using Assignment Incentives to Reduce Student Procrastination and Encourage Code Review Interactions [2.1684358357001465]
This work presents an incentive system encouraging students to complete assignments many days before deadlines.
Completed assignments are code-reviewed by staff, who check correctness and provide feedback, resulting in more student-instructor interactions.
The incentives result in a change in student behavior with 45% of assignments completed early and 30% up to 4 days before the deadline.
arXiv Detail & Related papers (2023-11-25T22:17:40Z)
- Improving Students With Rubric-Based Self-Assessment and Oral Feedback [2.808134646037882]
Rubrics and oral feedback are approaches that help students improve performance and meet learning outcomes.
This paper evaluates the effect of rubrics and oral feedback on student learning outcomes.
arXiv Detail & Related papers (2023-07-24T14:48:28Z)
- Identifying Different Student Clusters in Functional Programming Assignments: From Quick Learners to Struggling Students [2.0386745041807033]
We analyze student assignment submission data collected from a functional programming course taught at McGill University.
This allows us to identify four clusters of students: "Quick-learning", "Hardworking", "Satisficing", and "Struggling".
We then analyze how work habits, working duration, the range of errors, and the ability to fix errors impact different clusters of students.
arXiv Detail & Related papers (2023-01-06T17:15:58Z)
- Distantly-Supervised Named Entity Recognition with Adaptive Teacher Learning and Fine-grained Student Ensemble [56.705249154629264]
Self-training teacher-student frameworks are proposed to improve the robustness of NER models.
In this paper, we propose an adaptive teacher learning method composed of two teacher-student networks.
Fine-grained student ensemble updates each fragment of the teacher model with a temporal moving average of the corresponding fragment of the student, which enhances consistent predictions on each model fragment against noise.
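The fragment-wise temporal moving average described above can be sketched in a few lines. The parameter names and decay value below are invented for illustration, and a real implementation would operate on per-fragment tensors rather than scalars:

```python
def ema_update(teacher: dict, student: dict, decay: float = 0.99) -> dict:
    """Nudge each teacher parameter toward the corresponding student
    parameter via an exponential moving average."""
    return {name: decay * teacher[name] + (1.0 - decay) * student[name]
            for name in teacher}

# Toy parameters (hypothetical names and values).
teacher = {"layer1.w": 1.0, "layer2.w": 2.0}
student = {"layer1.w": 0.0, "layer2.w": 4.0}
print(ema_update(teacher, student, decay=0.75))
# {'layer1.w': 0.75, 'layer2.w': 2.5}
```

A decay close to 1 keeps the teacher slowly varying, which is what smooths out the student's noisy updates.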
arXiv Detail & Related papers (2022-12-13T12:14:09Z)
- Better Teacher Better Student: Dynamic Prior Knowledge for Knowledge Distillation [70.92135839545314]
We propose dynamic prior knowledge (DPK), which integrates part of the teacher's features as prior knowledge before feature distillation.
Our DPK makes the performance of the student model positively correlated with that of the teacher model, meaning that we can further boost student accuracy by using larger teachers.
arXiv Detail & Related papers (2022-06-13T11:52:13Z)
- Continual Learning in the Teacher-Student Setup: Impact of Task Similarity [5.1135133995376085]
We study catastrophic forgetting in two-layer networks in the teacher-student setup.
We find that when tasks depend on similar features, intermediate task similarity leads to the greatest forgetting.
We find a complex interplay between both types of similarity, initial transfer/forgetting rates, maximum transfer/forgetting, and long-term transfer/forgetting.
arXiv Detail & Related papers (2021-07-09T12:30:39Z)
- Does Knowledge Distillation Really Work? [106.38447017262183]
We show that while knowledge distillation can improve student generalization, it does not typically work as it is commonly understood.
We identify difficulties in optimization as a key reason for why the student is unable to match the teacher.
arXiv Detail & Related papers (2021-06-10T17:44:02Z)
- Graduate Employment Prediction with Bias [44.38256197478875]
Failing to land a job can have serious social consequences for college students, such as alcohol abuse and suicide.
We develop a framework, i.e., MAYA, to predict students' employment status while considering biases.
arXiv Detail & Related papers (2019-12-27T07:30:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.