Using 3D printed badges to improve student performance and reduce
dropout rates in STEM higher education
- URL: http://arxiv.org/abs/2303.08939v2
- Date: Fri, 19 May 2023 16:05:46 GMT
- Title: Using 3D printed badges to improve student performance and reduce
dropout rates in STEM higher education
- Authors: Raúl Lara-Cabrera and Fernando Ortega and Edgar Talavera and Daniel
López-Fernández
- Abstract summary: This contribution hypothesizes that the use of badges, both physical and virtual, improves student performance and reduces dropout rates.
The results show that the use of badges improves student performance and reduces dropout rates.
- Score: 59.58137104470497
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Students' perception of excessive difficulty in STEM degrees lowers their
motivation and therefore affects their performance. According to prior
research, the use of gamification techniques promotes engagement, motivation and
fun when learning. Badges, distinctions awarded to students as a reward, are a
well-known gamification tool. This contribution hypothesizes
that the use of badges, both physical and virtual, improves student performance
and reduces dropout rates. To verify that hypothesis, a case study involving 99
students enrolled in a Databases course in computer engineering degrees was
conducted. The results show that the use of badges improves student
performance and reduces dropout rates. However, negligible differences were
found between the use of different kinds of badges.
Related papers
- Logit Standardization in Knowledge Distillation [83.31794439964033]
The assumption of a shared temperature between teacher and student implies a mandatory exact match between their logits in terms of logit range and variance.
We propose setting the temperature as the weighted standard deviation of the logits and performing a plug-and-play Z-score pre-process of logit standardization.
Our pre-process enables the student to focus on the essential logit relations from the teacher rather than requiring a magnitude match, and can improve the performance of existing logit-based distillation methods.
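The Z-score pre-process can be made concrete with a minimal sketch. The snippet below (an illustration only, not the authors' reference implementation) standardizes each sample's logits to zero mean and unit variance before the temperature-softened softmax, so the student matches the teacher's logit relations rather than their magnitudes; the fixed `base_temperature` value and the function names are assumptions.

```python
import numpy as np

def standardize_logits(logits, eps=1e-8):
    """Z-score standardize each sample's logits: zero mean, unit variance."""
    mean = logits.mean(axis=-1, keepdims=True)
    std = logits.std(axis=-1, keepdims=True)
    return (logits - mean) / (std + eps)

def log_softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # shift for numerical stability
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

def distillation_kl(student_logits, teacher_logits, base_temperature=2.0):
    """KL(teacher || student) computed on standardized, softened logits."""
    zs = standardize_logits(student_logits) / base_temperature
    zt = standardize_logits(teacher_logits) / base_temperature
    log_p_t = log_softmax(zt)
    log_p_s = log_softmax(zs)
    p_t = np.exp(log_p_t)
    return float(np.mean(np.sum(p_t * (log_p_t - log_p_s), axis=-1)))

# Toy batch: 2 samples, 5 classes; only the relative logit relations matter,
# since the per-sample scale is removed by the standardization step.
teacher = np.array([[4.0, 1.0, 0.5, 0.2, 0.1], [0.3, 3.5, 0.4, 0.2, 0.6]])
student = np.array([[2.0, 0.8, 0.4, 0.3, 0.2], [0.2, 1.8, 0.5, 0.1, 0.4]])
print(distillation_kl(student, teacher))
```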
arXiv Detail & Related papers (2024-03-03T07:54:03Z)
- What is Lost in Knowledge Distillation? [4.1205832766381985]
Deep neural networks (DNNs) have improved NLP tasks significantly, but training and maintaining such networks can be costly.
Model compression techniques, such as knowledge distillation (KD), have been proposed to address the issue.
Our work investigates how a distilled student model differs from its teacher, whether the distillation process causes any information loss, and whether the loss follows a specific pattern.
arXiv Detail & Related papers (2023-11-07T17:13:40Z)
- Improving Students With Rubric-Based Self-Assessment and Oral Feedback [2.808134646037882]
Rubrics and oral feedback are approaches that help students improve performance and meet learning outcomes.
This paper evaluates the effect of rubrics and oral feedback on student learning outcomes.
arXiv Detail & Related papers (2023-07-24T14:48:28Z)
- Teacher's pet: understanding and mitigating biases in distillation [61.44867470297283]
Several works have shown that distillation significantly boosts the student's overall performance.
However, are these gains uniform across all data subgroups?
We show that distillation can harm performance on certain subgroups.
We present techniques which soften the teacher influence for subgroups where it is less reliable.
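The abstract does not spell out how the teacher's influence is softened, so the following is only a hedged sketch of the idea: scale the per-example distillation weight by an assumed measure of teacher reliability (here, per-subgroup teacher accuracy), so that examples from unreliable subgroups lean more on the ground-truth labels. The weighting scheme and all names are assumptions, not the paper's method.

```python
import numpy as np

def subgroup_softened_loss(ce_loss, kd_loss, subgroup_ids, teacher_subgroup_acc,
                           alpha=0.5):
    """Blend hard-label and distillation losses per example.

    The distillation weight is scaled by an assumed per-subgroup teacher
    accuracy in [0, 1], so unreliable subgroups rely more on the true labels.
    Purely illustrative.
    """
    reliability = teacher_subgroup_acc[subgroup_ids]   # shape (batch,)
    kd_weight = alpha * reliability
    return float(np.mean((1.0 - kd_weight) * ce_loss + kd_weight * kd_loss))

# Toy batch: per-example losses and subgroup membership.
ce = np.array([0.9, 1.2, 0.4, 0.7])   # cross-entropy vs. true labels
kd = np.array([0.5, 0.8, 0.3, 0.6])   # divergence from the teacher
groups = np.array([0, 1, 0, 1])       # subgroup id per example
acc = np.array([0.95, 0.60])          # assumed teacher accuracy per subgroup
print(subgroup_softened_loss(ce, kd, groups, acc))
```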
arXiv Detail & Related papers (2021-06-19T13:06:25Z)
- Assessing Attendance by Peer Information [1.0294998767664172]
We propose a novel method called Relative Attendance Index (RAI) to measure attendance rates.
While traditional attendance measures focus on the record of a single person or course, relative attendance emphasizes the peer attendance information of relevant individuals or courses.
Experimental results on real-life data show that RAI can indeed better reflect student engagement.
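The summary does not give the RAI formula, so the sketch below only illustrates the general notion of relative attendance: each student's attendance in a course is compared with the mean attendance of their peers in the same course. The ratio form and the function name are assumptions, not the paper's exact definition.

```python
import numpy as np

def relative_attendance(attendance, eps=1e-8):
    """Illustrative 'relative attendance' scores (assumed formulation).

    `attendance` is a (students x courses) matrix of attendance rates in
    [0, 1]. Each entry is divided by the mean attendance of the other
    students in the same course, so a score above 1 means attending more
    than one's peers.
    """
    n_students = attendance.shape[0]
    course_totals = attendance.sum(axis=0, keepdims=True)
    peer_mean = (course_totals - attendance) / (n_students - 1)
    return attendance / (peer_mean + eps)

# Toy data: 3 students, 2 courses.
rates = np.array([[0.90, 0.80],
                  [0.50, 0.70],
                  [0.95, 0.60]])
print(relative_attendance(rates))
```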
arXiv Detail & Related papers (2021-06-06T15:00:40Z)
- Distilling Knowledge via Knowledge Review [69.15050871776552]
We study the connection paths across levels between teacher and student networks, and reveal their great importance.
For the first time in knowledge distillation, cross-stage connection paths are proposed.
Our final nested and compact framework requires negligible overhead, and outperforms other methods on a variety of tasks.
arXiv Detail & Related papers (2021-04-19T04:36:24Z)
- Fixing the Teacher-Student Knowledge Discrepancy in Distillation [72.4354883997316]
We propose a novel student-dependent distillation method, knowledge consistent distillation, which makes teacher's knowledge more consistent with the student.
Our method is very flexible and can be easily combined with other state-of-the-art approaches.
arXiv Detail & Related papers (2021-03-31T06:52:20Z)
- Reducing the Teacher-Student Gap via Spherical Knowledge Distillation [67.75526580926149]
Knowledge distillation aims at obtaining a compact and effective model by learning the mapping function from a much larger one.
We investigate the capacity gap problem by studying the gap in confidence between teacher and student.
We find that the magnitude of confidence is not necessary for knowledge distillation and could harm the student's performance if the student is forced to learn the confidence.
arXiv Detail & Related papers (2020-10-15T03:03:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.