Students' Engagement in Anonymous Peer Review: Using the Open-Source
Sakai Platform
- URL: http://arxiv.org/abs/2108.09955v1
- Date: Mon, 23 Aug 2021 05:56:26 GMT
- Title: Students' Engagement in Anonymous Peer Review: Using the Open-Source
Sakai Platform
- Authors: Fazlyn Petersen and Bradley Groenewald
- Abstract summary: The research used self-determination theory as its theoretical basis.
Perceived autonomy was supported, as the anonymous peer review helped students empower themselves.
Perceived competence was also achieved, as the anonymous peer review improved the quality of submitted work and developed workplace skills.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: There is a need to provide quality education to all students
without discrimination or prejudice. However, there are challenges in
implementing quality education in large classes, especially during remote
learning. The literature indicates that providing lecturer feedback can become
a tedious task in large classes, and that involving students in the peer
review process can improve the quality of their submissions. This
research used a case study and thematic analysis. Qualitative data were
collected from 179 third-year Information Systems students who used the
open-source Sakai platform. Students reviewed another student's report without
knowing their identity. The research used self-determination theory as its
theoretical basis. Perceived autonomy was supported, as the anonymous peer
review helped students empower themselves. Perceived competence was also
achieved, as the anonymous peer review improved the quality of submitted work
and developed workplace skills. Perceived relatedness was supported, as
students indicated that the anonymous peer review allowed them to learn from
their peers. It also improved their understanding and their ability to see
errors in their own work. Despite the negative aspects identified in using the
Sakai platform, it may provide a viable alternative for providing feedback
remotely, especially during the Covid-19 pandemic.
Related papers
- What Can Natural Language Processing Do for Peer Review? [173.8912784451817]
In modern science, peer review is widely used, yet it is hard, time-consuming, and prone to error.
Since the artifacts involved in peer review are largely text-based, Natural Language Processing has great potential to improve reviewing.
We detail each step of the process from manuscript submission to camera-ready revision, and discuss the associated challenges and opportunities for NLP assistance.
arXiv Detail & Related papers (2024-05-10T16:06:43Z)
- Enhancing Students' Learning Process Through Self-Generated Tests [0.0]
This paper describes an educational experiment aimed at the promotion of students' autonomous learning.
The main idea is to make the student feel part of the evaluation process by including students' questions in the evaluation exams.
Questions uploaded by students are visible to every enrolled student as well as to each involved teacher.
arXiv Detail & Related papers (2024-03-21T09:49:33Z)
- Faithful Knowledge Distillation [75.59907631395849]
We focus on two crucial questions with regard to a teacher-student pair: (i) do the teacher and student disagree at points close to correctly classified dataset examples, and (ii) is the distilled student as confident as the teacher around dataset examples?
These are critical questions when considering the deployment of a smaller student network trained from a robust teacher within a safety-critical setting.
arXiv Detail & Related papers (2023-06-07T13:41:55Z)
- Giving Feedback on Interactive Student Programs with Meta-Exploration [74.5597783609281]
Developing interactive software, such as websites or games, is a particularly engaging way to learn computer science.
Standard approaches require instructors to manually grade student-implemented interactive programs.
Online platforms that serve millions, like Code.org, are unable to provide any feedback on assignments for implementing interactive programs.
arXiv Detail & Related papers (2022-11-16T10:00:23Z)
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities on multiple protected attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z)
- Plagiarism deterrence for introductory programming [11.612194979331179]
A class-wide statistical characterization can be clearly shared with students via an intuitive new p-value.
A pairwise, compression-based similarity detection algorithm captures relationships between assignments more accurately.
An unbiased scoring system aids students and the instructor in understanding true independence of effort.
arXiv Detail & Related papers (2022-06-06T18:47:25Z)
- A literature survey on student feedback assessment tools and their usage in sentiment analysis [0.0]
We evaluate the effectiveness of various in-class feedback assessment methods such as Kahoot!, Mentimeter, Padlet, and polling.
We propose a sentiment analysis model for extracting the explicit suggestions from the students' qualitative feedback comments.
arXiv Detail & Related papers (2021-09-09T06:56:30Z)
- Polarity in the Classroom: A Case Study Leveraging Peer Sentiment Toward Scalable Assessment [4.588028371034406]
Accurately grading open-ended assignments in large or massive open online courses (MOOCs) is non-trivial.
In this work, we detail the process by which we create our domain-dependent lexicon and aspect-informed review form.
We end by analyzing validity and discussing conclusions from our corpus of over 6800 peer reviews from nine courses.
arXiv Detail & Related papers (2021-08-02T15:45:11Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- Are Top School Students More Critical of Their Professors? Mining Comments on RateMyProfessor.com [83.2634062100579]
Student reviews and comments on RateMyProfessor.com reflect realistic learning experiences of students.
Our study shows that student reviews and comments contain crucial information and can serve as essential references for enrollment in courses and universities.
arXiv Detail & Related papers (2021-01-23T20:01:36Z)
- Understanding Peer Review of Software Engineering Papers [5.744593856232663]
We aim at understanding how reviewers, including those who have won awards for reviewing, perform their reviews of software engineering papers.
The most important features of papers that result in positive reviews are clear and supported validation, an interesting problem, and novelty.
Authors should make the contribution of the work very clear in their paper.
arXiv Detail & Related papers (2020-09-02T17:31:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.