Systematic Review of Approaches to Improve Peer Assessment at Scale
- URL: http://arxiv.org/abs/2001.10617v1
- Date: Mon, 27 Jan 2020 15:59:24 GMT
- Title: Systematic Review of Approaches to Improve Peer Assessment at Scale
- Authors: Manikandan Ravikiran
- Abstract summary: This review focuses on three facets of Peer Assessment (PA): auto-grading and peer assessment tools (we look only at how peer reviews and auto-grading are carried out), strategies to handle rogue reviews, and peer review improvement using Natural Language Processing.
- Score: 5.067828201066184
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Peer Assessment (PA), the task of analyzing and commenting on a
student's writing by peers, is a core component of education both on campus
and in MOOCs. Given the sheer scale of MOOCs and their inherently
personalized, open-ended learning, automatic grading and tools that assist
grading at scale are highly important. Previously we presented a survey on the
tasks of post classification and knowledge tracing, ending with a brief review
of Peer Assessment (PA) and some of its initial problems. In this review we
continue the review of PA from the perspective of improving the review process
itself. The rest of this review therefore focuses on three facets of PA:
auto-grading and peer assessment tools (we look only at how peer reviews and
auto-grading are carried out), strategies to handle rogue reviews, and peer
review improvement using Natural Language Processing. The consolidated set of
papers and resources used is released at
https://github.com/manikandan-ravikiran/cs6460-Survey-2.
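One common strategy for the rogue-review facet above is to flag reviewers whose scores deviate sharply from their peers on the same submission. The sketch below is a minimal illustration of that idea, assuming simple numeric rubric scores; the function name and the median/MAD threshold are illustrative assumptions, not a method taken from the survey.

```python
from statistics import median

def flag_rogue_reviews(scores_by_submission, threshold=2.0):
    """Flag peer reviews whose score deviates strongly from the
    per-submission median (a robust stand-in for consensus).

    scores_by_submission: dict mapping submission_id ->
        list of (reviewer_id, score) pairs.
    Returns (submission_id, reviewer_id) pairs worth auditing.
    Illustrative assumption: a review is "rogue" if its absolute
    deviation from the median exceeds `threshold` times the median
    absolute deviation (MAD) of that submission's scores.
    """
    flagged = []
    for sub_id, reviews in scores_by_submission.items():
        values = [score for _, score in reviews]
        if len(values) < 3:
            continue  # too few reviews to call anything an outlier
        med = median(values)
        mad = median(abs(v - med) for v in values) or 1e-9
        for reviewer_id, score in reviews:
            if abs(score - med) / mad > threshold:
                flagged.append((sub_id, reviewer_id))
    return flagged

# Example: reviewer "r3" gives an outlying score on submission "hw1".
reviews = {"hw1": [("r1", 8.0), ("r2", 7.5), ("r3", 2.0), ("r4", 8.5)]}
print(flag_rogue_reviews(reviews))  # [('hw1', 'r3')]
```

In practice such flags would feed a re-review or moderation queue rather than automatically discarding the review.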
Related papers
- GLIMPSE: Pragmatically Informative Multi-Document Summarization for Scholarly Reviews [25.291384842659397]
We introduce GLIMPSE, a summarization method designed to offer a concise yet comprehensive overview of scholarly reviews.
Unlike traditional consensus-based methods, GLIMPSE extracts both common and unique opinions from the reviews.
arXiv Detail & Related papers (2024-06-11T15:27:01Z)
- A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [58.6354685593418]
This paper proposes several article-level, field-normalized, and large language model-empowered bibliometric indicators to evaluate reviews.
The newly emerging AI-generated literature reviews are also appraised.
This work offers insights into the current challenges of literature reviews and envisions future directions for their development.
arXiv Detail & Related papers (2024-02-20T11:28:50Z)
- OpinSummEval: Revisiting Automated Evaluation for Opinion Summarization [52.720711541731205]
We present OpinSummEval, a dataset comprising human judgments and outputs from 14 opinion summarization models.
Our findings indicate that metrics based on neural networks generally outperform non-neural ones.
arXiv Detail & Related papers (2023-10-27T13:09:54Z)
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, author prestige, and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z)
- Integrating Rankings into Quantized Scores in Peer Review [61.27794774537103]
In peer review, reviewers are usually asked to provide scores for the papers, but such scores can be noisy and miscalibrated across reviewers.
To mitigate this issue, conferences have started to ask reviewers to additionally provide a ranking of the papers they have reviewed.
There is no standard procedure for using this ranking information, and Area Chairs may use it in different ways.
We take a principled approach to integrating the ranking information into the scores; a minimal illustrative sketch of reconciling scores with a ranking appears after this list.
arXiv Detail & Related papers (2022-04-05T19:39:13Z)
- Ranking Scientific Papers Using Preference Learning [48.78161994501516]
We cast final decision-making as a paper ranking problem based on peer review texts and reviewer scores.
We introduce a novel, multi-faceted generic evaluation framework for making final decisions based on peer reviews.
arXiv Detail & Related papers (2021-09-02T19:41:47Z)
- Polarity in the Classroom: A Case Study Leveraging Peer Sentiment Toward Scalable Assessment [4.588028371034406]
Accurately grading open-ended assignments in large or massive open online courses (MOOCs) is non-trivial.
In this work, we detail the process by which we create our domain-dependent lexicon and aspect-informed review form; an illustrative sketch of lexicon-based review scoring appears after this list.
We end by analyzing validity and discussing conclusions from our corpus of over 6800 peer reviews from nine courses.
arXiv Detail & Related papers (2021-08-02T15:45:11Z)
- Linking open-source code commits and MOOC grades to evaluate massive online open peer review [0.0]
We link data from public code repositories on GitHub and course grades for a large massive-online open course to study the dynamics of massive scale peer review.
We find three distinct repeated peer-review submissions and use these to study how grades change in response to changes in code submissions.
Our exploration also leads to an important observation: massive-scale peer-review scores are highly variable, increase on average with repeated submissions, and change in ways that are not closely tied to the code changes that form the basis for the re-submissions.
arXiv Detail & Related papers (2021-04-15T18:27:01Z)
- Hierarchical Bi-Directional Self-Attention Networks for Paper Review Rating Recommendation [81.55533657694016]
We propose a Hierarchical bi-directional self-attention Network framework (HabNet) for paper review rating prediction and recommendation.
Specifically, we leverage the hierarchical structure of the paper reviews with three levels of encoders: a sentence encoder (level one), an intra-review encoder (level two), and an inter-review encoder (level three); a minimal structural sketch of such a three-level encoder appears after this list.
We are able to identify useful predictors to make the final acceptance decision, as well as to help discover the inconsistency between numerical review ratings and text sentiment conveyed by reviewers.
arXiv Detail & Related papers (2020-11-02T08:07:50Z)
- Leveraging Peer Feedback to Improve Visualization Education [4.679788938455095]
We discuss the construction and application of peer review in a computer science visualization course.
We evaluate student projects, peer review text, and a post-course questionnaire from 3 semesters of mixed undergraduate and graduate courses.
arXiv Detail & Related papers (2020-01-12T21:46:58Z)
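As referenced in the "Integrating Rankings into Quantized Scores in Peer Review" entry above, the sketch below shows one plausible way to reconcile a reviewer's scores with their stated ranking: isotonic regression via the pool-adjacent-violators algorithm. This is an assumed illustration of the general idea, not the paper's actual method, and all names are hypothetical.

```python
def reconcile_scores_with_ranking(scores, ranking):
    """Adjust a reviewer's scores to be consistent with the reviewer's
    stated ranking while moving each score as little as possible.

    scores: dict mapping paper id -> raw score.
    ranking: paper ids ordered from best to worst.
    Illustrative assumption: consistency is enforced with the
    pool-adjacent-violators algorithm (isotonic regression), a
    standard way to make a sequence monotone in least squares.
    """
    ordered = [scores[pid] for pid in ranking]  # best-to-worst order
    merged = []  # stack of [block mean, block size]
    for value in ordered:
        merged.append([value, 1])
        # Pool adjacent blocks while the sequence is not non-increasing.
        while len(merged) > 1 and merged[-2][0] < merged[-1][0]:
            m2, s2 = merged.pop()
            m1, s1 = merged.pop()
            merged.append([(m1 * s1 + m2 * s2) / (s1 + s2), s1 + s2])
    adjusted = []
    for mean, size in merged:
        adjusted.extend([mean] * size)
    return dict(zip(ranking, adjusted))

# A reviewer scores paper "B" above "A" but ranks "A" first:
scores = {"A": 5.0, "B": 7.0, "C": 4.0}
print(reconcile_scores_with_ranking(scores, ["A", "B", "C"]))
# {'A': 6.0, 'B': 6.0, 'C': 4.0}: papers "A" and "B" are pooled.
```

Pooling ties the violating pair at their average, which is the least-squares projection onto the set of rank-consistent score vectors.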
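As referenced in the "Polarity in the Classroom" entry above, the sketch below conveys the general flavor of lexicon-based sentiment scoring of peer-review text. The tiny lexicon and the averaging rule are invented for illustration; the paper's domain-dependent lexicon and aspect-informed review form are more elaborate.

```python
import re

# Hypothetical fragment of a domain-dependent polarity lexicon;
# a real lexicon would be built from course review data.
LEXICON = {
    "clear": 1.0, "thorough": 1.0, "correct": 0.5,
    "confusing": -1.0, "incomplete": -0.5, "wrong": -1.0,
}

def polarity_score(review_text):
    """Average polarity of lexicon words found in a peer review.

    Returns 0.0 when no lexicon word appears (neutral by assumption).
    """
    tokens = re.findall(r"[a-z']+", review_text.lower())
    hits = [LEXICON[t] for t in tokens if t in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

print(polarity_score("The writeup is clear and thorough, but the "
                     "proof of lemma 2 is incomplete."))  # 0.5
```

A per-aspect variant would keep separate lexicons (e.g., for correctness and clarity) and report one polarity per rubric aspect.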
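Finally, as referenced in the HabNet entry above, the sketch below shows the structural shape of a three-level hierarchical encoder in PyTorch. The layer sizes, the use of `nn.TransformerEncoder` for bi-directional (unmasked) self-attention, and the mean pooling between levels are all assumptions made for brevity, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class HierarchicalReviewEncoder(nn.Module):
    """Sketch of a three-level review encoder: sentence level,
    intra-review level, and inter-review level, each using
    bi-directional (unmasked) self-attention."""

    def __init__(self, d_model=64, nhead=4):
        super().__init__()
        def enc():
            layer = nn.TransformerEncoderLayer(
                d_model, nhead, batch_first=True)
            return nn.TransformerEncoder(layer, num_layers=1)
        self.sentence_enc = enc()   # level one: tokens -> sentence
        self.review_enc = enc()     # level two: sentences -> review
        self.paper_enc = enc()      # level three: reviews -> paper
        self.head = nn.Linear(d_model, 1)  # rating prediction

    def forward(self, x):
        # x: (reviews, sentences, tokens, d_model) word embeddings
        r, s, t, d = x.shape
        sent = self.sentence_enc(x.reshape(r * s, t, d)).mean(dim=1)
        review = self.review_enc(sent.reshape(r, s, d)).mean(dim=1)
        paper = self.paper_enc(review.unsqueeze(0)).mean(dim=1)
        return self.head(paper)  # predicted review rating

# Toy input: 3 reviews x 4 sentences x 10 tokens x 64-dim embeddings.
model = HierarchicalReviewEncoder()
print(model(torch.randn(3, 4, 10, 64)).shape)  # torch.Size([1, 1])
```

Comparing the predicted rating against the sentiment of the review text is one way to surface the rating/sentiment inconsistencies the entry mentions.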
This list is automatically generated from the titles and abstracts of the papers on this site.