Ranking Scientific Papers Using Preference Learning
- URL: http://arxiv.org/abs/2109.01190v1
- Date: Thu, 2 Sep 2021 19:41:47 GMT
- Title: Ranking Scientific Papers Using Preference Learning
- Authors: Nils Dycke, Edwin Simpson, Ilia Kuznetsov, Iryna Gurevych
- Abstract summary: We cast final decision making as a paper ranking problem based on peer review texts and reviewer scores.
We introduce a novel, multi-faceted generic evaluation framework for making final decisions based on peer reviews.
- Score: 48.78161994501516
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Peer review is the main quality control mechanism in academia. Quality of
scientific work has many dimensions; coupled with the subjective nature of the
reviewing task, this makes final decision making based on the reviews and
scores therein very difficult and time-consuming. To assist with this important
task, we cast it as a paper ranking problem based on peer review texts and
reviewer scores. We introduce a novel, multi-faceted generic evaluation
framework for making final decisions based on peer reviews that takes into
account effectiveness, efficiency and fairness of the evaluated system. We
propose a novel approach to paper ranking based on Gaussian Process Preference
Learning (GPPL) and evaluate it on peer review data from the ACL-2018
conference. Our experiments demonstrate the superiority of our GPPL-based
approach over prior work, while highlighting the importance of using both texts
and review scores for paper ranking during peer review aggregation.
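The paper's GPPL model places a Gaussian process prior over latent paper quality and infers it from pairwise preferences. As a minimal stand-in for that preference-learning step (not the full GPPL model), a Bradley-Terry fit over pairwise comparisons can be sketched as follows; the preference pairs, learning rate, and epoch count here are illustrative assumptions, not the paper's setup:

```python
import math

def fit_preferences(n_items, pairs, lr=0.05, epochs=500):
    """Fit latent quality scores from pairwise preferences (i beats j)
    with a Bradley-Terry model via gradient ascent on the log-likelihood."""
    f = [0.0] * n_items
    for _ in range(epochs):
        for i, j in pairs:
            # P(item i preferred over item j) under the current scores
            p = 1.0 / (1.0 + math.exp(-(f[i] - f[j])))
            g = lr * (1.0 - p)  # gradient of log P(i > j) w.r.t. f[i]
            f[i] += g
            f[j] -= g
    return f

# Toy example: paper 0 is preferred over 1 and 2, and 1 over 2.
pairs = [(0, 1), (0, 2), (1, 2)]
scores = fit_preferences(3, pairs)
ranking = sorted(range(3), key=lambda k: -scores[k])  # best paper first
print(ranking)  # [0, 1, 2]
```

In the GPPL setting such preference pairs could be derived from reviewer score comparisons and review texts; the GP prior additionally shares information across papers via a kernel, which this point-estimate sketch omits.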
Related papers
- Deep Transfer Learning Based Peer Review Aggregation and Meta-review Generation for Scientific Articles [2.0778556166772986]
We address two peer review aggregation challenges: paper acceptance decision-making and meta-review generation.
Firstly, we propose to automate the process of acceptance decision prediction by applying traditional machine learning algorithms.
For the meta-review generation, we propose a transfer learning model based on the T5 model.
arXiv Detail & Related papers (2024-10-05T15:40:37Z)
- Analysis of the ICML 2023 Ranking Data: Can Authors' Opinions of Their Own Papers Assist Peer Review in Machine Learning? [52.00419656272129]
We conducted an experiment during the 2023 International Conference on Machine Learning (ICML)
We received 1,342 rankings, each from a distinct author, pertaining to 2,592 submissions.
We focus on the Isotonic Mechanism, which calibrates raw review scores using author-provided rankings.
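Calibrating raw scores against an author-provided ranking amounts to isotonic regression: find the closest score sequence (in squared error) that is consistent with the claimed order. A minimal sketch using the pool-adjacent-violators algorithm, with hypothetical example scores:

```python
def calibrate(raw_scores):
    """Non-increasing isotonic L2 regression via pool-adjacent-violators.

    `raw_scores` are review scores listed in the author's ranking order,
    best-ranked paper first; the output is the closest non-increasing
    sequence, i.e. scores consistent with the claimed ranking.
    """
    blocks = []  # each block: [mean, count]
    for s in raw_scores:
        blocks.append([float(s), 1])
        # pool adjacent blocks while an earlier mean is below a later one
        while len(blocks) > 1 and blocks[-2][0] < blocks[-1][0]:
            m2, c2 = blocks.pop()
            m1, c1 = blocks.pop()
            blocks.append([(m1 * c1 + m2 * c2) / (c1 + c2), c1 + c2])
    out = []
    for mean, count in blocks:
        out.extend([mean] * count)
    return out

# Author ranks paper A above B above C; raw scores arrive as 5, 6, 4.
# The violating pair (5, 6) is pooled into its mean.
print(calibrate([5, 6, 4]))  # [5.5, 5.5, 4.0]
```

This is only the regression core; the Isotonic Mechanism's incentive analysis (why authors report truthfully) is the paper's contribution and is not captured here.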
arXiv Detail & Related papers (2024-08-24T01:51:23Z) - GLIMPSE: Pragmatically Informative Multi-Document Summarization for Scholarly Reviews [25.291384842659397]
We introduce GLIMPSE, a summarization method designed to offer a concise yet comprehensive overview of scholarly reviews.
Unlike traditional consensus-based methods, GLIMPSE extracts both common and unique opinions from the reviews.
arXiv Detail & Related papers (2024-06-11T15:27:01Z)
- A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [58.6354685593418]
This paper proposes several article-level, field-normalized, and large language model-empowered bibliometric indicators to evaluate reviews.
The newly emerging AI-generated literature reviews are also appraised.
This work offers insights into the current challenges of literature reviews and envisions future directions for their development.
arXiv Detail & Related papers (2024-02-20T11:28:50Z)
- Eliciting Honest Information From Authors Using Sequential Review [13.424398627546788]
We propose a sequential review mechanism that can truthfully elicit the ranking information from authors.
The key idea is to review an author's papers in a sequence based on the provided ranking, conditioning the review of each subsequent paper on the review scores of the previous papers.
arXiv Detail & Related papers (2023-11-24T17:27:39Z)
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study of fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z)
- Integrating Rankings into Quantized Scores in Peer Review [61.27794774537103]
In peer review, reviewers are usually asked to provide scores for the papers, but these quantized scores can be difficult to compare across reviewers.
To mitigate this issue, conferences have started to ask reviewers to additionally provide a ranking of the papers they have reviewed.
There is no standard procedure for using this ranking information, and Area Chairs may use it in different ways.
We take a principled approach to integrate the ranking information into the scores.
arXiv Detail & Related papers (2022-04-05T19:39:13Z)
- Hierarchical Bi-Directional Self-Attention Networks for Paper Review Rating Recommendation [81.55533657694016]
We propose a Hierarchical bi-directional self-attention Network framework (HabNet) for paper review rating prediction and recommendation.
Specifically, we leverage the hierarchical structure of the paper reviews with three levels of encoders: sentence encoder (level one), intra-review encoder (level two), and inter-review encoder (level three).
We are able to identify useful predictors to make the final acceptance decision, as well as to help discover the inconsistency between numerical review ratings and text sentiment conveyed by reviewers.
arXiv Detail & Related papers (2020-11-02T08:07:50Z)
- Aspect-based Sentiment Analysis of Scientific Reviews [12.472629584751509]
We show that the distribution of aspect-based sentiments obtained from a review is significantly different for accepted and rejected papers.
As a second objective, we quantify the extent of disagreement among the reviewers refereeing a paper.
We also investigate the extent of disagreement between the reviewers and the chair and find that the inter-reviewer disagreement may have a link to the disagreement with the chair.
arXiv Detail & Related papers (2020-06-05T07:06:01Z)
- Systematic Review of Approaches to Improve Peer Assessment at Scale [5.067828201066184]
This review focuses on three facets of Peer Assessment (PA): auto-grading and peer assessment tools (we look only at how peer reviews and auto-grading are carried out), strategies to handle rogue reviews, and peer review improvement using natural language processing.
arXiv Detail & Related papers (2020-01-27T15:59:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.