From Authors to Reviewers: Leveraging Rankings to Improve Peer Review
- URL: http://arxiv.org/abs/2510.21726v1
- Date: Fri, 26 Sep 2025 19:16:09 GMT
- Title: From Authors to Reviewers: Leveraging Rankings to Improve Peer Review
- Authors: Weichen Wang, Chengchun Shi
- Abstract summary: The review quality of machine learning (ML) conferences has become a major concern in recent years. We propose an approach that leverages ranking information from reviewers rather than authors. Our results show that (i) incorporating ranking information from reviewers can significantly improve the evaluation of each paper's quality, often outperforming the use of ranking information from authors alone.
- Score: 10.541357028178831
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper is a discussion of the 2025 JASA discussion paper by Su et al. (2025). We would like to congratulate the authors on conducting a comprehensive and insightful empirical investigation of the 2023 ICML ranking data. The review quality of machine learning (ML) conferences has become a major concern in recent years, due to the rapidly growing number of submitted manuscripts. In this discussion, we propose an approach alternative to Su et al. (2025) that leverages ranking information from reviewers rather than authors. We simulate review data that closely mimics the 2023 ICML conference submissions. Our results show that (i) incorporating ranking information from reviewers can significantly improve the evaluation of each paper's quality, often outperforming the use of ranking information from authors alone; and (ii) combining ranking information from both reviewers and authors yields the most accurate evaluation of submitted papers in most scenarios.
Related papers
- Recommending Best Paper Awards for ML/AI Conferences via the Isotonic Mechanism [10.746401441903174]
We introduce an author-assisted mechanism to facilitate the selection of best paper awards. Our method employs the Isotonic Mechanism for eliciting authors' assessments of their own submissions. We prove that truthfulness holds even when the utility function is merely nondecreasing and additive.
arXiv Detail & Related papers (2026-01-21T18:30:42Z) - The ICML 2023 Ranking Experiment: Examining Author Self-Assessment in ML/AI Peer Review [49.43514488610211]
Author-provided rankings could be leveraged to improve peer review processes at machine learning conferences. We focus on the Isotonic Mechanism, which calibrates raw review scores using the author-provided rankings. We propose several cautious, low-risk applications of the Isotonic Mechanism and author-provided rankings in peer review.
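At its core, the Isotonic Mechanism's calibration step is an isotonic regression: raw review scores are projected onto the ordering implied by the author's ranking. The sketch below illustrates that step with the classic pool-adjacent-violators algorithm; the function name and interface are illustrative, and the actual mechanism analyzed by Su et al. involves further details (truthfulness conditions, utility assumptions) not shown here.

```python
def isotonic_calibrate(scores, ranking):
    """Project raw scores onto the order implied by `ranking`
    (ranking[0] = the paper the author ranks highest), minimizing
    squared error, via pool-adjacent-violators (PAVA).
    Illustrative sketch of the Isotonic Mechanism's core step only."""
    # Reorder scores from the author's best paper to worst.
    y = [scores[i] for i in ranking]
    # PAVA enforcing a non-increasing sequence of calibrated scores:
    # merge adjacent blocks whenever a later block's mean exceeds
    # an earlier block's mean.
    blocks = []  # list of (sum, count) pairs
    for v in y:
        blocks.append((v, 1))
        while len(blocks) > 1 and (
            blocks[-2][0] / blocks[-2][1] < blocks[-1][0] / blocks[-1][1]
        ):
            s2, c2 = blocks.pop()
            s1, c1 = blocks.pop()
            blocks.append((s1 + s2, c1 + c2))
    fitted = []
    for s, c in blocks:
        fitted.extend([s / c] * c)
    # Map calibrated scores back to the original paper indices.
    out = [0.0] * len(scores)
    for pos, paper in enumerate(ranking):
        out[paper] = fitted[pos]
    return out
```

For example, if paper 0 scores 2.0 and paper 1 scores 6.0 but the author ranks paper 0 above paper 1, the two scores violate the ranking and get pooled to their mean of 4.0 each; scores already consistent with the ranking pass through unchanged.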
arXiv Detail & Related papers (2024-08-24T01:51:23Z) - Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews [51.453135368388686]
We present an approach for estimating the fraction of text in a large corpus which is likely to be substantially modified or produced by a large language model (LLM).
Our maximum likelihood model leverages expert-written and AI-generated reference texts to accurately and efficiently examine real-world LLM-use at the corpus level.
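Under a simple mixture reading of this abstract, one could estimate the corpus-level fraction alpha by maximizing the likelihood of alpha * p_AI + (1 - alpha) * p_human over documents, where the two per-document likelihoods come from reference models fit on AI-generated and expert-written texts. The grid-search sketch below assumes those per-document likelihoods are given as inputs (`p_ai`, `p_human` are hypothetical names); the paper's actual model differs in its details.

```python
import math

def estimate_llm_fraction(p_ai, p_human, grid=1000):
    """Grid-search MLE of the mixture weight alpha in
    alpha * p_ai[i] + (1 - alpha) * p_human[i], summed in log over
    documents i. Inputs are per-document likelihoods under the
    AI-text and human-text reference models (assumed precomputed)."""
    best_alpha, best_ll = 0.0, float("-inf")
    for k in range(grid + 1):
        a = k / grid
        ll = sum(
            math.log(a * pa + (1 - a) * ph)
            for pa, ph in zip(p_ai, p_human)
        )
        if ll > best_ll:
            best_alpha, best_ll = a, ll
    return best_alpha
```

The log-likelihood in alpha is concave, so the one-dimensional grid search reliably finds the maximizer; in practice one would use a convex solver or EM instead of a grid.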
arXiv Detail & Related papers (2024-03-11T21:51:39Z) - Has the Machine Learning Review Process Become More Arbitrary as the Field Has Grown? The NeurIPS 2021 Consistency Experiment [86.77085171670323]
We present a larger-scale variant of the 2014 NeurIPS experiment in which 10% of conference submissions were reviewed by two independent committees to quantify the randomness in the review process.
We observe that the two committees disagree on their accept/reject recommendations for 23% of the papers and that, consistent with the results from 2014, approximately half of the list of accepted papers would change if the review process were randomly rerun.
arXiv Detail & Related papers (2023-06-05T21:26:12Z) - Integrating Rankings into Quantized Scores in Peer Review [61.27794774537103]
In peer review, reviewers are usually asked to provide scores for the papers.
To mitigate this issue, conferences have started to ask reviewers to additionally provide a ranking of the papers they have reviewed.
There is no standard procedure for using this ranking information, and Area Chairs may use it in different ways.
We take a principled approach to integrate the ranking information into the scores.
arXiv Detail & Related papers (2022-04-05T19:39:13Z) - Matching Papers and Reviewers at Large Conferences [25.79501640609188]
This paper studies a novel reviewer-paper matching approach that was recently deployed in the 35th AAAI Conference on Artificial Intelligence (AAAI 2021)
This approach has three main elements: (1) collecting and processing input data to identify problematic matches and generate reviewer-paper scores; (2) formulating and solving an optimization problem to find good reviewer-paper matchings; and (3) the introduction of a novel, two-phase reviewing process that shifted reviewing resources away from papers likely to be rejected and towards papers closer to the decision boundary.
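Element (2) above, finding a good reviewer-paper matching from a score matrix, is an assignment problem. As a toy illustration only (real conference systems solve a large integer program with load and conflict constraints, not shown here), the sketch below brute-forces the one-reviewer-per-paper assignment that maximizes total affinity; `affinity[r][p]` stands in for the reviewer-paper scores produced in element (1).

```python
from itertools import permutations

def best_matching(affinity):
    """Brute-force the reviewer-to-paper assignment maximizing total
    affinity, with exactly one reviewer per paper. Toy stand-in for
    the optimization formulated at AAAI scale; only feasible for a
    handful of reviewers, since it enumerates all n! assignments."""
    n = len(affinity)
    best, best_score = None, float("-inf")
    for perm in permutations(range(n)):
        # perm[r] is the paper assigned to reviewer r.
        score = sum(affinity[r][p] for r, p in enumerate(perm))
        if score > best_score:
            best, best_score = list(perm), score
    return best, best_score
```

At realistic scale one would replace the enumeration with a polynomial-time solver (e.g. the Hungarian algorithm) or a general integer-programming formulation that also encodes reviewer loads and conflicts of interest.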
arXiv Detail & Related papers (2022-02-24T18:13:43Z) - Investigating Crowdsourcing Protocols for Evaluating the Factual Consistency of Summaries [59.27273928454995]
Current pre-trained models applied to summarization are prone to factual inconsistencies which misrepresent the source text or introduce extraneous information.
We create a crowdsourcing evaluation framework for factual consistency using the rating-based Likert scale and ranking-based Best-Worst Scaling protocols.
We find that ranking-based protocols offer a more reliable measure of summary quality across datasets, while the reliability of Likert ratings depends on the target dataset and the evaluation design.
arXiv Detail & Related papers (2021-09-19T19:05:00Z) - Ranking Scientific Papers Using Preference Learning [48.78161994501516]
We cast it as a paper ranking problem based on peer review texts and reviewer scores.
We introduce a novel, multi-faceted generic evaluation framework for making final decisions based on peer reviews.
arXiv Detail & Related papers (2021-09-02T19:41:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.