Blended PC Peer Review Model: Process and Reflection
- URL: http://arxiv.org/abs/2504.19105v3
- Date: Thu, 07 Aug 2025 04:28:17 GMT
- Title: Blended PC Peer Review Model: Process and Reflection
- Authors: Chakkrit Tantithamthavorn, Nicole Novielli, Ayushi Rastogi, Olga Baysal, Bram Adams,
- Abstract summary: The International Conference on Mining Software Repositories (MSR 2025) experimented with a Blended Program Committee (PC) peer review model for its Technical Track. This paper presents the rationale, implementation, and reflections on the model, including empirical insights from a post-review author survey. Our findings highlight the potential of a Blended PC to alleviate reviewer shortages, foster inclusivity, and sustain a high-quality peer review process.
- Score: 12.91610113966584
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The academic peer review system is under increasing pressure due to a growing volume of submissions and a limited pool of available reviewers, resulting in delayed decisions and an uneven distribution of reviewing responsibilities. Building upon the International Conference on Mining Software Repositories (MSR) community's earlier experience with a Shadow PC (2021 and 2022) and Junior PC (2023 and 2024), MSR 2025 experimented with a Blended Program Committee (PC) peer review model for its Technical Track. This new model pairs up one Junior PC member with two regular PC members as part of the core review team of a given paper, instead of adding them as an extra reviewer. This paper presents the rationale, implementation, and reflections on the model, including empirical insights from a post-review author survey evaluating the quality and usefulness of reviews. Our findings highlight the potential of a Blended PC to alleviate reviewer shortages, foster inclusivity, and sustain a high-quality peer review process. We offer lessons learned and recommendations to guide future adoption and refinement of the model.
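As a reading aid, the structural change can be expressed as an assignment constraint: each paper's core review team contains exactly two regular PC members and one Junior PC member, rather than a regular team with the junior added as an extra reviewer. The sketch below is a minimal, hypothetical illustration; the pools, names, and random selection are assumptions, and it ignores the bidding, expertise matching, conflict-of-interest checks, and load balancing that a real assignment process would handle.

```python
import random

def assign_blended_pc(papers, regular_pc, junior_pc, seed=0):
    """Illustrative only: build a three-person core review team per paper,
    consisting of two regular PC members and one Junior PC member, rather
    than adding the junior as a fourth, extra reviewer."""
    rng = random.Random(seed)
    assignments = {}
    for paper in papers:
        regulars = rng.sample(regular_pc, 2)  # two regular PC members
        junior = rng.choice(junior_pc)        # one Junior PC member
        assignments[paper] = {"regular": regulars, "junior": junior}
    return assignments

if __name__ == "__main__":
    papers = ["paper-01", "paper-02", "paper-03"]
    regular_pc = ["R1", "R2", "R3", "R4"]
    junior_pc = ["J1", "J2"]
    for paper, team in assign_blended_pc(papers, regular_pc, junior_pc).items():
        print(paper, team)
```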
Related papers
- Can LLM feedback enhance review quality? A randomized study of 20K reviews at ICLR 2025 [115.86204862475864]
Review Feedback Agent provides automated feedback to reviewers on vague comments, content misunderstandings, and unprofessional remarks. It was implemented at ICLR 2025 as a large randomized controlled study. 27% of reviewers who received feedback updated their reviews, and over 12,000 feedback suggestions from the agent were incorporated by those reviewers.
arXiv Detail & Related papers (2025-04-13T22:01:25Z)
- Identifying Aspects in Peer Reviews [61.374437855024844]
We develop a data-driven schema for deriving fine-grained aspects from a corpus of peer reviews. We introduce a dataset of peer reviews augmented with aspects and show how it can be used for community-level review analysis.
arXiv Detail & Related papers (2025-04-09T14:14:42Z)
- Generative Adversarial Reviews: When LLMs Become the Critic [1.2430809884830318]
We introduce Generative Agent Reviewers (GAR), leveraging LLM-empowered agents to simulate faithful peer reviewers. Central to this approach is a graph-based representation of manuscripts, condensing content and logically organizing information. Our experiments demonstrate that GAR performs comparably to human reviewers in providing detailed feedback and predicting paper outcomes.
arXiv Detail & Related papers (2024-12-09T06:58:17Z)
- Analysis of the ICML 2023 Ranking Data: Can Authors' Opinions of Their Own Papers Assist Peer Review in Machine Learning? [52.00419656272129]
We conducted an experiment during the 2023 International Conference on Machine Learning (ICML).
We received 1,342 rankings, each from a distinct author, pertaining to 2,592 submissions.
We focus on the Isotonic Mechanism, which calibrates raw review scores using author-provided rankings (a minimal sketch follows this entry).
arXiv Detail & Related papers (2024-08-24T01:51:23Z)
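Reading the Isotonic Mechanism as a least-squares projection, it replaces an author's raw scores with the closest score vector that is monotone in the author's own ranking. The sketch below illustrates that reading with scikit-learn's IsotonicRegression; the toy scores, the ranking, and the pooled output are assumptions for illustration, not data from the ICML experiment.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def isotonic_calibrate(raw_scores, author_ranking):
    """Project raw scores onto score vectors that are non-increasing along the
    author's ranking (rank 0 = the author's best paper). This is the
    least-squares projection reading of the Isotonic Mechanism."""
    raw_scores = np.asarray(raw_scores, dtype=float)
    order = np.argsort(author_ranking)          # paper indices from best to worst
    iso = IsotonicRegression(increasing=False)  # enforce non-increasing calibrated scores
    calibrated_in_order = iso.fit_transform(np.arange(len(order)), raw_scores[order])
    calibrated = np.empty_like(raw_scores)
    calibrated[order] = calibrated_in_order     # restore the original paper order
    return calibrated

# Example: the author ranks paper B above paper A, but A's raw score is higher.
raw = [6.0, 5.0, 3.0]   # raw scores for papers A, B, C
ranking = [1, 0, 2]     # author's ranking: B first, then A, then C
print(isotonic_calibrate(raw, ranking))  # calibrated scores: [5.5, 5.5, 3.0]
```

Where the raw scores already respect the author's ranking, the projection leaves them unchanged; where they conflict, adjacent scores are pooled, as with papers A and B above.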
- A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [55.33653554387953]
Research in Pattern Analysis and Machine Intelligence (PAMI) has led to numerous literature reviews aimed at collecting and organizing fragmented information. This paper presents a thorough analysis of these literature reviews within the PAMI field. We try to address three core research questions: (1) What are the prevalent structural and statistical characteristics of PAMI literature reviews; (2) What strategies can researchers employ to efficiently navigate the growing corpus of reviews; and (3) What are the advantages and limitations of AI-generated reviews compared to human-authored ones.
arXiv Detail & Related papers (2024-02-20T11:28:50Z)
- When Reviewers Lock Horn: Finding Disagreement in Scientific Peer Reviews [24.875901048855077]
We introduce a novel task of automatically identifying contradictions among reviewers on a given article.
To the best of our knowledge, we make the first attempt to identify disagreements among peer reviewers automatically.
arXiv Detail & Related papers (2023-10-28T11:57:51Z)
- Has the Machine Learning Review Process Become More Arbitrary as the Field Has Grown? The NeurIPS 2021 Consistency Experiment [86.77085171670323]
We present a larger-scale variant of the 2014 NeurIPS experiment in which 10% of conference submissions were reviewed by two independent committees to quantify the randomness in the review process.
We observe that the two committees disagree on their accept/reject recommendations for 23% of the papers and that, consistent with the results from 2014, approximately half of the list of accepted papers would change if the review process were randomly rerun (a rough back-of-the-envelope check follows this entry).
arXiv Detail & Related papers (2023-06-05T21:26:12Z)
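To see why a 23% disagreement rate translates into roughly half of the accepted list changing, a back-of-the-envelope calculation helps; the 25% acceptance rate and the even split of disagreements below are assumptions for illustration, not figures taken from the paper.

```python
# Back-of-the-envelope check (illustrative assumptions, not the paper's own analysis).
disagreement_rate = 0.23  # papers where the two committees reach different decisions
accept_rate = 0.25        # assumed overall acceptance rate

# If disagreements split roughly evenly, about half of them are papers
# accepted by committee A but rejected by committee B.
accepted_by_a_rejected_by_b = disagreement_rate / 2

# Fraction of committee A's accepted list that committee B would reject.
turnover = accepted_by_a_rejected_by_b / accept_rate
print(f"~{turnover:.0%} of the accepted list would change")  # prints ~46%
```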
- NLPeer: A Unified Resource for the Computational Study of Peer Review [58.71736531356398]
We introduce NLPeer -- the first ethically sourced multidomain corpus of more than 5k papers and 11k review reports from five different venues.
We augment previous peer review datasets to include parsed and structured paper representations, rich metadata and versioning information.
Our work paves the path towards systematic, multi-faceted, evidence-based study of peer review in NLP and beyond.
arXiv Detail & Related papers (2022-11-12T12:29:38Z)
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study of fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z)
- Generating Summaries for Scientific Paper Review [29.12631698162247]
The increase in submissions to top venues in machine learning and NLP has placed an excessive burden on reviewers.
An automatic system that assists with the reviewing process could help ameliorate the problem.
In this paper, we explore automatic review summary generation for scientific papers.
arXiv Detail & Related papers (2021-09-28T21:43:53Z)
- Ranking Scientific Papers Using Preference Learning [48.78161994501516]
We cast the final decision process as a paper ranking problem based on peer review texts and reviewer scores.
We introduce a novel, multi-faceted generic evaluation framework for making final decisions based on peer reviews.
arXiv Detail & Related papers (2021-09-02T19:41:47Z)
- A Novice-Reviewer Experiment to Address Scarcity of Qualified Reviewers in Large Conferences [35.24369486197371]
A surge in the number of submissions received by leading AI conferences has challenged the sustainability of the review process.
We consider the problem of reviewer recruiting with a focus on the scarcity of qualified reviewers in large conferences.
In conjunction with ICML 2020 -- a large, top-tier machine learning conference -- we recruit a small set of reviewers through our procedure and compare their performance with the general population of ICML reviewers.
arXiv Detail & Related papers (2020-11-30T17:48:55Z)
- State-of-Art-Reviewing: A Radical Proposal to Improve Scientific Publication [19.10668029301668]
State-Of-the-Art Review (SOAR) is a neoteric reviewing pipeline that serves as a 'plug-and-play' replacement for peer review.
At the heart of our approach is an interpretation of the review process as a multi-objective, massively distributed and extremely-high-latency optimisation.
arXiv Detail & Related papers (2020-03-31T17:58:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.