Reviewer assignment problem: A scoping review
- URL: http://arxiv.org/abs/2305.07887v1
- Date: Sat, 13 May 2023 10:13:43 GMT
- Authors: Jelena Jovanovic (1) and Ebrahim Bagheri (2) ((1) University of
Belgrade, Serbia, (2) Toronto Metropolitan University, Canada)
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Peer review is an integral component of scientific research. The quality of
peer review, and consequently the published research, depends to a large extent
on the ability to recruit adequate reviewers for submitted papers. However,
finding such reviewers is an increasingly difficult task due to several
factors, such as the continuous increase in both the production of scientific
papers and the workload of scholars. To mitigate these challenges, solutions
for the automated association of papers with "well matching" reviewers - a task
often referred to as the reviewer assignment problem (RAP) - have been the
subject of research for thirty years now. Even though numerous solutions have been
suggested, to our knowledge, a recent systematic synthesis of the RAP-related
literature is missing. To fill this gap and support further RAP-related
research, in this paper, we present a scoping review of computational
approaches for addressing RAP. Following the latest methodological guidance for
scoping reviews, we have collected recent literature on RAP from three
databases (Scopus, Google Scholar, DBLP) and, after applying the eligibility
criteria, retained 26 studies for extracting and synthesising data on several
aspects of RAP research including: i) the overall framing of and approach to
RAP; ii) the criteria for reviewer selection; iii) the modelling of candidate
reviewers and submissions; iv) the computational methods for matching reviewers
and submissions; and v) the methods for evaluating the performance of the
proposed solutions. The paper summarises and discusses the findings for each of
the aforementioned aspects of RAP research and suggests future research
directions.
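To make the matching task at the core of RAP concrete, here is a minimal, hypothetical sketch (not taken from any of the reviewed studies): reviewers and submissions are modelled as bags of words, scored by TF-IDF cosine similarity, and matched greedily under a per-reviewer load limit. All names and data (`reviewers`, `submissions`, `assign`) are illustrative assumptions; real RAP systems use far richer reviewer/submission models and matching methods.

```python
import math
from collections import Counter

# Hypothetical toy data: reviewer expertise profiles and submission abstracts.
reviewers = {
    "r1": "peer review assignment matching optimization",
    "r2": "graph neural networks node classification",
}
submissions = {
    "s1": "optimizing reviewer assignment via matching",
    "s2": "node classification with graph neural networks",
}

def tfidf(corpus):
    """Map each document (token list) to a sparse TF-IDF vector (dict)."""
    n = len(corpus)
    df = Counter(t for doc in corpus for t in set(doc))
    return [{t: (c / len(doc)) * (math.log((1 + n) / (1 + df[t])) + 1)
             for t, c in Counter(doc).items()}
            for doc in corpus]

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Vectorise reviewers and submissions over one shared vocabulary.
r_ids, s_ids = list(reviewers), list(submissions)
docs = [reviewers[r].lower().split() for r in r_ids] + \
       [submissions[s].lower().split() for s in s_ids]
vecs = tfidf(docs)
r_vecs = dict(zip(r_ids, vecs[:len(r_ids)]))
s_vecs = dict(zip(s_ids, vecs[len(r_ids):]))

def assign(r_vecs, s_vecs, need=1, cap=1):
    """Greedy matching: take the highest-similarity pairs first,
    honouring each reviewer's capacity (cap) and each submission's
    required number of reviewers (need)."""
    pairs = sorted(((cosine(rv, sv), r, s)
                    for r, rv in r_vecs.items()
                    for s, sv in s_vecs.items()), reverse=True)
    load, got, out = Counter(), Counter(), {}
    for score, r, s in pairs:
        if score <= 0:
            break
        if load[r] < cap and got[s] < need:
            out.setdefault(s, []).append(r)
            load[r] += 1
            got[s] += 1
    return out

assignment = assign(r_vecs, s_vecs)
print(assignment)  # {'s2': ['r2'], 's1': ['r1']}
```

Greedy matching is only one of several strategies for this step; many RAP formulations instead solve a global optimisation (e.g. an assignment or integer program) over the same similarity scores.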
Related papers
- A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence (arXiv, 2024-02-20)
  This paper proposes several article-level, field-normalized, and large language model-empowered bibliometric indicators to evaluate reviews. The newly emerging AI-generated literature reviews are also appraised. This work offers insights into the current challenges of literature reviews and envisions future directions for their development.
- Scientific Opinion Summarization: Paper Meta-review Generation Dataset, Methods, and Evaluation (arXiv, 2023-05-24)
  We propose the task of scientific opinion summarization, where research paper reviews are synthesized into meta-reviews. We introduce the ORSUM dataset covering 15,062 paper meta-reviews and 57,536 paper reviews from 47 conferences. Our experiments show that (1) human-written summaries do not always satisfy all necessary criteria, such as depth of discussion and identification of consensus and controversy for the specific domain, and (2) combining task decomposition with iterative self-refinement shows strong potential for enhancing the opinions.
- NLPeer: A Unified Resource for the Computational Study of Peer Review (arXiv, 2022-11-12)
  We introduce NLPeer -- the first ethically sourced multidomain corpus of more than 5k papers and 11k review reports from five different venues. We augment previous peer review datasets to include parsed and structured paper representations, rich metadata and versioning information. Our work paves the path towards systematic, multi-faceted, evidence-based study of peer review in NLP and beyond.
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach (arXiv, 2022-11-07)
  We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs). We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date. We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
- What Factors Should Paper-Reviewer Assignments Rely On? Community Perspectives on Issues and Ideals in Conference Peer-Review (arXiv, 2022-05-02)
  We present the results of the first survey of the NLP community on this topic. We identify common issues and perspectives on what factors paper-reviewer matching systems should consider.
- Yes-Yes-Yes: Donation-based Peer Reviewing Data Collection for ACL Rolling Review and Beyond (arXiv, 2022-01-27)
  We present an in-depth discussion of peer reviewing data, outline the ethical and legal desiderata for peer reviewing data collection, and propose the first continuous, donation-based data collection workflow. We report on the ongoing implementation of this workflow at the ACL Rolling Review and deliver the first insights obtained with the newly collected data.
- Ranking Scientific Papers Using Preference Learning (arXiv, 2021-09-02)
  We cast decision-making as a paper ranking problem based on peer review texts and reviewer scores. We introduce a novel, multi-faceted generic evaluation framework for making final decisions based on peer reviews.
- A Comprehensive Attempt to Research Statement Generation (arXiv, 2021-04-25)
  We propose the research statement generation task, which aims to summarize one's research achievements. We construct an RSG dataset with 62 research statements and the corresponding 1,203 publications. Our method outperforms all the baselines with better content coverage and coherence.
This list is automatically generated from the titles and abstracts of the papers on this site.