Near-Optimal Reviewer Splitting in Two-Phase Paper Reviewing and
Conference Experiment Design
- URL: http://arxiv.org/abs/2108.06371v1
- Date: Fri, 13 Aug 2021 19:29:41 GMT
- Title: Near-Optimal Reviewer Splitting in Two-Phase Paper Reviewing and
Conference Experiment Design
- Authors: Steven Jecmen, Hanrui Zhang, Ryan Liu, Fei Fang, Vincent Conitzer,
Nihar B. Shah
- Abstract summary: We consider the question: how should reviewers be divided between phases or conditions in order to maximize total assignment similarity?
We empirically show that across several datasets pertaining to real conference data, dividing reviewers between phases/conditions uniformly at random allows an assignment that is nearly as good as the oracle optimal assignment.
- Score: 76.40919326501512
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many scientific conferences employ a two-phase paper review process, where
some papers are assigned additional reviewers after the initial reviews are
submitted. Many conferences also design and run experiments on their paper
review process, where some papers are assigned reviewers who provide reviews
under an experimental condition. In this paper, we consider the question: how
should reviewers be divided between phases or conditions in order to maximize
total assignment similarity? We make several contributions towards answering
this question. First, we prove that when the set of papers requiring additional
review is unknown, a simplified variant of this problem is NP-hard. Second, we
empirically show that across several datasets pertaining to real conference
data, dividing reviewers between phases/conditions uniformly at random allows
an assignment that is nearly as good as the oracle optimal assignment. This
uniformly random choice is practical for both the two-phase and conference
experiment design settings. Third, we provide explanations of this phenomenon
by providing theoretical bounds on the suboptimality of this random strategy
under certain natural conditions. From these easily-interpretable conditions,
we provide actionable insights to conference program chairs about whether a
random reviewer split is suitable for their conference.
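To make the random-split strategy concrete, here is a minimal sketch (illustrative only, not the paper's code) under simplifying assumptions: a synthetic similarity matrix, one reviewer per paper per phase, and a known set of second-phase papers. An unconstrained matching in which each phase may draw on the full reviewer pool serves as an upper bound standing in for the oracle split.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

def matching_similarity(S):
    # Max total similarity of a matching assigning one reviewer per paper.
    rows, cols = linear_sum_assignment(S, maximize=True)
    return S[rows, cols].sum()

n_rev, n_pap = 20, 8
S = rng.random((n_rev, n_pap))                     # hypothetical reviewer-paper similarities
phase2 = rng.choice(n_pap, size=4, replace=False)  # papers needing a second-phase review

# Uniformly random reviewer split between the two phases.
perm = rng.permutation(n_rev)
pool1, pool2 = perm[: n_rev // 2], perm[n_rev // 2 :]

random_split = (matching_similarity(S[pool1])
                + matching_similarity(S[np.ix_(pool2, phase2)]))

# Upper bound on any split: let each phase draw on the full reviewer pool.
upper_bound = matching_similarity(S) + matching_similarity(S[:, phase2])

print(f"random split: {random_split:.3f}, upper bound: {upper_bound:.3f}")
```

Replacing the toy similarities with real text-matching or bidding similarities recovers the kind of comparison the abstract describes on conference data.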
Related papers
- Time to Stop and Think: What kind of research do we want to do? [1.74048653626208]
In this paper, we focus on the field of metaheuristic optimization, since it is our main field of work.
Our main goal is to sow the seed of sincere critical assessment of our work, sparking a reflection process both at the individual and the community level.
All the statements included in this document are personal views and opinions, which can be shared by others or not.
arXiv Detail & Related papers (2024-02-13T08:53:57Z)
- Incremental Extractive Opinion Summarization Using Cover Trees [81.59625423421355]
In online marketplaces, user reviews accumulate over time, and opinion summaries need to be updated periodically.
In this work, we study the task of extractive opinion summarization in an incremental setting.
We present an efficient algorithm for accurately computing the CentroidRank summaries in an incremental setting.
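As a rough illustration of the centroid idea behind CentroidRank (a naive sketch with hypothetical embeddings; the paper's actual contribution, a cover-tree index for efficient incremental updates, is omitted here):

```python
import numpy as np

def centroid_rank(embeddings, k=3):
    """Naive CentroidRank-style selection: return indices of the k
    sentences whose normalized embeddings are most cosine-similar to
    the centroid of all sentence embeddings seen so far."""
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    centroid = X.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    return np.argsort(-(X @ centroid))[:k]

# Incremental flavor: keep a running sum of normalized embeddings so the
# centroid updates in O(d) per new review; making the nearest-to-centroid
# re-ranking fast is the part the paper's cover-tree index addresses.
```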
arXiv Detail & Related papers (2024-01-16T02:00:17Z)
- When Reviewers Lock Horn: Finding Disagreement in Scientific Peer Reviews [24.875901048855077]
We introduce the novel task of automatically identifying contradictions among the reviewers of a given article.
To the best of our knowledge, this is the first attempt to identify such disagreements among peer reviewers automatically.
arXiv Detail & Related papers (2023-10-28T11:57:51Z)
- Scientific Opinion Summarization: Paper Meta-review Generation Dataset, Methods, and Evaluation [55.00687185394986]
We propose the task of scientific opinion summarization, where research paper reviews are synthesized into meta-reviews.
We introduce the ORSUM dataset covering 15,062 paper meta-reviews and 57,536 paper reviews from 47 conferences.
Our experiments show that (1) human-written summaries do not always satisfy all necessary criteria, such as depth of discussion and identification of consensus and controversy for the specific domain, and (2) combining task decomposition with iterative self-refinement shows strong potential for improving the generated meta-reviews.
arXiv Detail & Related papers (2023-05-24T02:33:35Z)
- Integrating Rankings into Quantized Scores in Peer Review [61.27794774537103]
In peer review, reviewers are usually asked to provide scores for the papers, but these quantized scores leave many papers tied and convey limited information.
To mitigate this issue, conferences have started to ask reviewers to additionally provide a ranking of the papers they have reviewed.
There is no standard procedure for using this ranking information, and Area Chairs may use it in different ways.
We take a principled approach to integrate the ranking information into the scores.
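As a purely illustrative sketch (not necessarily the paper's method), one way to integrate a ranking is a least-squares monotone adjustment of the scores via the pool-adjacent-violators algorithm; the function names and data below are hypothetical.

```python
def pav_non_decreasing(y):
    """Least-squares non-decreasing fit via pool adjacent violators."""
    merged = []                        # each block: [sum, count]
    for v in y:
        merged.append([v, 1])
        while (len(merged) > 1 and
               merged[-2][0] / merged[-2][1] > merged[-1][0] / merged[-1][1]):
            s, c = merged.pop()        # merge adjacent violating blocks
            merged[-1][0] += s
            merged[-1][1] += c
    out = []
    for s, c in merged:
        out.extend([s / c] * c)
    return out

def reconcile(scores, ranking):
    """Minimally adjust scores so they are non-increasing along the
    reviewer's best-to-worst ranking of their papers."""
    ordered = [scores[p] for p in ranking]
    fitted = [-v for v in pav_non_decreasing([-v for v in ordered])]
    return dict(zip(ranking, fitted))

# Example: the ranking says A > B > C, but the scores invert A and B.
print(reconcile({"A": 5, "B": 6, "C": 4}, ["A", "B", "C"]))
# {'A': 5.5, 'B': 5.5, 'C': 4.0} -- the violating pair is averaged.
```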
arXiv Detail & Related papers (2022-04-05T19:39:13Z)
- Ranking Scientific Papers Using Preference Learning [48.78161994501516]
We cast final decision-making as a paper ranking problem based on peer review texts and reviewer scores.
We introduce a novel, multi-faceted generic evaluation framework for making final decisions based on peer reviews.
arXiv Detail & Related papers (2021-09-02T19:41:47Z)
- Mitigating Manipulation in Peer Review via Randomized Reviewer Assignments [96.114824979298]
Three important challenges in conference peer review are reviewers maliciously attempting to get assigned to certain papers, "torpedo reviewing," and the de-anonymization of reviewers.
We present a framework that brings all these challenges under a common umbrella, together with a (randomized) algorithm for reviewer assignment that addresses them.
Our algorithms can limit the chance that any malicious reviewer gets assigned to their desired paper to 50% while producing assignments with over 90% of the total optimal similarity.
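A minimal sketch of the underlying idea, assuming a similarity matrix S and an off-the-shelf LP solver rather than the paper's implementation: maximize total similarity subject to a cap q on every reviewer-paper assignment probability. The fractional optimum gives marginal probabilities; sampling an actual assignment with exactly those marginals is a separate step omitted here.

```python
import numpy as np
from scipy.optimize import linprog

def capped_assignment_lp(S, paper_load=1, reviewer_cap=1, q=0.5):
    """Fractional reviewer-paper assignment maximizing total similarity,
    with each reviewer-paper probability capped at q.  S has shape
    (n_reviewers, n_papers); feasibility needs q * n_reviewers >= paper_load."""
    n_rev, n_pap = S.shape
    n = n_rev * n_pap
    c = -S.ravel()                      # linprog minimizes, so negate

    # Each paper j receives exactly `paper_load` units of review.
    A_eq = np.zeros((n_pap, n))
    for j in range(n_pap):
        A_eq[j, j::n_pap] = 1.0
    b_eq = np.full(n_pap, float(paper_load))

    # Each reviewer i handles at most `reviewer_cap` papers in expectation.
    A_ub = np.zeros((n_rev, n))
    for i in range(n_rev):
        A_ub[i, i * n_pap:(i + 1) * n_pap] = 1.0
    b_ub = np.full(n_rev, float(reviewer_cap))

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0.0, q), method="highs")
    return res.x.reshape(n_rev, n_pap)  # marginal assignment probabilities

# Toy usage: 6 reviewers, 3 papers, one review per paper, 50% cap.
marginals = capped_assignment_lp(np.random.default_rng(1).random((6, 3)))
print(marginals.max() <= 0.5 + 1e-9)    # no pair exceeds the 50% cap
```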
arXiv Detail & Related papers (2020-06-29T23:55:53Z)
- Aspect-based Sentiment Analysis of Scientific Reviews [12.472629584751509]
We show that the distribution of aspect-based sentiments obtained from a review is significantly different for accepted and rejected papers.
As a second objective, we quantify the extent of disagreement among the reviewers refereeing a paper.
We also investigate the extent of disagreement between the reviewers and the chair, and find that inter-reviewer disagreement may be linked to disagreement with the chair.
arXiv Detail & Related papers (2020-06-05T07:06:01Z)