Mitigating Manipulation in Peer Review via Randomized Reviewer
Assignments
- URL: http://arxiv.org/abs/2006.16437v2
- Date: Fri, 23 Oct 2020 20:08:50 GMT
- Title: Mitigating Manipulation in Peer Review via Randomized Reviewer
Assignments
- Authors: Steven Jecmen, Hanrui Zhang, Ryan Liu, Nihar B. Shah, Vincent
Conitzer, Fei Fang
- Abstract summary: Three important challenges in conference peer review are reviewers maliciously attempting to get assigned to certain papers, "torpedo reviewing," and reviewer de-anonymization.
We present a framework that brings all these challenges under a common umbrella, along with a (randomized) algorithm for reviewer assignment.
Our algorithms can limit the chance that any malicious reviewer gets assigned to their desired paper to 50% while producing assignments with over 90% of the total optimal similarity.
- Score: 96.114824979298
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider three important challenges in conference peer review: (i)
reviewers maliciously attempting to get assigned to certain papers to provide
positive reviews, possibly as part of quid-pro-quo arrangements with the
authors; (ii) "torpedo reviewing," where reviewers deliberately attempt to get
assigned to certain papers that they dislike in order to reject them; (iii)
reviewer de-anonymization on release of the similarities and the
reviewer-assignment code. On the conceptual front, we identify connections
between these three problems and present a framework that brings all these
challenges under a common umbrella. We then present a (randomized) algorithm
for reviewer assignment that can optimally solve the reviewer-assignment
problem under any given constraints on the probability of assignment for any
reviewer-paper pair. We further consider the problem of restricting the joint
probability that certain suspect pairs of reviewers are assigned to certain
papers, and show that this problem is NP-hard for arbitrary constraints on
these joint probabilities but efficiently solvable for a practical special
case. Finally, we experimentally evaluate our algorithms on datasets from past
conferences, where we observe that they can limit the chance that any malicious
reviewer gets assigned to their desired paper to 50% while producing
assignments with over 90% of the total optimal similarity. Our algorithms still
achieve this similarity while also preventing reviewers with close associations
from being assigned to the same paper.
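The paper's full algorithm solves a linear program over all reviewer-paper pairs, capping each pair's assignment probability, and then samples a deterministic assignment whose marginals match the fractional solution. As a minimal illustration of the probability-capping idea for a single paper needing one reviewer, the greedy scheme below spreads one unit of assignment probability over reviewers in order of similarity, capping each at q. The function name and this simplified single-paper setting are illustrative sketches, not the paper's actual implementation.

```python
def capped_assignment_probs(similarities, q):
    """Distribute one unit of assignment probability across reviewers,
    capping each reviewer's probability at q and preferring reviewers
    with higher similarity. Requires q * len(similarities) >= 1 for the
    probabilities to sum to 1."""
    order = sorted(range(len(similarities)), key=lambda i: -similarities[i])
    probs = [0.0] * len(similarities)
    remaining = 1.0
    for i in order:
        p = min(q, remaining)  # never exceed the per-reviewer cap
        probs[i] = p
        remaining -= p
        if remaining <= 1e-12:
            break
    return probs

# With similarities [0.9, 0.8, 0.5, 0.2] and cap q = 0.5, probability
# mass 0.5 goes to each of the two best reviewers. Expected similarity
# is 0.85, i.e. about 94% of the unconstrained optimum of 0.9 --
# mirroring the 50%-cap / 90%-similarity trade-off reported above.
probs = capped_assignment_probs([0.9, 0.8, 0.5, 0.2], 0.5)
```

In the multi-paper setting, sampling an assignment that realizes these fractional marginals is done via a flow-based decomposition of the LP solution into deterministic assignments.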
Related papers
- On the Detection of Reviewer-Author Collusion Rings From Paper Bidding [71.43634536456844]
Collusion rings pose a major threat to the peer-review systems of computer science conferences.
One approach to solving this problem is to detect colluding reviewers from their manipulated bids.
No research has yet established that detecting collusion rings is even possible.
arXiv Detail & Related papers (2024-02-12T18:12:09Z) - When Reviewers Lock Horn: Finding Disagreement in Scientific Peer
Reviews [24.875901048855077]
We introduce a novel task of automatically identifying contradictions among reviewers on a given article.
To the best of our knowledge, we make the first attempt to identify disagreements among peer reviewers automatically.
arXiv Detail & Related papers (2023-10-28T11:57:51Z) - A Gold Standard Dataset for the Reviewer Assignment Problem [117.59690218507565]
"Similarity score" is a numerical estimate of the expertise of a reviewer in reviewing a paper.
Our dataset consists of 477 self-reported expertise scores provided by 58 researchers.
For the task of ordering two papers in terms of their relevance for a reviewer, the error rates range from 12%-30% in easy cases to 36%-43% in hard cases.
arXiv Detail & Related papers (2023-03-23T16:15:03Z) - No Agreement Without Loss: Learning and Social Choice in Peer Review [0.0]
It may be assumed that each reviewer has her own mapping from the set of features to a recommendation.
This introduces an element of arbitrariness known as commensuration bias.
Noothigattu, Shah, and Procaccia proposed to aggregate reviewers' mappings by minimizing certain loss functions.
arXiv Detail & Related papers (2022-11-03T21:03:23Z) - A Dataset on Malicious Paper Bidding in Peer Review [84.68308372858755]
Malicious reviewers strategically bid in order to unethically manipulate the paper assignment.
A critical impediment towards creating and evaluating methods to mitigate this issue is the lack of publicly-available data on malicious paper bidding.
We release a novel dataset, collected from a mock conference activity where participants were instructed to bid either honestly or maliciously.
arXiv Detail & Related papers (2022-06-24T20:23:33Z) - The Dichotomous Affiliate Stable Matching Problem: Approval-Based
Matching with Applicant-Employer Relations [27.388757379210034]
We introduce the Dichotomous Affiliate Stable Matching (DASM) Problem, where agents' preferences indicate dichotomous acceptance or rejection of another agent in the marketplace.
Our results are threefold: (1) we use a human study to show that real-world matching rankings follow our assumed valuation function; (2) we prove that there always exists a stable solution by providing an efficient, easily-implementable algorithm that finds such a solution; and (3) we experimentally validate the efficiency of our algorithm versus a linear-programming-based approach.
arXiv Detail & Related papers (2022-02-22T18:56:21Z) - Near-Optimal Reviewer Splitting in Two-Phase Paper Reviewing and
Conference Experiment Design [76.40919326501512]
We consider the question: how should reviewers be divided between phases or conditions in order to maximize total assignment similarity?
We empirically show that across several datasets pertaining to real conference data, dividing reviewers between phases/conditions uniformly at random allows an assignment that is nearly as good as the oracle optimal assignment.
arXiv Detail & Related papers (2021-08-13T19:29:41Z) - Nested Counterfactual Identification from Arbitrary Surrogate
Experiments [95.48089725859298]
We study the identification of nested counterfactuals from an arbitrary combination of observations and experiments.
Specifically, we prove the counterfactual unnesting theorem (CUT), which allows one to map arbitrary nested counterfactuals to unnested ones.
arXiv Detail & Related papers (2021-07-07T12:51:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.