A Dataset on Malicious Paper Bidding in Peer Review
- URL: http://arxiv.org/abs/2207.02303v1
- Date: Fri, 24 Jun 2022 20:23:33 GMT
- Title: A Dataset on Malicious Paper Bidding in Peer Review
- Authors: Steven Jecmen, Minji Yoon, Vincent Conitzer, Nihar B. Shah, Fei Fang
- Abstract summary: Malicious reviewers strategically bid in order to unethically manipulate the paper assignment.
A critical impediment towards creating and evaluating methods to mitigate this issue is the lack of publicly-available data on malicious paper bidding.
We release a novel dataset, collected from a mock conference activity where participants were instructed to bid either honestly or maliciously.
- Score: 84.68308372858755
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In conference peer review, reviewers are often asked to provide "bids" on
each submitted paper that express their interest in reviewing that paper. A
paper assignment algorithm then uses these bids (along with other data) to
compute a high-quality assignment of reviewers to papers. However, this process
has been exploited by malicious reviewers who strategically bid in order to
unethically manipulate the paper assignment, crucially undermining the peer
review process. For example, these reviewers may aim to get assigned to a
friend's paper as part of a quid-pro-quo deal. A critical impediment towards
creating and evaluating methods to mitigate this issue is the lack of any
publicly-available data on malicious paper bidding. In this work, we collect
and publicly release a novel dataset to fill this gap, collected from a mock
conference activity where participants were instructed to bid either honestly
or maliciously. We further provide a descriptive analysis of the bidding
behavior, including our categorization of different strategies employed by
participants. Finally, we evaluate the ability of each strategy to manipulate
the assignment, and also evaluate the performance of some simple algorithms
meant to detect malicious bidding. The performance of these detection
algorithms can be taken as a baseline for future research on detecting
malicious bidding.
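As an illustration of the kind of simple detection baseline the abstract describes, the sketch below flags reviewers whose bids correlate poorly with their topical (text-similarity) scores. This is a hypothetical baseline for illustration only: the reviewer names, bid values, similarity scores, and threshold are all invented, and the detection algorithms evaluated in the paper may differ.

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def flag_suspicious(bids, text_similarity, threshold=0.0):
    """Flag reviewers whose bids track topical similarity poorly.

    bids: {reviewer: [bid per paper]}; text_similarity: same shape.
    A low or negative correlation suggests bids driven by something
    other than expertise, e.g. targeting a friend's paper.
    """
    return [r for r in bids
            if pearson(bids[r], text_similarity[r]) < threshold]

# Invented toy data: the honest reviewer's bids track similarity;
# the malicious one bids high only on a dissimilar (target) paper.
bids = {"honest":    [3, 2, 0, 1],
        "malicious": [0, 0, 3, 0]}
sims = {"honest":    [0.9, 0.7, 0.1, 0.4],
        "malicious": [0.8, 0.6, 0.1, 0.3]}
print(flag_suspicious(bids, sims))  # flags only the malicious reviewer
```

A correlation-based flag like this is easy to evade by partially honest bidding, which is one reason a labeled dataset of real malicious strategies is useful for evaluating detectors.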
Related papers
- On the Detection of Reviewer-Author Collusion Rings From Paper Bidding [71.43634536456844]
Collusion rings pose a major threat to the peer-review systems of computer science conferences.
One approach to solving this problem would be to detect the colluding reviewers from their manipulated bids.
No research has yet established that detecting collusion rings is even possible.
arXiv Detail & Related papers (2024-02-12T18:12:09Z)
- Tradeoffs in Preventing Manipulation in Paper Bidding for Reviewer Assignment [89.38213318211731]
Despite the benefits of using bids, reliance on paper bidding can allow malicious reviewers to manipulate the paper assignment for unethical purposes.
Several different approaches to preventing this manipulation have been proposed and deployed.
In this paper, we enumerate certain desirable properties that algorithms for addressing bid manipulation should satisfy.
arXiv Detail & Related papers (2022-07-22T19:58:17Z)
- The Price of Strategyproofing Peer Assessment [30.51994705981846]
Strategic behavior is a fundamental problem in a variety of real-world applications that require some form of peer assessment.
Since an individual's own work is in competition with the submissions they are evaluating, they may provide dishonest evaluations to increase the relative standing of their own submission.
This issue is typically addressed by partitioning the individuals and assigning them to evaluate the work of only those from different subsets.
arXiv Detail & Related papers (2022-01-25T21:16:33Z)
- Making Paper Reviewing Robust to Bid Manipulation Attacks [44.34601846490532]
Anecdotal evidence suggests that some reviewers bid on papers by "friends" or colluding authors.
We develop a novel approach for paper bidding and assignment that is much more robust against such attacks.
In addition to being more robust, the quality of our paper review assignments is comparable to that of current, non-robust assignment approaches.
arXiv Detail & Related papers (2021-02-09T21:24:16Z)
- Catch Me if I Can: Detecting Strategic Behaviour in Peer Assessment [61.24399136715106]
We consider the issue of strategic behaviour in various peer-assessment tasks, including peer grading of exams or homeworks and peer review in hiring or promotions.
Our focus is on designing methods for detection of such manipulations.
Specifically, we consider a setting in which agents evaluate a subset of their peers and output rankings that are later aggregated to form a final ordering.
arXiv Detail & Related papers (2020-10-08T15:08:40Z)
- Mitigating Manipulation in Peer Review via Randomized Reviewer Assignments [96.114824979298]
Among the important challenges in conference peer review are reviewers maliciously attempting to get assigned to certain papers and "torpedo reviewing".
We present a framework that brings all these challenges under a common umbrella and present a (randomized) algorithm for reviewer assignment.
Our algorithms can limit the chance that any malicious reviewer gets assigned to their desired paper to 50% while producing assignments with over 90% of the total optimal similarity.
arXiv Detail & Related papers (2020-06-29T23:55:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.