On the Detection of Reviewer-Author Collusion Rings From Paper Bidding
- URL: http://arxiv.org/abs/2402.07860v2
- Date: Sun, 10 Mar 2024 23:46:41 GMT
- Title: On the Detection of Reviewer-Author Collusion Rings From Paper Bidding
- Authors: Steven Jecmen, Nihar B. Shah, Fei Fang, Leman Akoglu
- Abstract summary: Collusion rings pose a major threat to the peer-review systems of computer science conferences.
One approach to solving this problem would be to detect the colluding reviewers from their manipulated bids.
No research has yet established that detecting collusion rings is even possible.
- Score: 71.43634536456844
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A major threat to the peer-review systems of computer science conferences is
the existence of "collusion rings" between reviewers. In such collusion rings,
reviewers who have also submitted their own papers to the conference work
together to manipulate the conference's paper assignment, with the aim of being
assigned to review each other's papers. The most straightforward way that
colluding reviewers can manipulate the paper assignment is by indicating their
interest in each other's papers through strategic paper bidding. One potential
approach to solving this important problem would be to detect the colluding
reviewers from their manipulated bids, after which the conference can take
appropriate action.
appropriate action. While prior work has developed effective techniques to
detect other kinds of fraud, no research has yet established that detecting
collusion rings is even possible. In this work, we tackle the question of
whether it is feasible to detect collusion rings from the paper bidding. To
answer this question, we conduct empirical analysis of two realistic conference
bidding datasets, including evaluations of existing algorithms for fraud
detection in other applications. We find that collusion rings can achieve
considerable success at manipulating the paper assignment while remaining
hidden from detection: for example, in one dataset, undetected colluders are
able to achieve assignment to up to 30% of the papers authored by other
colluders. In addition, when 10 colluders bid on all of each other's papers, no
detection algorithm outputs a group of reviewers with more than 31% overlap
with the true colluders. These results suggest that collusion cannot be
effectively detected from the bidding using popular existing tools,
demonstrating the need to develop more complex detection algorithms as well as
those that leverage additional metadata (e.g., reviewer-paper text-similarity
scores).
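The detection approaches the abstract evaluates treat bidding data as a graph and look for unusually dense reviewer groups. As a rough illustration of this kind of off-the-shelf fraud-detection tool (not the paper's own method), the sketch below builds a reviewer-reviewer graph from hypothetical (reviewer, paper) bid pairs and runs greedy densest-subgraph peeling; all data, names, and functions here are illustrative assumptions, not drawn from the paper's datasets.

```python
# Sketch: flagging a candidate collusion ring from paper bids via dense-subgraph
# detection. A minimal illustration only, under assumed toy data; not the
# paper's evaluation pipeline.
from collections import defaultdict

def reviewer_graph(bids, authors):
    """Weighted reviewer-reviewer graph: the weight of edge (u, v) counts
    bids by u on papers authored by v, plus the reverse."""
    weight = defaultdict(int)
    for u, paper in bids:                 # each bid is a (reviewer, paper) pair
        for v in authors[paper]:          # reviewers who authored that paper
            if v != u:
                a, b = sorted((u, v))
                weight[(a, b)] += 1
    return weight

def densest_subgroup(weight):
    """Charikar-style greedy peeling: repeatedly remove the node of smallest
    weighted degree; return the intermediate node set of highest density
    (average weighted degree). Dense groups of mutual bidders stand out."""
    nodes, deg = set(), defaultdict(int)
    for (a, b), w in weight.items():
        nodes |= {a, b}
        deg[a] += w
        deg[b] += w
    best = set(nodes)
    best_density = sum(deg[n] for n in nodes) / (2 * len(nodes))
    while len(nodes) > 1:
        u = min(nodes, key=lambda n: deg[n])
        nodes.remove(u)
        for (a, b), w in weight.items():  # update neighbours of the peeled node
            if u == a and b in nodes:
                deg[b] -= w
            elif u == b and a in nodes:
                deg[a] -= w
        density = sum(deg[n] for n in nodes) / (2 * len(nodes))
        if density > best_density:
            best, best_density = set(nodes), density
    return best

# Toy example (hypothetical): r1-r3 bid on each other's papers; r4 bids honestly.
authors = {"p1": ["r1"], "p2": ["r2"], "p3": ["r3"], "p4": ["r4"]}
bids = [("r1", "p2"), ("r2", "p1"), ("r1", "p3"), ("r3", "p1"),
        ("r2", "p3"), ("r3", "p2"), ("r4", "p1")]
print(sorted(densest_subgroup(reviewer_graph(bids, authors))))  # → ['r1', 'r2', 'r3']
```

On this toy input the mutual-bidding clique is recovered exactly; the paper's negative results say precisely that such tools stop working once colluders bid more sparingly, as in the 31%-overlap example above.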
Related papers
- Defense of Adversarial Ranking Attack in Text Retrieval: Benchmark and Baseline via Detection [12.244543468021938]
This paper introduces two types of detection tasks for adversarial documents.
A benchmark dataset is established to facilitate the investigation of adversarial ranking defense.
A comprehensive investigation of the performance of several detection baselines is conducted.
arXiv Detail & Related papers (2023-07-31T16:31:24Z)
- Tradeoffs in Preventing Manipulation in Paper Bidding for Reviewer Assignment [89.38213318211731]
Despite the benefits of using bids, reliance on paper bidding can allow malicious reviewers to manipulate the paper assignment for unethical purposes.
Several different approaches to preventing this manipulation have been proposed and deployed.
In this paper, we enumerate certain desirable properties that algorithms for addressing bid manipulation should satisfy.
arXiv Detail & Related papers (2022-07-22T19:58:17Z)
- A Dataset on Malicious Paper Bidding in Peer Review [84.68308372858755]
Malicious reviewers strategically bid in order to unethically manipulate the paper assignment.
A critical impediment towards creating and evaluating methods to mitigate this issue is the lack of publicly-available data on malicious paper bidding.
We release a novel dataset, collected from a mock conference activity where participants were instructed to bid either honestly or maliciously.
arXiv Detail & Related papers (2022-06-24T20:23:33Z)
- The Familiarity Hypothesis: Explaining the Behavior of Deep Open Set Methods [86.39044549664189]
Anomaly detection algorithms for feature-vector data identify anomalies as outliers, but outlier detection has not worked well in deep learning.
This paper proposes the Familiarity Hypothesis that these methods succeed because they are detecting the absence of familiar learned features rather than the presence of novelty.
The paper concludes with a discussion of whether familiarity detection is an inevitable consequence of representation learning.
arXiv Detail & Related papers (2022-03-04T18:32:58Z)
- Making Paper Reviewing Robust to Bid Manipulation Attacks [44.34601846490532]
Anecdotal evidence suggests that some reviewers bid on papers by "friends" or colluding authors.
We develop a novel approach for paper bidding and assignment that is much more robust against such attacks.
In addition to being more robust, the quality of our paper review assignments is comparable to that of current, non-robust assignment approaches.
arXiv Detail & Related papers (2021-02-09T21:24:16Z)
- Mitigating Manipulation in Peer Review via Randomized Reviewer Assignments [96.114824979298]
Among the important challenges in conference peer review are reviewers maliciously attempting to get assigned to certain papers and "torpedo reviewing".
We present a framework that brings all these challenges under a common umbrella and present a (randomized) algorithm for reviewer assignment.
Our algorithms can limit the chance that any malicious reviewer gets assigned to their desired paper to 50% while producing assignments with over 90% of the total optimal similarity.
arXiv Detail & Related papers (2020-06-29T23:55:53Z)
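The 50% guarantee quoted above comes from explicitly capping assignment probabilities. The paper's actual algorithm optimizes over all reviewer-paper pairs jointly via a linear program; the sketch below only illustrates the capping idea for a single paper, redistributing probability mass over hypothetical similarity scores (the function name, cap value, and numbers are assumptions for illustration).

```python
def capped_probs(sims, q=0.5):
    """Turn one paper's reviewer-similarity scores into assignment
    probabilities, pinning every reviewer at most at probability q and
    redistributing the excess mass proportionally among the uncapped
    reviewers (water-filling). Illustrative only: the real randomized
    assignment solves an LP over all papers at once."""
    n = len(sims)
    assert q * n >= 1.0, "cap too small: probabilities cannot sum to 1"
    capped = set()
    probs = [0.0] * n
    while True:
        free = [i for i in range(n) if i not in capped]
        mass = 1.0 - q * len(capped)   # probability left for uncapped reviewers
        total = sum(sims[i] for i in free)
        newly_capped = False
        for i in free:
            probs[i] = mass * sims[i] / total
            if probs[i] > q:           # over the cap: pin at q and recompute
                probs[i] = q
                capped.add(i)
                newly_capped = True
        if not newly_capped:
            return probs

# A colluder who drives their similarity very high on a target paper
# still cannot push their assignment probability past the cap:
print(capped_probs([10.0, 1.0, 1.0], q=0.5))  # → [0.5, 0.25, 0.25]
```

The tradeoff the entry mentions shows up here too: mass diverted away from the highest-similarity reviewer is exactly what costs the assignment some total similarity (over 90% of optimal is retained in the paper's experiments).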
This list is automatically generated from the titles and abstracts of the papers on this site.