DecentPeeR: A Self-Incentivised & Inclusive Decentralized Peer Review System
- URL: http://arxiv.org/abs/2411.08450v1
- Date: Wed, 13 Nov 2024 09:13:16 GMT
- Title: DecentPeeR: A Self-Incentivised & Inclusive Decentralized Peer Review System
- Authors: Johannes Gruendler, Darya Melnyk, Arash Pourdamghani, Stefan Schmid
- Abstract summary: We show that our system, DecentPeeR, incentivizes reviewers to behave according to the rules, i.e., it has a unique Nash equilibrium in which virtuous behavior is rewarded.
- Score: 11.477670199123335
- Abstract: Peer review, a widely used practice to ensure the quality and integrity of publications, lacks a well-defined, common mechanism for incentivizing virtuous behavior across conferences and journals. This is because information about reviewer efforts and author feedback typically remains local to a single venue, while the same group of authors and reviewers participates in the publication process across many venues. Previous attempts to incentivize the reviewing process assume that, for a given person, the quality of their reviews correlates with the quality of the papers they author, or that reviewers can receive physical rewards for their work. In this paper, we keep track of the reviewing and authoring efforts of users (who both review and author) across different venues while ensuring self-incentivization. We show that our system, DecentPeeR, incentivizes reviewers to behave according to the rules, i.e., it has a unique Nash equilibrium in which virtuous behavior is rewarded.
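The abstract does not spell out DecentPeeR's update rules, so the following is a hypothetical toy sketch, not the paper's actual mechanism: a single reputation ledger shared across venues, where timely, well-rated reviews increase a user's reputation and late or poor reviews decrease it, so that honest effort is the payoff-maximizing strategy. All names and constants below are our assumptions.

```python
# Toy sketch of a cross-venue reputation ledger.
# NOTE: hypothetical illustration only, NOT DecentPeeR's actual rules;
# the update constants and method names are assumptions.
from dataclasses import dataclass, field

@dataclass
class ReputationLedger:
    """One ledger shared by all venues, so effort is never 'local' to one venue."""
    scores: dict = field(default_factory=dict)  # user id -> reputation

    def record_review(self, reviewer: str, on_time: bool, quality: float):
        """quality in [0, 1], e.g., an editor/author rating of the review.

        Reward virtuous behavior (timely, high-quality reviews) and
        penalize the rest, so honest effort is the best response.
        """
        r = self.scores.get(reviewer, 1.0)
        if on_time and quality >= 0.5:
            r += quality          # reward grows with review quality
        else:
            r *= 0.8              # decay for late/low-quality reviews
        self.scores[reviewer] = r

    def priority(self, author: str) -> float:
        """Higher-reputation authors could, e.g., get faster reviews."""
        return self.scores.get(author, 1.0)

ledger = ReputationLedger()          # one ledger, many venues
ledger.record_review("alice", on_time=True, quality=0.9)   # venue A
ledger.record_review("alice", on_time=True, quality=0.8)   # venue B
ledger.record_review("bob", on_time=False, quality=0.3)    # venue A
print(ledger.priority("alice") > ledger.priority("bob"))   # True
```

In this toy model, reviewing well strictly dominates shirking in every state, which mirrors in spirit, though not in detail, the unique Nash equilibrium the paper proves for its actual mechanism.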
Related papers
- Analysis of the ICML 2023 Ranking Data: Can Authors' Opinions of Their Own Papers Assist Peer Review in Machine Learning? [52.00419656272129]
We conducted an experiment during the 2023 International Conference on Machine Learning (ICML).
We received 1,342 rankings, each from a distinct author, pertaining to 2,592 submissions.
We focus on the Isotonic Mechanism, which calibrates raw review scores using author-provided rankings; a minimal sketch of this calibration appears after the list.
arXiv Detail & Related papers (2024-08-24T01:51:23Z) - GLIMPSE: Pragmatically Informative Multi-Document Summarization for Scholarly Reviews [25.291384842659397]
We introduce GLIMPSE, a summarization method designed to offer a concise yet comprehensive overview of scholarly reviews.
Unlike traditional consensus-based methods, GLIMPSE extracts both common and unique opinions from the reviews.
arXiv Detail & Related papers (2024-06-11T15:27:01Z) - Eliciting Honest Information From Authors Using Sequential Review [13.424398627546788]
We propose a sequential review mechanism that can truthfully elicit the ranking information from authors.
The key idea is to review an author's papers in a sequence based on the provided ranking, conditioning the review of each paper on the review scores of the previous papers.
arXiv Detail & Related papers (2023-11-24T17:27:39Z) - How do Authors' Perceptions of their Papers Compare with Co-authors' Perceptions and Peer-review Decisions? [87.00095008723181]
Authors overestimate the acceptance probability of their papers roughly three-fold.
Female authors exhibit a marginally higher (statistically significant) miscalibration than male authors.
At least 30% of respondents of both accepted and rejected papers said that their perception of their own paper improved after the review process.
arXiv Detail & Related papers (2022-11-22T15:59:30Z) - A Dataset on Malicious Paper Bidding in Peer Review [84.68308372858755]
Malicious reviewers strategically bid in order to unethically manipulate the paper assignment.
A critical impediment towards creating and evaluating methods to mitigate this issue is the lack of publicly-available data on malicious paper bidding.
We release a novel dataset, collected from a mock conference activity where participants were instructed to bid either honestly or maliciously.
arXiv Detail & Related papers (2022-06-24T20:23:33Z) - Ranking Scientific Papers Using Preference Learning [48.78161994501516]
We cast the final decision process as a paper ranking problem based on peer review texts and reviewer scores.
We introduce a novel, multi-faceted generic evaluation framework for making final decisions based on peer reviews.
arXiv Detail & Related papers (2021-09-02T19:41:47Z) - Auctions and Peer Prediction for Academic Peer Review [11.413240461538589]
We propose a novel peer prediction mechanism (H-DIPP) building on recent work in the information elicitation literature.
The revenue raised in the submission stage auction is used to pay reviewers based on the quality of their reviews in the reviewing stage.
arXiv Detail & Related papers (2021-08-27T23:47:15Z) - Can We Automate Scientific Reviewing? [89.50052670307434]
We discuss the possibility of using state-of-the-art natural language processing (NLP) models to generate first-pass peer reviews for scientific papers.
We collect a dataset of papers in the machine learning domain, annotate them with different aspects of content covered in each review, and train targeted summarization models that take in papers to generate reviews.
Comprehensive experimental results show that system-generated reviews tend to touch upon more aspects of the paper than human-written reviews.
arXiv Detail & Related papers (2021-01-30T07:16:53Z) - Prior and Prejudice: The Novice Reviewers' Bias against Resubmissions in Conference Peer Review [35.24369486197371]
Modern machine learning and computer science conferences are experiencing a surge in submissions that strains the quality of peer review.
Several conferences have started encouraging or even requiring authors to declare the previous submission history of their papers.
We investigate whether reviewers exhibit a bias caused by the knowledge that the submission under review was previously rejected at a similar venue.
arXiv Detail & Related papers (2020-11-30T09:35:37Z) - Mitigating Manipulation in Peer Review via Randomized Reviewer Assignments [96.114824979298]
Important challenges in conference peer review include reviewers maliciously attempting to get assigned to certain papers and "torpedo reviewing".
We present a framework that brings all these challenges under a common umbrella, together with a (randomized) algorithm for reviewer assignment; a sketch of the probability-capping step appears at the end of the list.
Our algorithms can limit the chance that any malicious reviewer gets assigned to their desired paper to 50% while producing assignments with over 90% of the total optimal similarity.
arXiv Detail & Related papers (2020-06-29T23:55:53Z)
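The Isotonic Mechanism entry above calibrates raw review scores against an author-provided ranking. A minimal sketch of that calibration step, assuming scikit-learn is available (the function name and data layout are ours, not the cited paper's):

```python
# Sketch of score calibration with an author-provided ranking
# (illustrative; follows the idea behind the Isotonic Mechanism,
# not the cited paper's exact implementation).
import numpy as np
from sklearn.isotonic import IsotonicRegression

def calibrate(raw_scores, author_ranking):
    """raw_scores[i] is the review score of paper i;
    author_ranking[i] is the author's rank of paper i (0 = best).

    Projects the raw scores onto the set of score vectors consistent
    with the ranking (non-increasing in rank), which is exactly an
    isotonic regression of the scores against the ranks.
    """
    ranks = np.asarray(author_ranking, dtype=float)
    scores = np.asarray(raw_scores, dtype=float)
    iso = IsotonicRegression(increasing=False)  # better rank -> higher score
    return iso.fit_transform(ranks, scores)

# An author ranks paper 1 above papers 0 and 2; the noisy raw scores
# disagree, and calibration reconciles them with the ranking.
print(calibrate(raw_scores=[6.0, 5.5, 4.0], author_ranking=[1, 0, 2]))
# -> [5.75 5.75 4.  ]  (the two inconsistent scores are pooled)
```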
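The randomized-assignment entry above caps the probability that any reviewer is matched to any given paper. A minimal sketch of that probability-capping step as a linear program, using scipy (the variable layout and constants are our assumptions; the cited paper additionally samples a concrete assignment whose marginals match the fractional solution, e.g., via a Birkhoff-von-Neumann decomposition, which is omitted here):

```python
# Sketch of a probability-capped fractional reviewer assignment
# (illustrative, not the cited paper's exact implementation).
import numpy as np
from scipy.optimize import linprog

def capped_assignment(similarity, cap=0.5, reviews_per_paper=1, max_load=2):
    """similarity[r, p]: affinity of reviewer r for paper p.
    Returns assignment probabilities x[r, p] maximizing total
    similarity subject to x[r, p] <= cap for every pair."""
    n_rev, n_pap = similarity.shape
    c = -similarity.ravel()  # linprog minimizes; we maximize similarity

    # Each paper p receives exactly `reviews_per_paper` (in expectation).
    A_eq = np.zeros((n_pap, n_rev * n_pap))
    for p in range(n_pap):
        A_eq[p, p::n_pap] = 1.0
    b_eq = np.full(n_pap, float(reviews_per_paper))

    # Each reviewer r handles at most `max_load` papers (in expectation).
    A_ub = np.zeros((n_rev, n_rev * n_pap))
    for r in range(n_rev):
        A_ub[r, r * n_pap:(r + 1) * n_pap] = 1.0
    b_ub = np.full(n_rev, float(max_load))

    # The cap: no reviewer-paper pair may exceed probability `cap`.
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0.0, cap), method="highs")
    return res.x.reshape(n_rev, n_pap)

S = np.array([[0.9, 0.1],    # a malicious reviewer wants paper 0 ...
              [0.8, 0.7],
              [0.2, 0.6]])
P = capped_assignment(S, cap=0.5)
print(P.max() <= 0.5 + 1e-9)  # ... but never gets it w.p. > 50%
```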