Eliciting Honest Information From Authors Using Sequential Review
- URL: http://arxiv.org/abs/2311.14619v1
- Date: Fri, 24 Nov 2023 17:27:39 GMT
- Title: Eliciting Honest Information From Authors Using Sequential Review
- Authors: Yichi Zhang, Grant Schoenebeck, Weijie Su
- Abstract summary: We propose a sequential review mechanism that can truthfully elicit the ranking information from authors.
The key idea is to review an author's papers in a sequence based on the provided ranking and to condition the review of the next paper on the review scores of the previous papers.
- Score: 13.424398627546788
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the setting of conference peer review, the conference aims to accept
high-quality papers and reject low-quality papers based on noisy review scores.
A recent work proposes the isotonic mechanism, which can elicit the ranking of
paper qualities from an author with multiple submissions to help improve the
conference's decisions. However, the isotonic mechanism relies on the
assumption that the author's utility is both increasing and convex in the
review score, which is often violated in peer review settings (e.g., when
authors aim to maximize the number of accepted
papers). In this paper, we propose a sequential review mechanism that can
truthfully elicit the ranking information from authors while only assuming the
agent's utility is increasing with respect to the true quality of her accepted
papers. The key idea is to review an author's papers in a sequence based on
the provided ranking and to condition the review of the next paper on the
review scores of the previous papers. Advantages of the sequential review
mechanism include 1) eliciting truthful ranking information in a more realistic
setting than prior work; 2) improving the quality of accepted papers, reducing
the reviewing workload, and increasing the average quality of papers being
reviewed; and 3) incentivizing authors to write fewer papers of higher quality.
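For context, here is a minimal sketch of the calibration idea behind the isotonic mechanism discussed above: the raw review scores are projected, in the least-squares sense, onto the set of score vectors consistent with the author's reported ranking. The function name, toy scores, and use of scikit-learn are illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def isotonic_calibrate(raw_scores, ranking_best_to_worst):
    """Project raw review scores onto the closest score vector that
    respects the author's reported ranking (an L2 isotonic regression)."""
    raw_scores = np.asarray(raw_scores, dtype=float)
    # Reorder scores from the author's worst-ranked paper to the best.
    order = np.asarray(ranking_best_to_worst)[::-1]
    ordered = raw_scores[order]
    # Fit the closest nondecreasing sequence to the reordered scores.
    calibrated = IsotonicRegression(increasing=True).fit_transform(
        np.arange(len(ordered)), ordered)
    # Scatter the calibrated scores back to the original paper positions.
    out = np.empty_like(raw_scores)
    out[order] = calibrated
    return out

# Toy example: the author ranks paper 2 best, then paper 0, then paper 1,
# but the noisy raw scores disagree, so calibration pools the conflict.
print(isotonic_calibrate([6.0, 7.0, 5.5], [2, 0, 1]))
```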
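The sequential review mechanism itself can be sketched with a simple stopping rule: the author's next-ranked paper is reviewed only while the previous papers clear an acceptance threshold. This conditioning policy is assumed for illustration; the paper's actual rule may differ, and all names and parameters below are hypothetical.

```python
import random

def sequential_review(papers_best_first, review, threshold=6.0):
    """Review papers in the author's self-reported order, conditioning
    each review on the scores of the previous papers via a stopping rule."""
    accepted, scores = [], []
    for paper in papers_best_first:
        score = review(paper)       # noisy signal of the paper's true quality
        scores.append(score)
        if score < threshold:       # this score gates the next review
            break                   # remaining papers are never reviewed
        accepted.append(paper)
    return accepted, scores

# Toy demo: a "paper" is just its true quality; reviews add Gaussian noise.
random.seed(0)
noisy_review = lambda q: q + random.gauss(0.0, 0.5)
print(sequential_review([8.2, 7.1, 5.0], noisy_review))
```

Under such a rule, placing a weak paper early can only truncate the sequence, so the author does best by ranking truthfully; and since papers after the first rejection are never reviewed, the reviewing workload drops.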
Related papers
- Analysis of the ICML 2023 Ranking Data: Can Authors' Opinions of Their Own Papers Assist Peer Review in Machine Learning? [52.00419656272129]
We conducted an experiment during the 2023 International Conference on Machine Learning (ICML).
We received 1,342 rankings, each from a distinct author, pertaining to 2,592 submissions.
We focus on the Isotonic Mechanism, which calibrates raw review scores using author-provided rankings.
arXiv Detail & Related papers (2024-08-24T01:51:23Z) - CausalCite: A Causal Formulation of Paper Citations [80.82622421055734]
CausalCite is a new way to measure the significance of a paper by assessing the causal impact of the paper on its follow-up papers.
It is based on a novel causal inference method, TextMatch, which adapts the traditional matching framework to high-dimensional text embeddings.
We demonstrate the effectiveness of CausalCite on various criteria, such as high correlation with paper impact as reported by scientific experts.
arXiv Detail & Related papers (2023-11-05T23:09:39Z) - No more Reviewer #2: Subverting Automatic Paper-Reviewer Assignment using Adversarial Learning [25.70062566419791]
We show that this automation can be manipulated using adversarial learning.
We propose an attack that adapts a given paper so that it misleads the assignment and selects its own reviewers.
arXiv Detail & Related papers (2023-03-25T11:34:27Z) - How do Authors' Perceptions of their Papers Compare with Co-authors' Perceptions and Peer-review Decisions? [87.00095008723181]
Authors have roughly a three-fold overestimate of the acceptance probability of their papers.
Female authors exhibit marginally higher miscalibration than male authors, a difference that is statistically significant.
At least 30% of respondents, for both accepted and rejected papers, said that their perception of their own paper improved after the review process.
arXiv Detail & Related papers (2022-11-22T15:59:30Z) - Integrating Rankings into Quantized Scores in Peer Review [61.27794774537103]
In peer review, reviewers are usually asked to provide scores for the papers, but these quantized scores convey only coarse information and often leave papers tied.
To mitigate this issue, conferences have started to ask reviewers to additionally provide a ranking of the papers they have reviewed.
There is no standard procedure for using this ranking information, and Area Chairs may use it in different ways.
We take a principled approach to integrate the ranking information into the scores.
arXiv Detail & Related papers (2022-04-05T19:39:13Z) - Ranking Scientific Papers Using Preference Learning [48.78161994501516]
We cast the decision process as a paper ranking problem based on peer review texts and reviewer scores.
We introduce a novel, multi-faceted generic evaluation framework for making final decisions based on peer reviews.
arXiv Detail & Related papers (2021-09-02T19:41:47Z) - Auctions and Peer Prediction for Academic Peer Review [11.413240461538589]
We propose a novel peer prediction mechanism (H-DIPP) building on recent work in the information elicitation literature.
The revenue raised in the submission-stage auction is used to pay reviewers based on the quality of their reviews in the reviewing stage.
arXiv Detail & Related papers (2021-08-27T23:47:15Z) - Making Paper Reviewing Robust to Bid Manipulation Attacks [44.34601846490532]
Anecdotal evidence suggests that some reviewers bid on papers by "friends" or colluding authors.
We develop a novel approach for paper bidding and assignment that is much more robust against such attacks.
In addition to being more robust, the quality of our paper review assignments is comparable to that of current, non-robust assignment approaches.
arXiv Detail & Related papers (2021-02-09T21:24:16Z) - Hierarchical Bi-Directional Self-Attention Networks for Paper Review Rating Recommendation [81.55533657694016]
We propose a Hierarchical bi-directional self-attention Network framework (HabNet) for paper review rating prediction and recommendation.
Specifically, we leverage the hierarchical structure of the paper reviews with three levels of encoders: a sentence encoder (level one), an intra-review encoder (level two), and an inter-review encoder (level three).
We are able to identify useful predictors to make the final acceptance decision, as well as to help discover the inconsistency between numerical review ratings and text sentiment conveyed by reviewers.
arXiv Detail & Related papers (2020-11-02T08:07:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.