You Are the Best Reviewer of Your Own Papers: An Owner-Assisted Scoring
Mechanism
- URL: http://arxiv.org/abs/2110.14802v1
- Date: Wed, 27 Oct 2021 22:11:29 GMT
- Authors: Weijie J. Su
- Abstract summary: The Isotonic Mechanism improves on imprecise raw scores by leveraging certain information that the owner is incentivized to provide.
It reports adjusted scores for the items by solving a convex optimization problem.
I prove that the adjusted scores provided by this owner-assisted mechanism are indeed significantly more accurate than the raw scores provided by the reviewers.
- Score: 17.006003864727408
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: I consider the setting where reviewers offer very noisy scores for a number
of items for the selection of high-quality ones (e.g., peer review of large
conference proceedings) whereas the owner of these items knows the true
underlying scores but prefers not to provide this information. To address this
withholding of information, in this paper, I introduce the Isotonic
Mechanism, a simple and efficient approach to improving on the imprecise raw
scores by leveraging certain information that the owner is incentivized to
provide. This mechanism takes as input the ranking of the items from best to
worst provided by the owner, in addition to the raw scores provided by the
reviewers. It reports adjusted scores for the items by solving a convex
optimization problem. Under certain conditions, I show that the owner's optimal
strategy is to honestly report the true ranking of the items to her best
knowledge in order to maximize the expected utility. Moreover, I prove that the
adjusted scores provided by this owner-assisted mechanism are indeed
significantly more accurate than the raw scores provided by the reviewers. This
paper concludes with several extensions of the Isotonic Mechanism and some
refinements of the mechanism for practical considerations.
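Concretely, when the owner reports a full ranking of her items, the adjusted scores are the projection of the raw scores onto the set of score vectors consistent with that ranking, a convex problem solvable by isotonic regression. The following is a minimal sketch of this adjustment step (not the paper's reference implementation; the function and variable names are illustrative) using the pool-adjacent-violators algorithm:

```python
# Sketch of the Isotonic Mechanism's score-adjustment step:
# given reviewers' raw scores y and the owner's ranking (best to worst),
# solve  min ||x - y||^2  subject to x being non-increasing along the
# ranking, via the pool-adjacent-violators algorithm (PAVA).

def isotonic_adjust(raw_scores, ranking):
    """Return adjusted scores that respect the owner-provided ranking.

    raw_scores: list of reviewer scores, indexed by item.
    ranking: item indices ordered best to worst (owner-provided).
    """
    # Reorder scores best-to-worst so the constraint is "non-increasing".
    y = [raw_scores[i] for i in ranking]

    # PAVA: maintain blocks of pooled values as [sum, count]; whenever an
    # earlier block's mean falls below a later one's (a violation of the
    # non-increasing constraint), merge the two blocks.
    blocks = []
    for v in y:
        blocks.append([v, 1])
        while (len(blocks) > 1
               and blocks[-2][0] / blocks[-2][1] < blocks[-1][0] / blocks[-1][1]):
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c

    # Expand each block back to per-item fitted values (the block mean).
    fitted = []
    for s, c in blocks:
        fitted.extend([s / c] * c)

    # Map fitted values back to the original item order.
    adjusted = [0.0] * len(raw_scores)
    for pos, item in enumerate(ranking):
        adjusted[item] = fitted[pos]
    return adjusted
```

For example, with raw scores (3, 5, 4) and a claimed ranking saying item 0 is best, the violating pair (3, 5) is pooled to their mean, yielding adjusted scores (4, 4, 4) that obey the ranking while staying as close as possible to the raw scores.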
Related papers
- Analysis of the ICML 2023 Ranking Data: Can Authors' Opinions of Their Own Papers Assist Peer Review in Machine Learning? [52.00419656272129]
We conducted an experiment during the 2023 International Conference on Machine Learning (ICML)
We received 1,342 rankings, each from a distinct author, pertaining to 2,592 submissions.
We focus on the Isotonic Mechanism, which calibrates raw review scores using author-provided rankings.
arXiv Detail & Related papers (2024-08-24T01:51:23Z)
- Analytical and Empirical Study of Herding Effects in Recommendation Systems [72.6693986712978]
We study how to manage product ratings via rating aggregation rules and shortlisted representative reviews.
We show that proper recency aware rating aggregation rules can improve the speed of convergence in Amazon and TripAdvisor.
arXiv Detail & Related papers (2024-08-20T14:29:23Z)
- Eliciting Honest Information From Authors Using Sequential Review [13.424398627546788]
We propose a sequential review mechanism that can truthfully elicit the ranking information from authors.
The key idea is to review the papers of an author in a sequence based on the provided ranking and conditioning the review of the next paper on the review scores of the previous papers.
arXiv Detail & Related papers (2023-11-24T17:27:39Z)
- The Isotonic Mechanism for Exponential Family Estimation [31.542906034919977]
In 2023, the International Conference on Machine Learning (ICML) required authors with multiple submissions to rank their submissions based on perceived quality.
In this paper, we aim to employ these author-specified rankings to enhance peer review in machine learning and artificial intelligence conferences.
This mechanism generates adjusted scores that closely align with the original scores while adhering to author-specified rankings.
arXiv Detail & Related papers (2023-04-21T17:59:08Z)
- Tradeoffs in Preventing Manipulation in Paper Bidding for Reviewer Assignment [89.38213318211731]
Despite the benefits of using bids, reliance on paper bidding can allow malicious reviewers to manipulate the paper assignment for unethical purposes.
Several different approaches to preventing this manipulation have been proposed and deployed.
In this paper, we enumerate certain desirable properties that algorithms for addressing bid manipulation should satisfy.
arXiv Detail & Related papers (2022-07-22T19:58:17Z)
- Recommendation Systems with Distribution-Free Reliability Guarantees [83.80644194980042]
We show how to return a set of items rigorously guaranteed to contain mostly good items.
Our procedure endows any ranking model with rigorous finite-sample control of the false discovery rate.
We evaluate our methods on the Yahoo! Learning to Rank and MSMarco datasets.
arXiv Detail & Related papers (2022-07-04T17:49:25Z)
- An Efficient and Accurate Rough Set for Feature Selection, Classification and Knowledge Representation [89.5951484413208]
This paper presents a data mining method based on rough sets that performs feature selection, classification, and knowledge representation simultaneously.
We first identify that rough sets can be ineffective due to overfitting, especially when processing noisy attributes, and propose a robust measure of an attribute's relevance, called relative importance.
Experimental results on public benchmark data sets show that the proposed framework achieves higher accuracy than seven popular or state-of-the-art feature selection methods.
arXiv Detail & Related papers (2021-12-29T12:45:49Z)
- Peer Selection with Noisy Assessments [43.307040330622186]
We extend PeerNomination, the most accurate peer reviewing algorithm to date, into WeightedPeerNomination.
We show analytically that a weighting scheme can improve the overall accuracy of the selection significantly.
arXiv Detail & Related papers (2021-07-21T14:47:11Z)
- Explaining reputation assessments [6.87724532311602]
We propose an approach to explain the rationale behind assessments from quantitative reputation models.
Our approach adapts, extends and combines existing approaches for explaining decisions made using multi-attribute decision models.
arXiv Detail & Related papers (2020-06-15T23:19:35Z)
- Generating Fact Checking Explanations [52.879658637466605]
A crucial piece of the puzzle that is still missing is to understand how to automate the most elaborate part of the process.
This paper provides the first study of how these explanations can be generated automatically based on available claim context.
Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system.
arXiv Detail & Related papers (2020-04-13T05:23:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.