How do Authors' Perceptions of their Papers Compare with Co-authors'
Perceptions and Peer-review Decisions?
- URL: http://arxiv.org/abs/2211.12966v1
- Date: Tue, 22 Nov 2022 15:59:30 GMT
- Title: How do Authors' Perceptions of their Papers Compare with Co-authors'
Perceptions and Peer-review Decisions?
- Authors: Charvi Rastogi, Ivan Stelmakh, Alina Beygelzimer, Yann N. Dauphin,
Percy Liang, Jennifer Wortman Vaughan, Zhenyu Xue, Hal Daumé III, Emma
Pierson, and Nihar B. Shah
- Abstract summary: Authors have roughly a three-fold overestimate of the acceptance probability of their papers.
Female authors exhibit a marginally higher (statistically significant) miscalibration than male authors.
At least 30% of respondents for both accepted and rejected papers said that their perception of their own paper improved after the review process.
- Score: 87.00095008723181
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: How do author perceptions match up to the outcomes of the peer-review process
and perceptions of others? In a top-tier computer science conference (NeurIPS
2021) with more than 23,000 submitting authors and 9,000 submitted papers, we
survey the authors on three questions: (i) their predicted probability of
acceptance for each of their papers, (ii) their perceived ranking of their own
papers based on scientific contribution, and (iii) the change in their
perception about their own papers after seeing the reviews. The salient results
are: (1) Authors have roughly a three-fold overestimate of the acceptance
probability of their papers: The median prediction is 70% for an approximately
25% acceptance rate. (2) Female authors exhibit a marginally higher
(statistically significant) miscalibration than male authors; predictions of
authors invited to serve as meta-reviewers or reviewers are similarly
calibrated, but better than those of authors who were not invited to review.
(3) Authors' relative ranking of the scientific contribution of two of their
own submissions generally agrees (93%) with their predicted acceptance
probabilities, but in a notable 7% of responses authors believe their better
paper will face a worse outcome. (4) The author-provided rankings disagreed
with the
peer-review decisions about a third of the time; when co-authors ranked their
jointly authored papers, co-authors disagreed at a similar rate -- about a
third of the time. (5) At least 30% of respondents for both accepted and
rejected papers said that their perception of their own paper improved after
the review process. Stakeholders in peer review should take these findings
into account when setting their expectations of peer review.
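As a numeric sanity check of result (1), here is a minimal sketch, assuming entirely synthetic survey responses (the real data are not reproduced here), of how the roughly three-fold overestimate falls out of the median prediction and the acceptance rate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the survey data: one row per paper, with the
# author's predicted acceptance probability and the actual decision.
n_papers = 9000
predicted = rng.beta(7, 3, size=n_papers)   # optimistic: median near 0.7
accepted = rng.random(n_papers) < 0.25      # ~25% acceptance rate

median_prediction = np.median(predicted)
acceptance_rate = accepted.mean()
print(f"median prediction:   {median_prediction:.2f}")   # ~0.70
print(f"acceptance rate:     {acceptance_rate:.2f}")     # ~0.25
print(f"overestimate factor: {median_prediction / acceptance_rate:.1f}x")
```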
Related papers
- Analysis of the ICML 2023 Ranking Data: Can Authors' Opinions of Their Own Papers Assist Peer Review in Machine Learning? [52.00419656272129]
We conducted an experiment during the 2023 International Conference on Machine Learning (ICML).
We received 1,342 rankings, each from a distinct author, pertaining to 2,592 submissions.
We focus on the Isotonic Mechanism, which calibrates raw review scores using author-provided rankings.
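The Isotonic Mechanism amounts to isotonic regression of the raw review scores onto the author's claimed ordering. A minimal self-contained sketch using pool-adjacent-violators, with made-up scores (an illustration, not the ICML 2023 implementation):

```python
def isotonic_mechanism(raw_scores, author_ranking):
    """Calibrate raw review scores against the author's claimed ranking.

    author_ranking lists paper indices from best to worst, so the
    calibrated scores must be non-increasing along that order. This is
    isotonic regression, computed with pool-adjacent-violators (PAV).
    """
    # Reorder scores from the author's best paper to their worst.
    y = [raw_scores[i] for i in author_ranking]
    # Pool adjacent blocks whose means violate the non-increasing
    # ordering and replace each pooled block with its average.
    merged = []
    for value in y:
        merged.append([value])
        while len(merged) > 1 and (
            sum(merged[-2]) / len(merged[-2]) < sum(merged[-1]) / len(merged[-1])
        ):
            merged[-2].extend(merged.pop())
    fitted = [sum(b) / len(b) for b in merged for _ in b]
    # Map calibrated scores back to the original paper indices.
    calibrated = [0.0] * len(raw_scores)
    for position, paper in enumerate(author_ranking):
        calibrated[paper] = fitted[position]
    return calibrated

# The author claims paper 0 > paper 2 > paper 3 > paper 1, but the raw
# scores put paper 2 above paper 0; PAV pools the two to 6.5 each.
print(isotonic_mechanism([6.0, 4.0, 7.0, 5.0], author_ranking=[0, 2, 3, 1]))
# -> [6.5, 4.0, 6.5, 5.0]
```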
arXiv Detail & Related papers (2024-08-24T01:51:23Z)
- My part is bigger than yours -- assessment within a group of peers [0.0]
A project (e.g., writing a collaborative research paper) is often a group effort. At the end, each contributor identifies their contribution, often verbally.
This leads to the question of what (percentage) share in the creation of the paper is due to each individual author.
In this paper, we present simple models that allow aggregation of experts' views, linking the priority of each expert's preference directly to the assessments made by the other experts.
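The paper's specific models are not spelled out in this summary; as a generic, hedged illustration of aggregating such views (a stand-in, not the authors' method), one could average the experts' share matrices while down-weighting self-assessments:

```python
import numpy as np

# Each row is one expert's view of everyone's percentage share in the
# paper (rows sum to 100). The numbers are made up for illustration.
assessments = np.array([
    [50.0, 30.0, 20.0],   # expert A's view of authors A, B, C
    [40.0, 40.0, 20.0],   # expert B's view
    [35.0, 30.0, 35.0],   # expert C's view
])

# Down-weight each expert's self-assessment, average the views per
# author, and renormalize so the aggregated shares sum to 100%.
weights = np.ones_like(assessments)
np.fill_diagonal(weights, 0.5)
aggregated = (weights * assessments).sum(axis=0) / weights.sum(axis=0)
aggregated = 100.0 * aggregated / aggregated.sum()
print(np.round(aggregated, 1))   # -> [42.1 33.7 24.2]
```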
arXiv Detail & Related papers (2024-07-01T22:54:51Z)
- Position: AI/ML Influencers Have a Place in the Academic Process [82.2069685579588]
We investigate the role of social media influencers in enhancing the visibility of machine learning research.
We have compiled a comprehensive dataset of over 8,000 papers, spanning tweets from December 2018 to October 2023.
Our statistical and causal inference analysis reveals a significant increase in citations for papers endorsed by these influencers.
arXiv Detail & Related papers (2024-01-24T20:05:49Z)
- Eliciting Honest Information From Authors Using Sequential Review [13.424398627546788]
We propose a sequential review mechanism that can truthfully elicit the ranking information from authors.
The key idea is to review an author's papers in sequence, following the provided ranking, and to condition the review of the next paper on the review scores of the previous papers.
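A toy sketch of the idea; the capping rule, slack parameter, and noise model below are illustrative assumptions, not the paper's actual mechanism:

```python
import random

def sequential_review(papers_best_to_worst, review, slack=0.5):
    """Review an author's papers in their claimed order, capping each
    paper's score using the scores of the papers reviewed before it.
    Overclaiming becomes costly: placing a weak paper early drags down
    what later, genuinely strong papers can earn.
    """
    scores, cap = [], float("inf")
    for paper in papers_best_to_worst:
        raw = review(paper)            # independent review of this paper
        score = min(raw, cap)          # conditioned on earlier scores
        scores.append(score)
        cap = score + slack            # next paper can barely exceed this
    return scores

# Hypothetical paper qualities; the "reviewer" sees quality plus noise.
random.seed(0)
review = lambda quality: quality + random.gauss(0.0, 0.5)
print(sequential_review([7.5, 6.0, 4.0], review))   # truthful ranking
print(sequential_review([4.0, 7.5, 6.0], review))   # weak paper first
```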
arXiv Detail & Related papers (2023-11-24T17:27:39Z)
- Estimating the Causal Effect of Early ArXiving on Paper Acceptance [56.538813945721685]
We estimate the effect of arXiving a paper before the reviewing period (early arXiving) on its acceptance to the conference.
Our results suggest that early arXiving may have a small effect on a paper's chances of acceptance.
arXiv Detail & Related papers (2023-06-24T07:45:38Z)
- Has the Machine Learning Review Process Become More Arbitrary as the Field Has Grown? The NeurIPS 2021 Consistency Experiment [86.77085171670323]
We present a larger-scale variant of the 2014 NeurIPS experiment in which 10% of conference submissions were reviewed by two independent committees to quantify the randomness in the review process.
We observe that the two committees disagree on their accept/reject recommendations for 23% of the papers and that, consistent with the results from 2014, approximately half of the list of accepted papers would change if the review process were randomly rerun.
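A back-of-the-envelope check, assuming the two committees are exchangeable (an assumption, but consistent with the experiment's symmetric design), shows how 23% disagreement translates into roughly half of the accepted list changing:

```python
p_accept = 0.25      # approximate NeurIPS acceptance rate
p_disagree = 0.23    # committees disagree on accept/reject

# If the two committees are exchangeable, each one-sided disagreement
# (accepted by one committee, rejected by the other) is half the total.
p_accept_then_reject = p_disagree / 2

# Fraction of one committee's accepted papers the other would reject:
turnover = p_accept_then_reject / p_accept
print(f"{turnover:.0%} of the accepted list changes on a rerun")  # 46%
```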
arXiv Detail & Related papers (2023-06-05T21:26:12Z)
- Ranking Scientific Papers Using Preference Learning [48.78161994501516]
We cast the task of making final decisions as a paper ranking problem based on peer review texts and reviewer scores.
We introduce a novel, multi-faceted generic evaluation framework for making final decisions based on peer reviews.
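The framework itself is multi-faceted; as a hedged sketch of one standard preference-learning ingredient, here is a Bradley-Terry fit over pairwise paper preferences (an illustration, not the paper's exact model):

```python
import numpy as np

# Pairwise preferences (winner, loser) over four hypothetical papers,
# e.g. derived from reviewer scores or review-text comparisons.
pairs = [(0, 1), (0, 2), (1, 2), (3, 1), (0, 3), (3, 2)]
n_papers = 4

# Bradley-Terry model: P(i beats j) = sigmoid(s_i - s_j). Fit the
# latent scores s by gradient ascent on the log-likelihood; a small L2
# term keeps scores finite when one paper wins every comparison.
s = np.zeros(n_papers)
for _ in range(500):
    grad = np.zeros(n_papers)
    for winner, loser in pairs:
        p_win = 1.0 / (1.0 + np.exp(s[loser] - s[winner]))
        grad[winner] += 1.0 - p_win
        grad[loser] -= 1.0 - p_win
    s += 0.1 * (grad - 0.01 * s)

print("scores: ", np.round(s, 2))
print("ranking:", np.argsort(-s))   # best paper first -> [0 3 1 2]
```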
arXiv Detail & Related papers (2021-09-02T19:41:47Z)
- Does double-blind peer-review reduce bias? Evidence from a top computer science conference [2.642698101441705]
We study the effects of double-blind peer review on prestige bias by analyzing the peer review files of 5027 papers submitted to the International Conference on Learning Representations.
We find that after switching to double-blind review, the scores given to the most prestigious authors significantly decreased.
We show that double-blind peer review may have improved the quality of the selections by limiting other (non-author-prestige) biases.
arXiv Detail & Related papers (2021-01-07T18:59:26Z)
- Characterising authors on the extent of their paper acceptance: A case study of the Journal of High Energy Physics [4.402336973466853]
We investigate the profile and peer review text of authors whose papers almost always get accepted at a venue.
Authors with a high acceptance rate are likely to have a high number of citations, a high $h$-index, a larger number of collaborators, etc.
arXiv Detail & Related papers (2020-06-12T03:26:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.