De-anonymization of authors through arXiv submissions during
double-blind review
- URL: http://arxiv.org/abs/2007.00177v1
- Date: Wed, 1 Jul 2020 01:40:06 GMT
- Title: De-anonymization of authors through arXiv submissions during
double-blind review
- Authors: Homanga Bharadhwaj, Dylan Turpin, Animesh Garg, Ashton Anderson
- Abstract summary: We investigate the effects of releasing arXiv preprints of papers undergoing a double-blind review process.
We find statistically significant evidence of a positive correlation between acceptance rates and arXiv release for papers by high-reputation authors.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we investigate the effects of releasing arXiv preprints of
papers that are undergoing a double-blind review process. In particular, we ask
the following research question: What is the relation between de-anonymization
of authors through arXiv preprints and acceptance of a research paper at a
(nominally) double-blind venue? Comparing two conditions, papers that are
released on arXiv before the review phase and papers that are not, we examine
the correlation between the reputation of their authors and the review scores and
acceptance decisions. By analyzing a dataset of ICLR 2020 and ICLR 2019
submissions (n=5050), we find statistically significant evidence of a positive
correlation between acceptance rates and arXiv release for papers by
high-reputation authors. To understand this observed association better, we
perform additional analyses based on self-specified confidence scores of
reviewers and observe that less-confident reviewers are more likely to assign
high review scores to papers with well-known authors and low review scores to
papers with less-known authors, where reputation is quantified in terms of
number of Google Scholar citations. We emphasize upfront that our results are
purely correlational and we neither can nor intend to make any causal claims. A
blog post accompanying the paper and our scraping code will be linked on the
project website: https://sites.google.com/view/deanon-arxiv/home
Related papers
- CausalCite: A Causal Formulation of Paper Citations [80.82622421055734]
CausalCite is a new way to measure the significance of a paper by assessing the causal impact of the paper on its follow-up papers.
It is based on a novel causal inference method, TextMatch, which adapts the traditional matching framework to high-dimensional text embeddings.
We demonstrate the effectiveness of CausalCite on various criteria, such as high correlation with paper impact as reported by scientific experts.
arXiv Detail & Related papers (2023-11-05T23:09:39Z)
- Estimating the Causal Effect of Early ArXiving on Paper Acceptance [56.538813945721685]
We estimate the effect of arXiving a paper before the reviewing period (early arXiving) on its acceptance to the conference.
Our results suggest that early arXiving may have a small effect on a paper's chances of acceptance.
arXiv Detail & Related papers (2023-06-24T07:45:38Z)
- How do Authors' Perceptions of their Papers Compare with Co-authors' Perceptions and Peer-review Decisions? [87.00095008723181]
Authors overestimate the acceptance probability of their papers by roughly a factor of three.
Female authors exhibit a marginally higher (statistically significant) miscalibration than male authors.
At least 30% of respondents of both accepted and rejected papers said that their perception of their own paper improved after the review process.
arXiv Detail & Related papers (2022-11-22T15:59:30Z)
- Cracking Double-Blind Review: Authorship Attribution with Deep Learning [43.483063713471935]
We propose a transformer-based, neural-network architecture to attribute an anonymous manuscript to an author.
We leverage all research papers publicly available on arXiv, amounting to over 2 million manuscripts.
Our method achieves an unprecedented authorship attribution accuracy, where up to 73% of papers are attributed correctly.
arXiv Detail & Related papers (2022-11-14T15:50:24Z)
- Inconsistency in Conference Peer Review: Revisiting the 2014 NeurIPS Experiment [26.30237757653724]
We revisit the 2014 NeurIPS experiment that examined inconsistency in conference peer review.
We find that for accepted papers, there is no correlation between quality scores and the impact of the paper.
arXiv Detail & Related papers (2021-09-20T18:06:22Z)
- Can We Automate Scientific Reviewing? [89.50052670307434]
We discuss the possibility of using state-of-the-art natural language processing (NLP) models to generate first-pass peer reviews for scientific papers.
We collect a dataset of papers in the machine learning domain, annotate them with different aspects of content covered in each review, and train targeted summarization models that take in papers to generate reviews.
Comprehensive experimental results show that system-generated reviews tend to touch upon more aspects of the paper than human-written reviews.
arXiv Detail & Related papers (2021-01-30T07:16:53Z)
- Does double-blind peer-review reduce bias? Evidence from a top computer science conference [2.642698101441705]
We analyze the effects of double-blind peer review on prestige bias by analyzing the peer review files of 5027 papers submitted to the International Conference on Learning Representations.
We find that after switching to double-blind review, the scores given to the most prestigious authors significantly decreased.
We show that double-blind peer review may have improved the quality of the selections by limiting other (non-author-prestige) biases.
arXiv Detail & Related papers (2021-01-07T18:59:26Z)
- ArXiving Before Submission Helps Everyone [38.09600429721343]
We analyze the pros and cons of arXiving papers.
We see no reason why anyone but the authors should decide whether to arXiv a paper or not.
arXiv Detail & Related papers (2020-10-11T22:26:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.