Estimating the Causal Effect of Early ArXiving on Paper Acceptance
- URL: http://arxiv.org/abs/2306.13891v2
- Date: Tue, 20 Feb 2024 06:37:12 GMT
- Title: Estimating the Causal Effect of Early ArXiving on Paper Acceptance
- Authors: Yanai Elazar, Jiayao Zhang, David Wadden, Bo Zhang, Noah A. Smith
- Abstract summary: We estimate the effect of arXiving a paper before the reviewing period (early arXiving) on its acceptance to the conference.
Our results suggest that early arXiving may have a small effect on a paper's chances of acceptance.
- Score: 56.538813945721685
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: What is the effect of releasing a preprint of a paper before it is submitted
for peer review? No randomized controlled trial has been conducted, so we turn
to observational data to answer this question. We use data from the ICLR
conference (2018--2022) and apply methods from causal inference to estimate the
effect of arXiving a paper before the reviewing period (early arXiving) on its
acceptance to the conference. Adjusting for confounders such as topic, authors,
and quality, we may estimate the causal effect. However, since quality is a
challenging construct to estimate, we apply the negative outcome control method,
using paper citation count as a control variable to debias the quality
confounding effect. Our results suggest that early arXiving may have a small
effect on a paper's chances of acceptance. However, this effect (when it exists)
does not differ significantly across different groups of authors, as grouped by
author citation count and institute rank. This suggests that early arXiving
does not provide an advantage to any particular group.
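To make the negative outcome control idea concrete, below is a minimal sketch on synthetic data. It is not the paper's pipeline: the variable names, the logistic/linear models, and the simulated "quality" confounder are illustrative assumptions. The point is only the diagnostic logic: citation count should not be directly affected by early arXiving, so any apparent "effect" on it flags residual confounding by quality, which can then be used to debias the acceptance estimate.
```python
# Minimal sketch of negative outcome control on synthetic data.
# All names and functional forms here are illustrative, not the paper's setup.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000

# Unobserved confounder: latent paper "quality".
quality = rng.normal(size=n)

# Treatment: early arXiving, partly driven by quality.
early_arxiv = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.8 * quality - 0.5))))

# Primary outcome: acceptance, driven mostly by quality plus a small treatment effect.
accept = rng.binomial(1, 1.0 / (1.0 + np.exp(-(1.5 * quality + 0.1 * early_arxiv - 0.2))))

# Negative control outcome: citation count, affected by quality but (by assumption)
# not directly by early arXiving itself.
citations = np.exp(quality + rng.normal(scale=0.5, size=n))

X = sm.add_constant(early_arxiv)

# Naive estimate on acceptance: biased because quality is not adjusted for.
naive_effect = sm.Logit(accept, X).fit(disp=0).params[1]

# Apparent "effect" on the negative control: should be zero under no confounding,
# so a nonzero estimate gauges the bias introduced by the quality confounder.
control_effect = sm.OLS(np.log(citations), X).fit().params[1]

print(f"naive log-odds effect on acceptance: {naive_effect:.3f}")
print(f"apparent effect on log citations (confounding gauge): {control_effect:.3f}")
```
In the paper's setting, the adjustment also covers observed confounders such as topic and authors; this sketch omits them for brevity.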
Related papers
- CausalCite: A Causal Formulation of Paper Citations [80.82622421055734]
CausalCite is a new way to measure the significance of a paper by assessing the causal impact of the paper on its follow-up papers.
It is based on a novel causal inference method, TextMatch, which adapts the traditional matching framework to high-dimensional text embeddings (a rough sketch of this matching idea appears after the related-papers list below).
We demonstrate the effectiveness of CausalCite on various criteria, such as high correlation with paper impact as reported by scientific experts.
arXiv Detail & Related papers (2023-11-05T23:09:39Z)
- Fusion of the Power from Citations: Enhance your Influence by Integrating Information from References [3.607567777043649]
This study formulates a prediction problem: identifying whether a paper can increase scholars' influence.
By applying the proposed framework, scholars can assess whether their papers will improve their influence in the future.
arXiv Detail & Related papers (2023-10-27T19:51:44Z)
- Too Good To Be True: performance overestimation in (re)current practices for Human Activity Recognition [49.1574468325115]
Sliding windows for data segmentation followed by standard random k-fold cross-validation produce biased results.
It is important to raise awareness in the scientific community about this problem, whose negative effects are being overlooked.
Several experiments with different types of datasets and different types of classification models allow us to exhibit the problem and show it persists independently of the method or dataset.
arXiv Detail & Related papers (2023-10-18T13:24:05Z)
- Inconsistency in Conference Peer Review: Revisiting the 2014 NeurIPS Experiment [26.30237757653724]
We revisit the 2014 NeurIPS experiment that examined inconsistency in conference peer review.
We find that for accepted papers, there is no correlation between quality scores and impact of the paper.
arXiv Detail & Related papers (2021-09-20T18:06:22Z)
- Ranking Scientific Papers Using Preference Learning [48.78161994501516]
We cast it as a paper ranking problem based on peer review texts and reviewer scores.
We introduce a novel, multi-faceted generic evaluation framework for making final decisions based on peer reviews.
arXiv Detail & Related papers (2021-09-02T19:41:47Z)
- Near-Optimal Reviewer Splitting in Two-Phase Paper Reviewing and Conference Experiment Design [76.40919326501512]
We consider the question: how should reviewers be divided between phases or conditions in order to maximize total assignment similarity?
We empirically show that across several datasets pertaining to real conference data, dividing reviewers between phases/conditions uniformly at random allows an assignment that is nearly as good as the oracle optimal assignment.
arXiv Detail & Related papers (2021-08-13T19:29:41Z)
- Poincare: Recommending Publication Venues via Treatment Effect Estimation [40.60905158071766]
We use a bias correction method to estimate the potential impact of choosing a publication venue effectively.
We highlight the effectiveness of our method using paper data from computer science conferences.
arXiv Detail & Related papers (2020-10-19T00:50:48Z)
- De-anonymization of authors through arXiv submissions during double-blind review [33.15282901539867]
We investigate the effects of releasing arXiv preprints of papers undergoing a double-blind review process.
We find statistically significant evidence of a positive correlation between acceptance rates and the release on arXiv of papers by high-reputation authors.
arXiv Detail & Related papers (2020-07-01T01:40:06Z)
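The CausalCite entry above mentions TextMatch, which adapts matching to high-dimensional text embeddings; the sketch below illustrates that general idea only. The function name, the random vectors standing in for real paper embeddings, and the simple 1-nearest-neighbor estimator are assumptions for illustration, not the authors' implementation.
```python
# Illustrative sketch of matching on text embeddings (in the spirit of TextMatch).
# The function name, random "embeddings", and 1-NN estimator are assumptions.
import numpy as np

def one_nn_matched_effect(treated_emb, control_emb, treated_y, control_y):
    """Match each treated paper to its most similar control paper by cosine
    similarity of embeddings, then average the outcome differences."""
    t = treated_emb / np.linalg.norm(treated_emb, axis=1, keepdims=True)
    c = control_emb / np.linalg.norm(control_emb, axis=1, keepdims=True)
    sims = t @ c.T                      # cosine similarity matrix
    matches = sims.argmax(axis=1)       # index of best control match per treated paper
    return float(np.mean(treated_y - control_y[matches]))

# Toy usage with random vectors standing in for real paper embeddings.
rng = np.random.default_rng(1)
effect = one_nn_matched_effect(
    rng.normal(size=(100, 64)), rng.normal(size=(500, 64)),
    rng.normal(size=100), rng.normal(size=500),
)
print(f"1-NN matched estimate: {effect:.3f}")
```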
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.