How to Find Fantastic Papers: Self-Rankings as a Powerful Predictor of Scientific Impact Beyond Peer Review
- URL: http://arxiv.org/abs/2510.02143v1
- Date: Thu, 02 Oct 2025 15:50:21 GMT
- Title: How to Find Fantastic Papers: Self-Rankings as a Powerful Predictor of Scientific Impact Beyond Peer Review
- Authors: Buxin Su, Natalie Collina, Garrett Wen, Didong Li, Kyunghyun Cho, Jianqing Fan, Bingxin Zhao, Weijie Su
- Abstract summary: We investigate an underexplored measure for identifying high-impact research: authors' own rankings of their multiple submissions to the same AI conference. We conducted a large-scale experiment at a leading AI conference, where 1,342 researchers self-ranked their 2,592 submissions by perceived quality. We found that papers ranked highest by their authors received twice as many citations as their lowest-ranked counterparts.
- Score: 41.30621399289495
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Peer review in academic research aims not only to ensure factual correctness but also to identify work of high scientific potential that can shape future research directions. This task is especially critical in fast-moving fields such as artificial intelligence (AI), yet it has become increasingly difficult given the rapid growth of submissions. In this paper, we investigate an underexplored measure for identifying high-impact research: authors' own rankings of their multiple submissions to the same AI conference. Grounded in game-theoretic reasoning, we hypothesize that self-rankings are informative because authors possess unique understanding of their work's conceptual depth and long-term promise. To test this hypothesis, we conducted a large-scale experiment at a leading AI conference, where 1,342 researchers self-ranked their 2,592 submissions by perceived quality. Tracking outcomes over more than a year, we found that papers ranked highest by their authors received twice as many citations as their lowest-ranked counterparts; self-rankings were especially effective at identifying highly cited papers (those with over 150 citations). Moreover, we showed that self-rankings outperformed peer review scores in predicting future citation counts. Our results remained robust after accounting for confounders such as preprint posting time and self-citations. Together, these findings demonstrate that authors' self-rankings provide a reliable and valuable complement to peer review for identifying and elevating high-impact research in AI.
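The abstract's central comparison, whether self-rankings or peer-review scores better predict future citations, can be illustrated with a small sketch. This is a purely illustrative example on made-up numbers (not data from the paper), using Spearman rank correlation, a standard bibliometric choice, implemented in plain Python:

```python
# Illustrative sketch on synthetic data: compare how well author
# self-rankings and peer-review scores predict eventual citations,
# using Spearman rank correlation. All numbers are hypothetical.

def ranks(values):
    """Assign 1-based average ranks to values (ties get the mean rank)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Hypothetical data: self_rank 1 = the author's top-ranked submission.
self_rank    = [1, 2, 3, 1, 2, 1, 3, 2]          # lower is better
review_score = [6, 5, 5, 7, 4, 8, 3, 6]          # higher is better
citations    = [180, 40, 12, 90, 35, 210, 8, 55]

# Negate self_rank so "better" points the same way for both predictors.
r_self   = spearman([-r for r in self_rank], citations)
r_review = spearman(review_score, citations)
print(f"self-ranking vs citations: rho = {r_self:.2f}")
print(f"review score vs citations: rho = {r_review:.2f}")
```

Comparing the two correlations on real outcome data is one simple way to operationalize the paper's claim that self-rankings carry predictive signal beyond review scores.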
Related papers
- APRES: An Agentic Paper Revision and Evaluation System [44.44345338738518]
The primary way scientists communicate their work and receive feedback from the community is through peer review. In this paper, we introduce APRES, a novel method powered by Large Language Models (LLMs) that updates a scientific paper's text based on an evaluation rubric. Our automated method discovers a rubric that is highly predictive of future citation counts and integrates it with APRES in an automated system that revises papers to enhance their quality and impact.
arXiv Detail & Related papers (2026-03-03T16:29:13Z) - Can Deep Research Agents Find and Organize? Evaluating the Synthesis Gap with Expert Taxonomies [57.11324429385405]
We introduce TaxoBench, a diagnostic benchmark derived from 72 computer science surveys. We manually extract expert-authored taxonomy trees containing 3,815 precisely categorized citations as ground truth. The best agent recalls only 20.9% of expert-selected papers, and even with perfect input, the best model achieves only 0.31 ARI in organization.
arXiv Detail & Related papers (2026-01-18T11:57:09Z) - The AI Imperative: Scaling High-Quality Peer Review in Machine Learning [49.87236114682497]
We argue that AI-assisted peer review must become an urgent research and infrastructure priority. We propose specific roles for AI in enhancing factual verification, guiding reviewer performance, assisting authors in quality improvement, and supporting ACs in decision-making.
arXiv Detail & Related papers (2025-06-09T18:37:14Z) - The ICML 2023 Ranking Experiment: Examining Author Self-Assessment in ML/AI Peer Review [49.43514488610211]
Author-provided rankings could be leveraged to improve peer review processes at machine learning conferences. We focus on the Isotonic Mechanism, which calibrates raw review scores using the author-provided rankings. We propose several cautious, low-risk applications of the Isotonic Mechanism and author-provided rankings in peer review.
arXiv Detail & Related papers (2024-08-24T01:51:23Z) - ResearchAgent: Iterative Research Idea Generation over Scientific Literature with Large Language Models [56.08917291606421]
ResearchAgent is an AI-based system for ideation and operationalization of novel work. ResearchAgent automatically defines novel problems, proposes methods, and designs experiments, while iteratively refining them. We experimentally validate our ResearchAgent on scientific publications across multiple disciplines.
arXiv Detail & Related papers (2024-04-11T13:36:29Z) - Position: AI/ML Influencers Have a Place in the Academic Process [82.2069685579588]
We investigate the role of social media influencers in enhancing the visibility of machine learning research.
We have compiled a comprehensive dataset of over 8,000 papers, spanning tweets from December 2018 to October 2023.
Our statistical and causal inference analysis reveals a significant increase in citations for papers endorsed by these influencers.
arXiv Detail & Related papers (2024-01-24T20:05:49Z) - Fusion of the Power from Citations: Enhance your Influence by Integrating Information from References [3.607567777043649]
This study aims to formulate the prediction problem to identify whether one paper can increase scholars' influence or not.
By applying the framework in this work, scholars can identify whether their papers can improve their influence in the future.
arXiv Detail & Related papers (2023-10-27T19:51:44Z) - How do Authors' Perceptions of their Papers Compare with Co-authors' Perceptions and Peer-review Decisions? [87.00095008723181]
Authors overestimate the acceptance probability of their papers roughly three-fold.
Female authors exhibit a marginally higher (statistically significant) miscalibration than male authors.
At least 30% of respondents of both accepted and rejected papers said that their perception of their own paper improved after the review process.
arXiv Detail & Related papers (2022-11-22T15:59:30Z) - Cracking Double-Blind Review: Authorship Attribution with Deep Learning [43.483063713471935]
We propose a transformer-based, neural-network architecture to attribute an anonymous manuscript to an author.
We leverage all research papers publicly available on arXiv amounting to over 2 million manuscripts.
Our method achieves an unprecedented authorship attribution accuracy, where up to 73% of papers are attributed correctly.
arXiv Detail & Related papers (2022-11-14T15:50:24Z) - A Measure of Research Taste [91.3755431537592]
We present a citation-based measure that rewards both productivity and taste.
The presented measure, CAP, balances the impact of publications and their quantity.
We analyze the characteristics of CAP for highly-cited researchers in biology, computer science, economics, and physics.
arXiv Detail & Related papers (2021-05-17T18:01:47Z) - Early Indicators of Scientific Impact: Predicting Citations with Altmetrics [0.0]
We use altmetrics to predict the short-term and long-term citations that a scholarly publication could receive.
We build various classification and regression models and evaluate their performance, finding neural networks and ensemble models to perform best for these tasks.
arXiv Detail & Related papers (2020-12-25T16:25:07Z) - Characterising authors on the extent of their paper acceptance: A case study of the Journal of High Energy Physics [4.402336973466853]
We investigate the profile and peer review text of authors whose papers almost always get accepted at a venue.
Authors with a high acceptance rate are likely to have a high number of citations, a high $h$-index, and a large number of collaborators.
arXiv Detail & Related papers (2020-06-12T03:26:25Z)
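One of the related papers above centers on the Isotonic Mechanism, which calibrates raw review scores so that they respect the author's self-ranking. A minimal sketch of the underlying idea, assuming scores are listed best-ranked paper first and using the standard least-squares isotonic regression via the pool-adjacent-violators algorithm (this is an illustration of the technique, not the ICML 2023 implementation):

```python
# Sketch of the Isotonic Mechanism idea: project one author's raw
# review scores, ordered from their top-ranked paper to their
# lowest-ranked, onto the nearest non-increasing sequence (in the
# least-squares sense) with the pool-adjacent-violators algorithm.

def isotonic_nonincreasing(scores):
    """L2 projection of `scores` onto non-increasing sequences (PAVA)."""
    # Each block holds [sum, count]; merge adjacent blocks whose means
    # violate the non-increasing constraint.
    blocks = []
    for s in scores:
        blocks.append([float(s), 1])
        while (len(blocks) > 1
               and blocks[-2][0] / blocks[-2][1] < blocks[-1][0] / blocks[-1][1]):
            total, count = blocks.pop()
            blocks[-1][0] += total
            blocks[-1][1] += count
    out = []
    for total, count in blocks:
        out.extend([total / count] * count)
    return out

# Hypothetical raw scores, ordered by the author's self-ranking (best first).
raw = [5.0, 6.5, 4.0, 4.5]
calibrated = isotonic_nonincreasing(raw)
print(calibrated)  # [5.75, 5.75, 4.25, 4.25]
```

Note that the projection preserves the total of the raw scores while forcing the calibrated sequence to agree with the author's stated ordering, which is what makes truthful ranking an attractive strategy in the game-theoretic analysis.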
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.