ICLR Points: How Many ICLR Publications Is One Paper in Each Area?
- URL: http://arxiv.org/abs/2503.16623v3
- Date: Wed, 26 Mar 2025 13:57:21 GMT
- Title: ICLR Points: How Many ICLR Publications Is One Paper in Each Area?
- Authors: Zhongtang Luo
- Abstract summary: We introduce the concept of ICLR points, defined as the average effort required to produce one publication at top-tier machine learning conferences. We quantitatively measure and compare the average publication effort across 27 computer science sub-areas. Our analysis reveals significant differences in average publication effort, validating anecdotal perceptions.
- Score: 0.8702432681310401
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Scientific publications significantly impact academic-related decisions in computer science, where top-tier conferences are particularly influential. However, the effort required to produce a publication differs drastically across subfields. While existing citation-based studies compare venues within areas, cross-area comparisons remain challenging due to differing publication volumes and citation practices. To address this gap, we introduce the concept of ICLR points, defined as the average effort required to produce one publication at top-tier machine learning conferences such as ICLR, ICML, and NeurIPS. Leveraging comprehensive publication data from DBLP (2019--2023) and faculty information from CSRankings, we quantitatively measure and compare the average publication effort across 27 computer science sub-areas. Our analysis reveals significant differences in average publication effort, validating anecdotal perceptions: systems conferences generally require more effort per publication than AI conferences. We further demonstrate the utility of the ICLR points metric by evaluating the publication records of universities, current faculty, and recent faculty candidates. Our findings highlight how using this metric enables more meaningful cross-area comparisons in academic evaluation processes. Lastly, we discuss the metric's limitations and caution against its misuse, emphasizing the necessity of holistic assessment criteria beyond publication metrics alone.
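The abstract defines ICLR points as a ratio of effort to output but does not spell out a formula here. As a rough, hypothetical illustration of how such a metric could be computed from DBLP-style publication counts and CSRankings-style faculty counts, consider the minimal sketch below; all variable names and numbers are made-up assumptions for illustration, and the normalization (faculty-years per paper, with ML set to 1) is one plausible reading of the abstract rather than the paper's stated method.

```python
# A minimal, hypothetical sketch of an "ICLR points"-style computation.
# Assumed inputs (NOT the paper's actual data): per-area counts of top-venue
# publications over a fixed window and per-area faculty counts, e.g. derived
# from DBLP (2019-2023) and CSRankings.

YEARS = 5  # the DBLP window 2019-2023 mentioned in the abstract

# area -> (top-venue publications in the window, faculty working in the area)
area_stats = {
    "machine_learning": (30_000, 1_200),  # ICLR + ICML + NeurIPS (made-up numbers)
    "operating_systems": (900, 350),
    "security": (4_000, 600),
}

def effort_per_paper(pubs: int, faculty: int, years: int = YEARS) -> float:
    """Average faculty-years invested per top-venue publication in an area."""
    return faculty * years / pubs

# Normalize so one average ML-conference paper costs exactly 1 ICLR point.
ml_effort = effort_per_paper(*area_stats["machine_learning"])

iclr_points = {
    area: effort_per_paper(pubs, faculty) / ml_effort
    for area, (pubs, faculty) in area_stats.items()
}

for area, points in sorted(iclr_points.items(), key=lambda kv: -kv[1]):
    print(f"{area:>20}: one paper ~ {points:.2f} ICLR points")
```

With these placeholder numbers, systems papers come out several times more expensive than ML papers, mirroring the abstract's qualitative finding; the real values depend entirely on the actual DBLP and CSRankings aggregates.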
Related papers
- Analysis of the ICML 2023 Ranking Data: Can Authors' Opinions of Their Own Papers Assist Peer Review in Machine Learning? [52.00419656272129]
We conducted an experiment during the 2023 International Conference on Machine Learning (ICML).
We received 1,342 rankings, each from a distinct author, pertaining to 2,592 submissions.
We focus on the Isotonic Mechanism, which calibrates raw review scores using author-provided rankings (a toy sketch of this calibration appears after this list).
arXiv Detail & Related papers (2024-08-24T01:51:23Z)
- Mapping the Increasing Use of LLMs in Scientific Papers [99.67983375899719]
We conduct the first systematic, large-scale analysis across 950,965 papers published between January 2020 and February 2024 on the arXiv, bioRxiv, and Nature portfolio journals.
Our findings reveal a steady increase in LLM usage, with the largest and fastest growth observed in Computer Science papers.
arXiv Detail & Related papers (2024-04-01T17:45:15Z)
- Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews [51.453135368388686]
We present an approach for estimating the fraction of text in a large corpus that is likely to be substantially modified or produced by a large language model (LLM).
Our maximum likelihood model leverages expert-written and AI-generated reference texts to accurately and efficiently examine real-world LLM use at the corpus level.
arXiv Detail & Related papers (2024-03-11T21:51:39Z)
- Position: AI/ML Influencers Have a Place in the Academic Process [82.2069685579588]
We investigate the role of social media influencers in enhancing the visibility of machine learning research.
We have compiled a comprehensive dataset of over 8,000 papers, spanning tweets from December 2018 to October 2023.
Our statistical and causal inference analysis reveals a significant increase in citations for papers endorsed by these influencers.
arXiv Detail & Related papers (2024-01-24T20:05:49Z)
- Analyzing the Impact of Companies on AI Research Based on Publications [1.450405446885067]
We compare academic- and company-authored AI publications published in the last decade.
We find that the citation count an individual publication receives is significantly higher when it is (co-)authored by a company.
arXiv Detail & Related papers (2023-10-31T13:27:04Z)
- A Comprehensive Study of Groundbreaking Machine Learning Research: Analyzing highly cited and impactful publications across six decades [1.6442870218029522]
Machine learning (ML) has emerged as a prominent field of research in computer science and other related fields.
It is crucial to understand the landscape of highly cited publications to identify key trends, influential authors, and significant contributions made thus far.
arXiv Detail & Related papers (2023-08-01T21:43:22Z)
- Analyzing the State of Computer Science Research with the DBLP Discovery Dataset [0.0]
We conduct a scientometric analysis to uncover the implicit patterns hidden in CS metadata.
We introduce the CS-Insights system, an interactive web application to analyze CS publications with various dashboards, filters, and visualizations.
Both D3 and CS-Insights are open-access, and CS-Insights can be easily adapted to other datasets in the future.
arXiv Detail & Related papers (2022-12-01T16:27:42Z)
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LLMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z)
- Revise and Resubmit: An Intertextual Model of Text-based Collaboration in Peer Review [52.359007622096684]
Peer review is a key component of the publishing process in most fields of science.
Existing NLP studies focus on the analysis of individual texts, whereas editorial assistance often requires modeling interactions between pairs of texts.
arXiv Detail & Related papers (2022-04-22T16:39:38Z)
- Topic Space Trajectories: A case study on machine learning literature [0.0]
We present topic space trajectories, a structure that allows for the comprehensible tracking of research topics.
We show the applicability of our approach on a publication corpus spanning 50 years of machine learning research from 32 publication venues.
Our novel analysis method may be employed for paper classification, for the prediction of future research topics, and for the recommendation of fitting conferences and journals for submitting unpublished work.
arXiv Detail & Related papers (2020-10-23T10:53:42Z)
- A Correspondence Analysis Framework for Author-Conference Recommendations [2.1055643409860743]
We use Correspondence Analysis (CA) to derive appropriate relationships between the entities in question, such as conferences and papers.
Our models show promising results when compared with existing methods such as content-based filtering, collaborative filtering and hybrid filtering.
arXiv Detail & Related papers (2020-01-08T18:52:39Z)
- The Demise of Single-Authored Publications in Computer Science: A Citation Network Analysis [0.0]
I analyze the DBLP database to study the role of single-author publications in the computer science literature between 1940 and 2019.
I examine the demographics and reception by computing the population fraction, citation statistics, and scores of single-author publications over the years.
arXiv Detail & Related papers (2020-01-02T07:47:44Z)
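A note on the Isotonic Mechanism referenced in the first related paper above: at its core, it returns the isotonic regression of raw review scores under the order constraint implied by the author's self-reported ranking. The toy sketch below is a simplified reading under stated assumptions (one aggregate score per paper, ranking given best-first; the function name is hypothetical), implemented with the pool-adjacent-violators algorithm.

```python
def isotonic_calibrate(scores, ranking):
    """Toy Isotonic Mechanism: project raw scores onto the non-increasing
    order implied by an author's best-first ranking of their own papers.

    scores  -- raw review score per paper id
    ranking -- paper ids, best first (the author's self-reported ranking)
    """
    y = [scores[pid] for pid in ranking]
    # Pool Adjacent Violators: merge adjacent blocks until block means are
    # non-increasing; this is the L2 projection onto that order constraint.
    blocks = []  # each block is [sum, count]
    for v in y:
        blocks.append([v, 1])
        # a violation occurs when the previous block mean < current block mean
        while len(blocks) > 1 and blocks[-2][0] * blocks[-1][1] < blocks[-1][0] * blocks[-2][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    calibrated = []
    for s, c in blocks:
        calibrated.extend([s / c] * c)
    return dict(zip(ranking, calibrated))

# Example: the author ranks "b" above "a" above "c", but raw scores disagree
# on a vs. b; the violating pair is pooled to its mean.
print(isotonic_calibrate({"a": 6.0, "b": 5.0, "c": 4.0}, ["b", "a", "c"]))
# -> {'b': 5.5, 'a': 5.5, 'c': 4.0}
```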
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.