Has ACL Lost Its Crown? A Decade-Long Quantitative Analysis of Scale and Impact Across Leading AI Conferences
- URL: http://arxiv.org/abs/2512.04448v1
- Date: Thu, 04 Dec 2025 04:39:40 GMT
- Title: Has ACL Lost Its Crown? A Decade-Long Quantitative Analysis of Scale and Impact Across Leading AI Conferences
- Authors: Jianglin Ma, Ben Yao, Xiang Li, Yazhou Zhang
- Abstract summary: We conduct a ten-year empirical study spanning seven leading conferences. We build a four-dimensional bibliometric framework covering conference scale, core citation statistics, impact dispersion, and cross-venue and journal influence. We propose a metric, Quality-Quantity Elasticity, which measures the elasticity of citation growth relative to acceptance growth.
- Score: 8.004720323661601
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The recent surge of language models has rapidly expanded NLP research, driving an exponential rise in submissions and acceptances at major conferences. Yet this growth has been shadowed by escalating concerns over conference quality, e.g., plagiarism, reviewer inexperience, and collusive bidding. Existing studies, however, rely largely on qualitative accounts (e.g., expert interviews and social media discussions), lacking longitudinal empirical evidence. To fill this gap, we conduct a ten-year empirical study spanning seven leading conferences. We build a four-dimensional bibliometric framework covering conference scale, core citation statistics, impact dispersion, and cross-venue and journal influence. Notably, we further propose a metric, Quality-Quantity Elasticity, which measures the elasticity of citation growth relative to acceptance growth. Our findings show that ML venues sustain dominant and stable impact, NLP venues undergo widening stratification with mixed expansion efficiency, and AI venues exhibit structural decline. This study provides the first decade-long, cross-venue empirical evidence on the evolution of major conferences.
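The abstract does not give the exact formula for Quality-Quantity Elasticity; a minimal sketch, assuming the standard economic definition of elasticity (the ratio of relative citation growth to relative acceptance growth), might look like this. The function name and inputs are illustrative, not taken from the paper.

```python
def quality_quantity_elasticity(citations_prev, citations_curr,
                                accepted_prev, accepted_curr):
    """Elasticity of citation growth relative to acceptance growth.

    A hypothetical reading of the paper's QQE metric: the relative
    change in total citations divided by the relative change in the
    number of accepted papers between two periods.
    """
    citation_growth = (citations_curr - citations_prev) / citations_prev
    acceptance_growth = (accepted_curr - accepted_prev) / accepted_prev
    return citation_growth / acceptance_growth

# Example: acceptances grow 50% while citations grow only 25%,
# giving an elasticity of 0.5 -- quality gains lag quantity growth.
print(quality_quantity_elasticity(10000, 12500, 2000, 3000))
```

Under this reading, an elasticity above 1 would indicate that a venue's impact is growing faster than its acceptance volume, while a value below 1 would suggest dilution as the venue expands.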
Related papers
- Are AI Capabilities Increasing Exponentially? A Competing Hypothesis [26.116836335203725]
We argue that the data does not support exponential growth, even over shorter-term horizons. We propose a more complex model that decomposes AI capabilities into base and reasoning capabilities.
arXiv Detail & Related papers (2026-02-04T18:28:49Z) - Does GenAI Rewrite How We Write? An Empirical Study on Two-Million Preprints [15.070885964897734]
Generative large language models (LLMs) introduce a further potential disruption by altering how manuscripts are written. This paper addresses the gap through a large-scale analysis of more than 2.1 million preprints spanning 2016--2025 (115 months) across four major repositories. Our findings reveal that LLMs have accelerated submission and revision cycles, modestly increased linguistic complexity, and disproportionately expanded AI-related topics.
arXiv Detail & Related papers (2025-10-18T01:37:40Z) - Beyond Memorization: Reasoning-Driven Synthesis as a Mitigation Strategy Against Benchmark Contamination [77.69093448529455]
We present an empirical study using an infinitely scalable framework to synthesize research-level QA directly from arXiv papers. We observe no significant performance decay near knowledge cutoff dates for models of various sizes, developers, and release dates. We hypothesize that the multi-step reasoning required by our synthesis pipeline adds complexity that goes beyond shallow memorization.
arXiv Detail & Related papers (2025-08-26T16:41:37Z) - Position: The Current AI Conference Model is Unsustainable! Diagnosing the Crisis of Centralized AI Conference [40.15553977578515]
This paper offers a data-driven diagnosis of a structural crisis that threatens the foundational goals of scientific dissemination, equity, and community well-being. We identify four key areas of strain, among them: (1) scientifically, with per-author publication rates more than doubling over the past decade to over 4.5 papers annually; (2) environmentally, with the carbon footprint of a single conference exceeding the daily emissions of its host city; and (3) psychologically, with 71% of online community discourse reflecting negative sentiment and 35% referencing mental health concerns. In response, we propose the Community-Federated Conference (CFC) model, which separates peer review, presentation,
arXiv Detail & Related papers (2025-08-06T16:08:27Z) - Shifting AI Efficiency From Model-Centric to Data-Centric Compression [67.45087283924732]
We argue that the focus of research for AI is shifting from model-centric compression to data-centric compression. Data-centric compression improves AI efficiency by directly compressing the volume of data processed during model training or inference. Our work aims to provide a novel perspective on AI efficiency, synthesize existing efforts, and catalyze innovation to address the challenges posed by ever-increasing context lengths.
arXiv Detail & Related papers (2025-05-25T13:51:17Z) - Long-term Causal Inference via Modeling Sequential Latent Confounding [79.18609016557]
Long-term causal inference is an important but challenging problem across various scientific domains. We propose an approach based on the Conditional Additive Equi-Confounding Bias (CAECB) assumption. Our proposed assumption states a functional relationship between sequential confounding biases across temporal short-term outcomes.
arXiv Detail & Related papers (2025-02-26T09:56:56Z) - Optimizing Research Portfolio For Semantic Impact [55.2480439325792]
Citation metrics are widely used to assess academic impact but suffer from social biases. We introduce rXiv Semantic Impact (XSI), a novel framework that predicts research impact. XSI tracks the evolution of research concepts in the academic knowledge graph.
arXiv Detail & Related papers (2025-02-19T17:44:13Z) - Causal Claims in Economics [0.0]
We analyze over 44,000 NBER and CEPR working papers from 1980 to 2023 using a custom language model to construct knowledge graphs. We document a substantial rise in the share of causal claims, from roughly 4% in 1990 to nearly 28% in 2020, reflecting the growing influence of the "credibility revolution". We find that causal narrative complexity strongly predicts both publication in top-5 journals and higher citation counts, whereas non-causal complexity tends to be uncorrelated or negatively associated with these outcomes.
arXiv Detail & Related papers (2025-01-12T17:03:45Z) - Discovering and Reasoning of Causality in the Hidden World with Large Language Models [109.62442253177376]
We develop a new framework termed Causal representatiOn AssistanT (COAT) to propose useful measured variables for causal discovery. Instead of directly inferring causality with large language models (LLMs), COAT constructs feedback from intermediate causal discovery results to LLMs to refine the proposed variables.
arXiv Detail & Related papers (2024-02-06T12:18:54Z) - Position: AI/ML Influencers Have a Place in the Academic Process [82.2069685579588]
We investigate the role of social media influencers in enhancing the visibility of machine learning research.
We have compiled a comprehensive dataset of over 8,000 papers, spanning tweets from December 2018 to October 2023.
Our statistical and causal inference analysis reveals a significant increase in citations for papers endorsed by these influencers.
arXiv Detail & Related papers (2024-01-24T20:05:49Z) - Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z) - Variational Causal Networks: Approximate Bayesian Inference over Causal Structures [132.74509389517203]
We introduce a parametric variational family modelled by an autoregressive distribution over the space of discrete DAGs.
In experiments, we demonstrate that the proposed variational posterior is able to provide a good approximation of the true posterior.
arXiv Detail & Related papers (2021-06-14T17:52:49Z) - Does the Venue of Scientific Conferences Leverage their Impact? A Large Scale Study on Computer Science Conferences [2.8388425545775386]
We conducted a large-scale analysis of data extracted from 3,838 Computer Science conference series and over 2.5 million papers spanning more than 30 years of research. To quantify the "touristicity" of a venue, we used indicators such as the size of the Wikipedia page for the city hosting the venue and indexes from reports of the World Economic Forum. Moreover, the near-linear correlation with the Tourist Service Infrastructure index attests to the particular importance of tourist and accommodation facilities in a given country.
arXiv Detail & Related papers (2021-05-31T09:51:39Z)