Are AI Capabilities Increasing Exponentially? A Competing Hypothesis
- URL: http://arxiv.org/abs/2602.04836v2
- Date: Fri, 06 Feb 2026 00:41:39 GMT
- Title: Are AI Capabilities Increasing Exponentially? A Competing Hypothesis
- Authors: Haosen Ge, Hamsa Bastani, Osbert Bastani
- Abstract summary: We argue that the data does not support exponential growth, even in shorter-term horizons. We propose a more complex model that decomposes AI capabilities into base and reasoning capabilities.
- Score: 26.116836335203725
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Rapidly increasing AI capabilities have substantial real-world consequences, ranging from AI safety concerns to labor market effects. The Model Evaluation & Threat Research (METR) report argues that AI capabilities have exhibited exponential growth since 2019. In this note, we argue that the data does not support exponential growth, even over shorter time horizons. Whereas the METR study claims that fitting sigmoid/logistic curves yields inflection points far in the future, we fit a sigmoid curve to their current data and find that the inflection point has already passed. In addition, we propose a more complex model that decomposes AI capabilities into base and reasoning capabilities, each with its own rate of improvement. We prove that this model supports our hypothesis that AI capabilities will exhibit an inflection point in the near future. Our goal is not to establish a rigorous forecast of our own, but to highlight the fragility of existing forecasts of exponential growth.
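To make the fitting step concrete, below is a minimal sketch of fitting a four-parameter logistic (sigmoid) curve to capability-versus-time data and reading off the inflection point. The data, parameterization, and initial guess are illustrative assumptions; this is not the METR dataset, and the authors' actual fitting procedure may differ.

```python
# Minimal sketch: fit a four-parameter logistic curve to synthetic
# capability-vs-time data and read off the inflection point.
# The data below is purely illustrative, NOT the METR dataset.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, L, k, t0, b):
    # b = baseline, L = span, k = growth rate, t0 = inflection point (year).
    return b + L / (1.0 + np.exp(-k * (t - t0)))

# Synthetic observations: model release year vs. a log-scale capability measure.
years = np.array([2019.5, 2020.5, 2021.5, 2022.5, 2023.0, 2023.5,
                  2024.0, 2024.5, 2025.0])
log_capability = np.array([-2.0, -1.6, -1.0, -0.2, 0.4, 0.9, 1.3, 1.5, 1.6])

# Fit the four parameters; p0 is a rough initial guess to help convergence.
params, _ = curve_fit(sigmoid, years, log_capability,
                      p0=[4.0, 1.0, 2023.0, -2.0], maxfev=10000)
L_hat, k_hat, t0_hat, b_hat = params

# For a logistic curve the inflection point is the fitted midpoint t0.
print(f"Estimated inflection point: {t0_hat:.2f}")
print(f"Inflection point already passed? {t0_hat < years.max()}")
```

Because a logistic curve's inflection point coincides with its fitted midpoint t0, whether the inflection lies before or after the latest observation can be read directly from the fit.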
Related papers
- From Passive Metric to Active Signal: The Evolving Role of Uncertainty Quantification in Large Language Models [77.04403907729738]
This survey charts the evolution of uncertainty from a passive diagnostic metric to an active control signal guiding real-time model behavior. We demonstrate how uncertainty is leveraged as an active control signal across three frontiers. This survey argues that mastering the new trend of uncertainty is essential for building the next generation of scalable, reliable, and trustworthy AI.
arXiv Detail & Related papers (2026-01-22T06:21:31Z)
- Current Agents Fail to Leverage World Model as Tool for Foresight [61.82522354207919]
Generative world models offer a promising remedy: agents could use them to foresee outcomes before acting. This paper empirically examines whether current agents can leverage such world models as tools to enhance their cognition.
arXiv Detail & Related papers (2026-01-07T13:15:23Z)
- Toward Quantitative Modeling of Cybersecurity Risks Due to AI Misuse [50.87630846876635]
We develop nine detailed cyber risk models. Each model decomposes attacks into steps using the MITRE ATT&CK framework. Individual estimates are aggregated through Monte Carlo simulation.
arXiv Detail & Related papers (2025-12-09T17:54:17Z)
- The wall confronting large language models [0.0]
We show that the scaling laws which determine the performance of large language models severely limit their ability to improve the uncertainty of their predictions. We argue that the very mechanism which fuels much of the learning power of LLMs might well be at the roots of their propensity to produce error pileup.
arXiv Detail & Related papers (2025-07-25T22:48:37Z)
- Meek Models Shall Inherit the Earth [1.9647223141071104]
The past decade has seen incredible scaling of AI systems by a few companies, leading to inequality in AI model performance. This paper argues that, contrary to prevailing intuition, the diminishing returns to compute scaling will lead to a convergence of AI model capabilities.
arXiv Detail & Related papers (2025-07-10T17:10:07Z)
- Shifting AI Efficiency From Model-Centric to Data-Centric Compression [67.45087283924732]
We argue that the focus of research for AI is shifting from model-centric compression to data-centric compression. Data-centric compression improves AI efficiency by directly compressing the volume of data processed during model training or inference. Our work aims to provide a novel perspective on AI efficiency, synthesize existing efforts, and catalyze innovation to address the challenges posed by ever-increasing context lengths.
arXiv Detail & Related papers (2025-05-25T13:51:17Z)
- Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI [67.58673784790375]
We argue that the 'bigger is better' AI paradigm is not only fragile scientifically, but comes with undesirable consequences. First, it is not sustainable, as, despite efficiency improvements, its compute demands increase faster than model performance. Second, it implies focusing on certain problems at the expense of others, leaving aside important applications, e.g. health, education, or the climate.
arXiv Detail & Related papers (2024-09-21T14:43:54Z)
- FIMBA: Evaluating the Robustness of AI in Genomics via Feature Importance Adversarial Attacks [0.0]
This paper demonstrates the vulnerability of AI models often utilized in downstream tasks on recognized public genomics datasets.
We undermine model robustness by deploying an attack that focuses on input transformation while mimicking the real data and confusing the model's decision-making.
Our empirical findings unequivocally demonstrate a decline in model performance, underscored by diminished accuracy and an upswing in false positives and false negatives.
arXiv Detail & Related papers (2024-01-19T12:04:31Z)
- The Impact of Generative Artificial Intelligence on Market Equilibrium: Evidence from a Natural Experiment [19.963531237647103]
Generative artificial intelligence (AI) exhibits the capability to generate creative content akin to human output with greater efficiency and reduced costs.
This paper empirically investigates the impact of generative AI on market equilibrium, in the context of China's leading art outsourcing platform.
Our analysis shows that the advent of generative AI led to a 64% reduction in average prices, yet it simultaneously spurred a 121% increase in order volume and a 56% increase in overall revenue.
arXiv Detail & Related papers (2023-11-13T04:31:53Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)