Trends in AI Supercomputers
- URL: http://arxiv.org/abs/2504.16026v2
- Date: Wed, 23 Apr 2025 20:08:26 GMT
- Title: Trends in AI Supercomputers
- Authors: Konstantin F. Pilz, James Sanders, Robi Rahman, Lennart Heim
- Abstract summary: The computational performance of AI supercomputers has doubled every nine months. If observed trends continue, the leading AI supercomputer in 2030 will achieve $2\times10^{22}$ 16-bit FLOP/s, use two million AI chips, have a hardware cost of $200 billion, and require 9 GW of power. Globally, the United States accounts for about 75% of total performance in our dataset, with China in second place at 15%.
- Score: 0.5492530316344587
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Frontier AI development relies on powerful AI supercomputers, yet analysis of these systems is limited. We create a dataset of 500 AI supercomputers from 2019 to 2025 and analyze key trends in performance, power needs, hardware cost, ownership, and global distribution. We find that the computational performance of AI supercomputers has doubled every nine months, while hardware acquisition cost and power needs both doubled every year. The leading system in March 2025, xAI's Colossus, used 200,000 AI chips, had a hardware cost of $7B, and required 300 MW of power, as much as 250,000 households. As AI supercomputers evolved from tools for science to industrial machines, companies rapidly expanded their share of total AI supercomputer performance, while the share of governments and academia diminished. Globally, the United States accounts for about 75% of total performance in our dataset, with China in second place at 15%. If the observed trends continue, the leading AI supercomputer in 2030 will achieve $2\times10^{22}$ 16-bit FLOP/s, use two million AI chips, have a hardware cost of $200 billion, and require 9 GW of power. Our analysis provides visibility into the AI supercomputer landscape, allowing policymakers to assess key AI trends like resource needs, ownership, and national competitiveness.
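As a rough check on the abstract's 2030 figures, the following is a minimal sketch (not from the paper) that compounds the reported doubling times from the March 2025 leader, xAI's Colossus. The starting performance of roughly 2e20 16-bit FLOP/s is an assumption (200,000 chips at about 1e15 FLOP/s per chip), not a number stated in the abstract; the cost and power starting points are taken from the abstract.

```python
# Sketch of the 2030 extrapolation from the March 2025 leading system.
# Doubling times come from the abstract; the ~2e20 FLOP/s starting
# performance is an assumption (200,000 chips x ~1e15 FLOP/s per chip).

def project(value: float, years: float, doubling_time_years: float) -> float:
    """Compound exponential growth: double `value` every `doubling_time_years`."""
    return value * 2 ** (years / doubling_time_years)

HORIZON = 5.0  # March 2025 -> 2030

perf = project(2e20, HORIZON, 9 / 12)   # performance doubles every 9 months
cost = project(7e9, HORIZON, 1.0)       # hardware cost doubles every year
power = project(300e6, HORIZON, 1.0)    # power (in watts) doubles every year

print(f"performance ~ {perf:.1e} FLOP/s")     # ~2e22, as in the abstract
print(f"hardware cost ~ ${cost / 1e9:.0f}B")  # ~$224B, abstract rounds to $200B
print(f"power ~ {power / 1e9:.1f} GW")        # ~9.6 GW, abstract rounds to 9 GW
```

The small gaps between these compounded values and the abstract's figures are consistent with the abstract reporting rounded numbers.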
Related papers
- Real-World Gaps in AI Governance Research [0.0]
Drawing on 1,178 safety and reliability papers from 9,439 generative AI papers (January 2020 - March 2025), we compare research outputs of leading AI companies and universities.
We find that corporate AI research increasingly concentrates on pre-deployment areas -- model alignment and testing & evaluation.
Significant research gaps exist in high-risk deployment domains, including healthcare, finance, misinformation, persuasive and addictive features, hallucinations, and copyright.
arXiv Detail & Related papers (2025-04-30T20:44:42Z) - Superintelligence Strategy: Expert Version [64.7113737051525]
Destabilizing AI developments could raise the odds of great-power conflict. Superintelligence -- AI vastly better than humans at nearly all cognitive tasks -- is now anticipated by AI researchers. We introduce the concept of Mutual Assured AI Malfunction.
arXiv Detail & Related papers (2025-03-07T17:53:24Z) - Europe's AI Imperative - A Pragmatic Blueprint for Global Tech Leadership [16.711035492718356]
Europe is at a make-or-break moment in the global AI race, squeezed between the US and China. We present a sharp, doable strategy that builds upon Europe's strengths and closes gaps.
arXiv Detail & Related papers (2025-02-12T20:46:04Z) - Toward Cross-Layer Energy Optimizations in AI Systems [4.871463967255196]
With the pervasive usage of artificial intelligence (AI) and machine learning (ML) tools and techniques, their energy efficiency is likely to become the gating factor toward adoption.
This is because generative AI (GenAI) models are massive energy hogs.
Inference consumes even more energy, because a model trained once serves millions.
arXiv Detail & Related papers (2024-04-10T01:35:17Z) - Computing Power and the Governance of Artificial Intelligence [51.967584623262674]
Governments and companies have started to leverage compute as a means to govern AI.
Compute-based policies and technologies have the potential to assist in these areas, but there is significant variation in their readiness for implementation.
Naive or poorly scoped approaches to compute governance carry significant risks in areas like privacy, economic impacts, and centralization of power.
arXiv Detail & Related papers (2024-02-13T21:10:21Z) - On the Opportunities of Green Computing: A Survey [80.21955522431168]
Artificial Intelligence (AI) has achieved significant advancements in technology and research over several decades of development.
The need for high computing power brings higher carbon emissions and undermines research fairness.
To tackle the challenges of computing resources and environmental impact of AI, Green Computing has become a hot research topic.
arXiv Detail & Related papers (2023-11-01T11:16:41Z) - Artificial intelligence adoption in the physical sciences, natural sciences, life sciences, social sciences and the arts and humanities: A bibliometric analysis of research publications from 1960-2021 [73.06361680847708]
In 1960 14% of 333 research fields were related to AI (many in computer science), but this increased to over half of all research fields by 1972, over 80% by 1986 and over 98% in current times.
We conclude that the context of the current surge appears different, and that interdisciplinary AI application is likely to be sustained.
arXiv Detail & Related papers (2023-06-15T14:08:07Z) - Future Computer Systems and Networking Research in the Netherlands: A Manifesto [137.47124933818066]
We draw attention to CompSys as a vital part of ICT.
Each of the Top Sectors of the Dutch Economy, each route in the National Research Agenda, and each of the UN Sustainable Development Goals poses challenges that cannot be addressed without CompSys advances.
arXiv Detail & Related papers (2022-05-26T11:02:29Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - Advancing Computing's Foundation of US Industry & Society [1.443696537295348]
Underlying IT's impact are the dramatic improvements in computer hardware, which deliver performance that unlocks new capabilities.
Will we make the next AI leap without 100x better hardware?
This whitepaper argues for a multipronged effort to develop new computing approaches beyond Moore's Law.
arXiv Detail & Related papers (2021-01-04T23:40:45Z) - The De-democratization of AI: Deep Learning and the Compute Divide in Artificial Intelligence Research [0.2855485723554975]
Large technology firms and elite universities have increased participation in major AI conferences since deep learning's unanticipated rise in 2012.
The effect is concentrated among elite universities, which are ranked 1-50 in the QS World University Rankings.
This increased presence of firms and elite universities in AI research has crowded out mid-tier (QS ranked 201-300) and lower-tier (QS ranked 301-500) universities.
arXiv Detail & Related papers (2020-10-22T15:11:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.