Against racing to AGI: Cooperation, deterrence, and catastrophic risks
- URL: http://arxiv.org/abs/2507.21839v1
- Date: Tue, 29 Jul 2025 14:17:08 GMT
- Title: Against racing to AGI: Cooperation, deterrence, and catastrophic risks
- Authors: Leonard Dung, Max Hellrigel-Holderbaum
- Abstract summary: AGI Racing is the view that it is in the self-interest of major actors in AI development, especially powerful nations, to accelerate their frontier AI development. We argue against AGI Racing.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: AGI Racing is the view that it is in the self-interest of major actors in AI development, especially powerful nations, to accelerate their frontier AI development to build highly capable AI, especially artificial general intelligence (AGI), before competitors have a chance. We argue against AGI Racing. First, the downsides of racing to AGI are much higher than portrayed by this view. Racing to AGI would substantially increase catastrophic risks from AI, including nuclear instability, and undermine the prospects that technical AI safety research will be effective. Second, the expected benefits of racing may be lower than proponents of AGI Racing hold. In particular, it is questionable whether winning the race enables complete domination over losers. Third, international cooperation and coordination, and perhaps carefully crafted deterrence measures, constitute viable alternatives to racing to AGI which have much smaller risks and promise to deliver most of the benefits that racing to AGI is supposed to provide. Hence, racing to AGI is not in anyone's self-interest, as other actions, particularly incentivizing and seeking international cooperation around AI issues, are preferable.
Related papers
- The Singapore Consensus on Global AI Safety Research Priorities
"2025 Singapore Conference on AI (SCAI): International Scientific Exchange on AI Safety" aimed to support research in this space.<n>Report builds on the International AI Safety Report chaired by Yoshua Bengio and backed by 33 governments.<n>Report organises AI safety research domains into three types: challenges with creating trustworthy AI systems (Development), challenges with evaluating their risks (Assessment) and challenges with monitoring and intervening after deployment (Control)
arXiv Detail & Related papers (2025-06-25T17:59:50Z) - Exploiting AI for Attacks: On the Interplay between Adversarial AI and Offensive AI [18.178555463870214]
This article explores two emerging AI-related threats and the interplay between them: AI as a target of attacks ('Adversarial AI') and AI as a means to launch attacks on any target ('Offensive AI').
arXiv Detail & Related papers (2025-06-14T14:21:01Z) - AI Governance to Avoid Extinction: The Strategic Landscape and Actionable Research Questions [2.07180164747172]
Humanity appears to be on course to soon develop AI systems that substantially outperform human experts. We believe the default trajectory has a high likelihood of catastrophe, including human extinction. Risks come from failure to control powerful AI systems, misuse of AI by malicious rogue actors, war between great powers, and authoritarian lock-in.
arXiv Detail & Related papers (2025-05-07T17:35:36Z) - Superintelligence Strategy: Expert Version [64.7113737051525]
Destabilizing AI developments could raise the odds of great-power conflict. Superintelligence, AI vastly better than humans at nearly all cognitive tasks, is now anticipated by AI researchers. We introduce the concept of Mutual Assured AI Malfunction.
arXiv Detail & Related papers (2025-03-07T17:53:24Z) - Fair Play in the Fast Lane: Integrating Sportsmanship into Autonomous Racing Systems [44.52724799426566]
This paper introduces a bi-level game-theoretic framework to integrate sportsmanship (SPS) into versus racing. At the high level, racing intentions are modeled as a Stackelberg game, where Monte Carlo Tree Search (MCTS) is employed to derive optimal strategies. At the low level, vehicle interactions are formulated as a Generalized Nash Equilibrium Problem (GNEP), ensuring that all agents follow sportsmanship constraints while optimizing their trajectories. (A minimal sketch of the Stackelberg structure follows this entry.)
arXiv Detail & Related papers (2025-03-04T10:14:19Z) - Who's Driving? Game Theoretic Path Risk of AGI Development [0.0]
- Who's Driving? Game Theoretic Path Risk of AGI Development
Who controls the development of Artificial General Intelligence (AGI) might matter less than how we handle the fight for control itself. We formalize this "steering wheel problem": humanity's greatest near-term existential risk may stem not from misaligned AGI, but from the dynamics of competing to develop it. We present a game theoretic framework modeling AGI development dynamics and prove conditions for sustainable cooperative equilibria. (A toy example of such a condition is sketched after this entry.)
arXiv Detail & Related papers (2025-01-25T17:13:12Z) - DanZero+: Dominating the GuanDan Game through Reinforcement Learning [95.90682269990705]
- DanZero+: Dominating the GuanDan Game through Reinforcement Learning
We develop an AI program for an exceptionally complex and popular card game called GuanDan.
We first put forward an AI program named DanZero for this game.
In order to further enhance the AI's capabilities, we apply a policy-based reinforcement learning algorithm to GuanDan. (A generic sketch of the policy-gradient idea follows this entry.)
arXiv Detail & Related papers (2023-12-05T08:07:32Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
- Managing extreme AI risks amid rapid progress
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of it can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest. (A toy feedback-loop simulation follows this entry.)
arXiv Detail & Related papers (2023-04-16T11:22:59Z)