The Economics of p(doom): Scenarios of Existential Risk and Economic Growth in the Age of Transformative AI
- URL: http://arxiv.org/abs/2503.07341v1
- Date: Mon, 10 Mar 2025 13:53:39 GMT
- Title: The Economics of p(doom): Scenarios of Existential Risk and Economic Growth in the Age of Transformative AI
- Authors: Jakub Growiec, Klaus Prettner
- Abstract summary: A key focus is the potential emergence of transformative AI (TAI). Discussed scenarios range from human extinction after a misaligned TAI takes over ("AI doom") to unprecedented economic growth and abundance ("post-scarcity").
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in artificial intelligence (AI) have led to a diverse set of predictions about its long-term impact on humanity. A central focus is the potential emergence of transformative AI (TAI), eventually capable of outperforming humans in all economically valuable tasks and fully automating labor. Discussed scenarios range from human extinction after a misaligned TAI takes over ("AI doom") to unprecedented economic growth and abundance ("post-scarcity"). However, the probabilities and implications of these scenarios remain highly uncertain. Here, we organize the various scenarios and evaluate their associated existential risks and economic outcomes in terms of aggregate welfare. Our analysis shows that even low-probability catastrophic outcomes justify large investments in AI safety and alignment research. We find that the optimizing representative individual would rationally allocate substantial resources to mitigate extinction risk; in some cases, she would prefer not to develop TAI at all. This result highlights that current global efforts in AI safety and alignment research are vastly insufficient relative to the scale and urgency of existential risks posed by TAI. Our findings therefore underscore the need for stronger safeguards to balance the potential economic benefits of TAI with the prevention of irreversible harm. Addressing these risks is crucial for steering technological progress toward sustainable human prosperity.
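The abstract's welfare argument can be made concrete with a toy calculation. The sketch below is a minimal illustration, not the authors' actual model: a representative individual with CRRA utility chooses what share of consumption to devote to AI safety, where spending lowers the probability of doom but also lowers consumption. The exponential risk-reduction technology, the utility level assigned to extinction, and all parameter values are assumptions chosen purely for illustration.

```python
# A minimal sketch of the welfare trade-off described above (not the authors'
# model). A representative individual picks a safety-spending share s that
# lowers p(doom) but also lowers consumption. All functional forms and
# parameter values here are illustrative assumptions.
import numpy as np

BETA, GAMMA, HORIZON = 0.99, 2.0, 200   # discount factor, risk aversion, periods
GROWTH = 0.10                            # assumed consumption growth under TAI
U_DOOM = -100.0                          # assumed welfare level of extinction

def crra(c):
    """CRRA period utility; bounded above for GAMMA > 1."""
    return (c ** (1 - GAMMA) - 1) / (1 - GAMMA)

def expected_welfare(s, p_doom, risk_elasticity=20.0):
    """Discounted expected welfare with a share s of output spent on safety.

    Assumed risk technology: p(s) = p_doom * exp(-risk_elasticity * s),
    i.e. safety spending reduces the doom probability exponentially.
    """
    p = p_doom * np.exp(-risk_elasticity * s)
    survival_path = sum(BETA ** t * crra((1 - s) * (1 + GROWTH) ** t)
                        for t in range(HORIZON))
    return (1 - p) * survival_path + p * U_DOOM

# Grid search over the safety share for several baseline doom probabilities.
shares = np.linspace(0.0, 0.5, 501)
for p0 in (0.01, 0.05, 0.20):
    best = max(shares, key=lambda s: expected_welfare(s, p0))
    print(f"baseline p(doom) = {p0:.0%} -> optimal safety share ~ {best:.1%}")
```

Under these assumptions the optimal safety share is substantial even at a 1% baseline p(doom), echoing the paper's conclusion that current safety investments are small relative to the stakes.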
Related papers
- AI Safety Should Prioritize the Future of Work [13.076075926681522]
Current efforts in AI safety prioritize filtering harmful content, preventing manipulation of human behavior, and eliminating existential risks in cybersecurity or biosecurity.
While pressing, this narrow focus overlooks critical human-centric considerations that shape the long-term trajectory of a society.
arXiv Detail & Related papers (2025-04-16T23:12:30Z)
- Fully Autonomous AI Agents Should Not be Developed [58.88624302082713]
This paper argues that fully autonomous AI agents should not be developed. In support of this position, we build from prior scientific literature and current product marketing to delineate different AI agent levels. Our analysis reveals that risks to people increase with the autonomy of a system.
arXiv Detail & Related papers (2025-02-04T19:00:06Z)
- Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI [67.58673784790375]
We argue that the 'bigger is better' AI paradigm is not only scientifically fragile but also comes with undesirable consequences.
First, it is not sustainable, as, despite efficiency improvements, its compute demands increase faster than model performance.
Second, it implies focusing on certain problems at the expense of others, leaving aside important applications, e.g. health, education, or the climate.
arXiv Detail & Related papers (2024-09-21T14:43:54Z)
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models serving as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- The recessionary pressures of generative AI: A threat to wellbeing [0.0]
Generative Artificial Intelligence (AI) stands as a transformative force that presents a paradox.
It offers unprecedented opportunities for productivity growth while potentially posing significant threats to economic stability and societal wellbeing.
This paper explores the conditions under which both may be true.
arXiv Detail & Related papers (2024-03-26T05:51:05Z)
- Control Risk for Potential Misuse of Artificial Intelligence in Science [85.91232985405554]
We aim to raise awareness of the dangers of AI misuse in science.
We highlight real-world examples of misuse in chemical science.
We propose a system called SciGuard to control misuse risks for AI models in science.
arXiv Detail & Related papers (2023-12-11T18:50:57Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Current and Near-Term AI as a Potential Existential Risk Factor [5.1806669555925975]
We problematise the notion that current and near-term artificial intelligence technologies have the potential to contribute to existential risk.
We propose the hypothesis that certain already-documented effects of AI can act as existential risk factors.
Our main contribution is an exposition of potential AI risk factors and the causal relationships between them.
arXiv Detail & Related papers (2022-09-21T18:56:14Z)
- How Do AI Timelines Affect Existential Risk? [0.0]
Delaying the creation of artificial superintelligence (ASI) could decrease total existential risk by giving humanity more time to work on the AI alignment problem.
However, since ASI could itself reduce most other existential risks, delaying its creation could also increase overall risk by prolonging humanity's exposure to them.
Other factors such as war and a hardware overhang could increase AI risk and cognitive enhancement could decrease AI risk.
arXiv Detail & Related papers (2022-08-30T15:49:11Z)
- AI Research Considerations for Human Existential Safety (ARCHES) [6.40842967242078]
In negative terms, we ask what existential risks humanity might face from AI development in the next century.
A key property of hypothetical AI technologies, called "prepotence", is introduced.
A set of contemporary research directions is then examined for their potential benefit to existential safety.
arXiv Detail & Related papers (2020-05-30T02:05:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.