The economic alignment problem of artificial intelligence
- URL: http://arxiv.org/abs/2602.21843v1
- Date: Wed, 25 Feb 2026 12:22:46 GMT
- Title: The economic alignment problem of artificial intelligence
- Authors: Daniel W. O'Neill, Stefano Vrizzi, Noemi Luna Carmeno, Felix Creutzig, Jefim Vogel
- Abstract summary: We argue that developing advanced AI inside a growth-based system is likely to increase social, environmental, and existential risks. We show that post-growth research offers concepts and policies that could substantially reduce AI risks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial intelligence (AI) is advancing exponentially and is likely to have profound impacts on human wellbeing, social equity, and environmental sustainability. Here we argue that the "alignment problem" in AI research is also an economic alignment problem, as developing advanced AI inside a growth-based system is likely to increase social, environmental, and existential risks. We show that post-growth research offers concepts and policies that could substantially reduce AI risks, such as by replacing optimisation with satisficing, using the Doughnut of social and planetary boundaries to guide development, and curbing systemic rebound with resource caps. We propose governance and business reforms that treat AI as a commons and prioritise tool-like autonomy-enhancing systems over agentic AI. Finally, we argue that the development of artificial general intelligence (AGI) may require a new economics, for which post-growth scholarship provides a strong foundation.
Related papers
- AI+HW 2035: Shaping the Next Decade [135.53570243498987]
Artificial intelligence (AI) and hardware (HW) are advancing at unprecedented rates, yet their trajectories have become inseparably intertwined. This vision paper lays out a 10-year roadmap for AI+HW co-design and co-development, spanning algorithms, architectures, systems, and sustainability. We identify key challenges and opportunities, candidly assess potential obstacles and pitfalls, and propose integrated solutions.
arXiv Detail & Related papers (2026-03-05T14:36:33Z) - Should AI Become an Intergenerational Civil Right? [2.7937298764423573]
We argue that access to AI should not be treated solely as a commercial service, but as a fundamental civil interest requiring explicit protection. We propose recognizing access to AI as an *Intergenerational Civil Right*, establishing a legal and ethical framework that safeguards present-day inclusion and the rights of future generations.
arXiv Detail & Related papers (2025-12-09T20:22:16Z) - The California Report on Frontier AI Policy [110.35302787349856]
Continued progress in frontier AI carries the potential for profound advances in scientific discovery, economic productivity, and broader social well-being. As the epicenter of global AI innovation, California has a unique opportunity to continue supporting developments in frontier AI. The report derives policy principles that can inform how California approaches the use, assessment, and governance of frontier AI.
arXiv Detail & Related papers (2025-06-17T23:33:21Z) - Open and Sustainable AI: challenges, opportunities and the road ahead in the life sciences (October 2025 -- Version 2) [49.142289900583705]
We review the increasing erosion of trust in AI research outputs, driven by poor reusability. We discuss the fragmented components of the AI ecosystem and the lack of guiding pathways to best support Open and Sustainable AI. Our work connects researchers with relevant AI resources, facilitating the implementation of sustainable, reusable, and transparent AI.
arXiv Detail & Related papers (2025-05-22T12:52:34Z) - Societal Adaptation to Advanced AI [1.2607853680700076]
Existing strategies for managing risks from advanced AI systems often focus on affecting what AI systems are developed and how they diffuse. We urge a complementary approach: increasing societal adaptation to advanced AI. We introduce a conceptual framework which helps identify adaptive interventions that avoid, defend against, and remedy potentially harmful uses of AI systems.
arXiv Detail & Related papers (2024-05-16T17:52:12Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - The Future of Fundamental Science Led by Generative Closed-Loop Artificial Intelligence [67.70415658080121]
Recent advances in machine learning and AI are disrupting technological innovation, product development, and society as a whole.
AI has contributed less to fundamental science, in part because large, high-quality data sets for scientific practice and model discovery are more difficult to access.
Here we explore and investigate aspects of an AI-driven, automated, closed-loop approach to scientific discovery.
arXiv Detail & Related papers (2023-07-09T21:16:56Z) - Human-AI Coevolution [48.74579595505374]
Coevolution AI is a process in which humans and AI algorithms continuously influence each other.
This paper introduces Coevolution AI as the cornerstone for a new field of study at the intersection between AI and complexity science.
arXiv Detail & Related papers (2023-06-23T18:10:54Z) - Artificial Intelligence for Real Sustainability? -- What is Artificial Intelligence and Can it Help with the Sustainability Transformation? [0.0]
This article briefly explains, classifies, and theorises AI technology.
It then politically contextualises that analysis in light of the sustainability discourse.
It argues that AI can play a small role in moving towards sustainable societies.
arXiv Detail & Related papers (2023-06-15T15:40:00Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can lead to the deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - A Survey on AI Sustainability: Emerging Trends on Learning Algorithms and Research Challenges [35.317637957059944]
We review major trends in machine learning approaches that can address the sustainability problem of AI.
We highlight the major limitations of existing studies and propose potential research challenges and directions for the development of the next generation of sustainable AI techniques.
arXiv Detail & Related papers (2022-05-08T09:38:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.