Divide-and-Conquer Dynamics in AI-Driven Disempowerment
- URL: http://arxiv.org/abs/2310.06009v2
- Date: Mon, 18 Dec 2023 18:58:45 GMT
- Title: Divide-and-Conquer Dynamics in AI-Driven Disempowerment
- Authors: Peter S. Park and Max Tegmark
- Abstract summary: We construct a game-theoretic model of conflict to study the causes and consequences of infighting between those who prioritize current harms and those who prioritize future harms.
Our model also helps explain why throughout history, stakeholders sharing a common threat have found it advantageous to unite against it.
- Score: 9.204894568267013
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: AI companies are attempting to create AI systems that outperform humans at
most economically valuable work. Current AI models are already automating away
the livelihoods of some artists, actors, and writers. But there is infighting
between those who prioritize current harms and those who prioritize future
harms. We construct a
game-theoretic model of conflict to study the causes and consequences of this
disunity. Our model also helps explain why throughout history, stakeholders
sharing a common threat have found it advantageous to unite against it, and why
the common threat has in turn found it advantageous to divide and conquer.
Under realistic parameter assumptions, our model makes several predictions
that find preliminary corroboration in the historical-empirical record. First,
current victims of AI-driven disempowerment need the future victims to realize
that their interests are also under serious and imminent threat, so that future
victims are incentivized to support current victims in solidarity. Second, the
movement against AI-driven disempowerment can become more united, and thereby
more likely to prevail, if members believe that their efforts will be
successful as opposed to futile. Finally, the movement can better unite and
prevail if its members are less myopic. Myopic members prioritize their future
well-being less than their present well-being, and are thus disinclined to
support current victims in solidarity today at personal cost, even if this is
necessary to counter the shared threat of AI-driven disempowerment.
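As a rough intuition pump, the abstract's three predictions can be reduced to a single inequality: a future victim supports the movement only when the discounted, efficacy-weighted benefit of prevailing exceeds the present cost of solidarity. The sketch below is an illustrative assumption, not the paper's actual model; the function future_victim_joins and the parameters benefit, cost, efficacy, and delta are hypothetical names.

```python
# Illustrative toy model (an assumption, not the paper's formulation):
# a "future victim" decides whether to support current victims today.
#   benefit  - payoff to the group if the movement prevails
#   cost     - personal cost of supporting the movement today
#   efficacy - perceived probability that a united movement prevails
#   delta    - discount factor; low delta models myopia (future payoffs
#              are weighted less than present costs)

def future_victim_joins(benefit: float, cost: float,
                        efficacy: float, delta: float) -> bool:
    """Join only if the discounted expected gain exceeds today's cost."""
    return delta * efficacy * benefit > cost

if __name__ == "__main__":
    # Sweep perceived efficacy (prediction 2) and myopia (prediction 3).
    for efficacy in (0.2, 0.5, 0.8):
        for delta in (0.3, 0.6, 0.9):
            joins = future_victim_joins(benefit=10.0, cost=2.0,
                                        efficacy=efficacy, delta=delta)
            print(f"efficacy={efficacy:.1f} delta={delta:.1f} -> "
                  f"{'unites' if joins else 'defects'}")
```

Sweeping efficacy and delta this way reproduces the qualitative pattern described above: unity emerges only when members both believe the effort can succeed and discount the future lightly.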
Related papers
- Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI [67.58673784790375]
We argue that the 'bigger is better' AI paradigm is not only scientifically fragile but also comes with undesirable consequences.
First, it is not sustainable, as its compute demands increase faster than model performance, leading to unreasonable economic requirements and a disproportionate environmental footprint.
Second, it implies focusing on certain problems at the expense of others, leaving aside important applications, e.g. health, education, or the climate.
arXiv Detail & Related papers (2024-09-21T14:43:54Z)
- AI-Powered Autonomous Weapons Risk Geopolitical Instability and Threaten AI Research [6.96356867602455]
We argue that the recent embrace of machine learning in the development of autonomous weapons systems (AWS) creates serious risks to geopolitical stability and the free exchange of ideas in AI research.
ML is already enabling the substitution of AWS for human soldiers in many battlefield roles, reducing the upfront human cost, and thus political cost, of waging offensive war.
Further, the military value of AWS raises the specter of an AI-powered arms race and the misguided imposition of national security restrictions on AI research.
arXiv Detail & Related papers (2024-05-03T05:19:45Z)
- Now, Later, and Lasting: Ten Priorities for AI Research, Policy, and Practice [63.20307830884542]
The next several decades may well be a turning point for humanity, comparable to the industrial revolution.
Launched a decade ago, the project behind this report is committed to a perpetual series of studies by multidisciplinary experts.
We offer ten recommendations for action that collectively address both the short- and long-term potential impacts of AI technologies.
arXiv Detail & Related papers (2024-04-06T22:18:31Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- The Manipulation Problem: Conversational AI as a Threat to Epistemic Agency [0.0]
Conversational AI technology has advanced significantly over the last eighteen months.
Conversational agents designed to pursue targeted influence objectives are likely to be deployed in the near future.
Sometimes referred to as the "AI Manipulation Problem," the emerging risk is that consumers will unwittingly engage in real-time dialog with predatory AI agents.
arXiv Detail & Related papers (2023-06-19T04:09:16Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Activism by the AI Community: Analysing Recent Achievements and Future Prospects [0.0]
We survey activism by the AI community over the last six years.
We apply two analytical frameworks to explore what these efforts imply for the future prospects of the AI community.
arXiv Detail & Related papers (2020-01-17T20:53:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.