When Autonomy Breaks: The Hidden Existential Risk of AI
- URL: http://arxiv.org/abs/2503.22151v1
- Date: Fri, 28 Mar 2025 05:10:32 GMT
- Title: When Autonomy Breaks: The Hidden Existential Risk of AI
- Authors: Joshua Krook
- Abstract summary: I argue that there is an underappreciated risk in the slow and irrevocable decline of human autonomy. What may follow is a process of gradual de-skilling, where we lose skills that we currently take for granted. The biggest threat to humanity is not that machines will become more like humans, but that humans will become more like machines.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AI risks are typically framed around physical threats to humanity: a loss of control or an accidental error causing humanity's extinction. However, I argue, in line with the gradual disempowerment thesis, that there is an underappreciated risk in the slow and irrevocable decline of human autonomy. As AI starts to outcompete humans in various areas of life, a tipping point will be reached where it no longer makes sense to rely on human decision-making, creativity, social care or even leadership. What may follow is a process of gradual de-skilling, where we lose skills that we currently take for granted. Traditionally, it is argued that AI will gain human skills over time, and that these skills are innate and immutable in humans. By contrast, I argue that humans may lose such skills as critical thinking, decision-making and even social care in an AGI world. The biggest threat to humanity is therefore not that machines will become more like humans, but that humans will become more like machines.
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that such shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z) - Rolling in the deep of cognitive and AI biases [1.556153237434314]
We argue that there is urgent need to understand AI as a sociotechnical system, inseparable from the conditions in which it is designed, developed and deployed.
We address this critical issue by following a radical new methodology under which human cognitive biases become core entities in our AI fairness overview.
We introduce a new mapping, which justifies the human-to-AI biases, and we detect relevant fairness intensities and inter-dependencies.
arXiv Detail & Related papers (2024-07-30T21:34:04Z) - Keep the Future Human: Why and How We Should Close the Gates to AGI and Superintelligence, and What We Should Build Instead [0.20919309330073077]
Advances in AI have transformed AI from a niche academic field to the core business strategy of many of the world's largest companies. This essay argues that we should keep the future human by closing the "gates" to smarter-than-human, autonomous, general-purpose AI. Instead, we should focus on powerful, trustworthy AI tools that can empower individuals and transformatively improve human societies' abilities to do what they do best.
arXiv Detail & Related papers (2023-11-15T23:41:12Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Natural Selection Favors AIs over Humans [18.750116414606698]
We argue that the most successful AI agents will likely have undesirable traits.
If such agents have intelligence that exceeds that of humans, this could lead to humanity losing control of its future.
To counteract these risks and evolutionary forces, we consider interventions such as carefully designing AI agents' intrinsic motivations.
arXiv Detail & Related papers (2023-03-28T17:59:12Z) - When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z) - The dangers in algorithms learning humans' values and irrationalities [4.606850300668693]
AI systems that are trained on human behavior risk miscategorising human irrationalities as human values.
Knowing human policy allows an AI to become generically more powerful.
It is better for the AI to learn human values directly, rather than learning human biases and then deducing values from behaviour.
arXiv Detail & Related papers (2022-02-28T17:41:39Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence [1.9143819780453073]
The benefits of human-like artificial intelligence include soaring productivity, increased leisure, and perhaps most profoundly, a better understanding of our own minds.
But not all types of AI are human-like. In fact, many of the most powerful systems are very different from humans.
As machines become better substitutes for human labor, workers lose economic and political bargaining power.
In contrast, when AI is focused on augmenting humans rather than mimicking them, then humans retain the power to insist on a share of the value created.
arXiv Detail & Related papers (2022-01-11T21:07:17Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)