Close the Gates: How we can keep the future human by choosing not to develop superhuman general-purpose artificial intelligence
- URL: http://arxiv.org/abs/2311.09452v3
- Date: Sat, 14 Sep 2024 14:59:47 GMT
- Title: Close the Gates: How we can keep the future human by choosing not to develop superhuman general-purpose artificial intelligence
- Authors: Anthony Aguirre
- Abstract summary: In the coming years, humanity may irreversibly cross a threshold by creating general-purpose AI.
This would upend core aspects of human society, present many unprecedented risks, and is likely to be uncontrollable in several senses.
We can choose to not do so, starting by instituting hard limits on the computation that can be used to train and run neural networks.
With these limits in place, AI research and industry can focus on making both narrow and general-purpose AI that humans can understand and control, and from which we can reap enormous benefit.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent dramatic advances in artificial intelligence indicate that in the coming years, humanity may irreversibly cross a threshold by creating superhuman general-purpose AI: AI that is better than humans at cognitive tasks in general in the way that AI is currently unbeatable in certain domains. This would upend core aspects of human society, present many unprecedented risks, and is likely to be uncontrollable in several senses. We can choose to not do so, starting by instituting hard limits - placed at the national and international level, and verified by hardware security measures - on the computation that can be used to train and run neural networks. With these limits in place, AI research and industry can focus on making both narrow and general-purpose AI that humans can understand and control, and from which we can reap enormous benefit.
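The abstract's central proposal is a hard cap on training computation, verified at the hardware level. As an illustrative sketch only (the cap value, function names, and the 6-FLOPs-per-parameter-per-token rule of thumb are assumptions for illustration, not figures from the paper), a compliance check against such a cap might look like:

```python
# Illustrative sketch (not from the paper): checking a proposed training run
# against a hypothetical hard compute cap, using the common rule of thumb
# that dense-transformer training costs ~6 FLOPs per parameter per token.

HYPOTHETICAL_TRAINING_CAP_FLOP = 1e25  # placeholder value, not the paper's


def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate: ~6 * parameters * training tokens."""
    return 6.0 * n_params * n_tokens


def within_cap(n_params: float, n_tokens: float,
               cap: float = HYPOTHETICAL_TRAINING_CAP_FLOP) -> bool:
    """True if the estimated training compute stays under the hard limit."""
    return estimate_training_flops(n_params, n_tokens) <= cap


# A 70B-parameter model trained on 2T tokens stays well under a 1e25 cap:
print(within_cap(70e9, 2e12))
```

In practice the paper envisions such limits being enforced by national and international policy backed by hardware security measures, not by a voluntary software check; this sketch only makes the arithmetic of a compute threshold concrete.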
Related papers
- Imagining and building wise machines: The centrality of AI metacognition
We argue that shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Managing extreme AI risks amid rapid progress
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Fairness in AI and Its Long-Term Implications on Society
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Natural Selection Favors AIs over Humans
We argue that the most successful AI agents will likely have undesirable traits.
If such agents have intelligence that exceeds that of humans, this could lead to humanity losing control of its future.
To counteract these risks and evolutionary forces, we consider interventions such as carefully designing AI agents' intrinsic motivations.
arXiv Detail & Related papers (2023-03-28T17:59:12Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2)
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence
The benefits of human-like artificial intelligence include soaring productivity, increased leisure, and perhaps most profoundly, a better understanding of our own minds.
But not all types of AI are human-like. In fact, many of the most powerful systems are very different from humans.
As machines become better substitutes for human labor, workers lose economic and political bargaining power.
In contrast, when AI is focused on augmenting humans rather than mimicking them, then humans retain the power to insist on a share of the value created.
arXiv Detail & Related papers (2022-01-11T21:07:17Z)
- Making AI 'Smart': Bridging AI and Cognitive Science
With the integration of cognitive science, the 'artificial' characteristic of Artificial Intelligence might soon be replaced with 'smart'.
This will help develop more powerful AI systems and simultaneously give us a better understanding of how the human brain works.
We argue that the possibility of AI taking over human civilization is low as developing such an advanced system requires a better understanding of the human brain first.
arXiv Detail & Related papers (2021-12-31T09:30:44Z)
- Trustworthy AI: A Computational Perspective
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Empowering Things with Intelligence: A Survey of the Progress, Challenges, and Opportunities in Artificial Intelligence of Things
We show how AI can empower the IoT to make it faster, smarter, greener, and safer.
First, we present progress in AI research for IoT from four perspectives: perceiving, learning, reasoning, and behaving.
Finally, we summarize some promising applications of AIoT that are likely to profoundly reshape our world.
arXiv Detail & Related papers (2020-11-17T13:14:28Z)
- On Controllability of AI
We present arguments, as well as supporting evidence, indicating that advanced AI cannot be fully controlled.
The consequences of AI's uncontrollability are discussed with respect to the future of humanity, research on AI, and AI safety and security.
arXiv Detail & Related papers (2020-07-19T02:49:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including this list) and is not responsible for any consequences of its use.