On the Unimportance of Superintelligence
- URL: http://arxiv.org/abs/2109.07899v1
- Date: Mon, 30 Aug 2021 01:23:25 GMT
- Title: On the Unimportance of Superintelligence
- Authors: John G. Sotos
- Abstract summary: I analyze the priority for allocating resources to mitigate the risk of superintelligences.
Part I observes that a superintelligence unconnected to the outside world carries no threat.
Part II proposes that biotechnology ranks high in risk among peripheral systems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humankind faces many existential threats, but has limited resources to
mitigate them. Choosing how and when to deploy those resources is, therefore, a
fateful decision. Here, I analyze the priority for allocating resources to
mitigate the risk of superintelligences.
Part I observes that a superintelligence unconnected to the outside world
(de-efferented) carries no threat, and that any threat from a harmful
superintelligence derives from the peripheral systems to which it is connected,
e.g., nuclear weapons, biotechnology, etc. Because existentially-threatening
peripheral systems already exist and are controlled by humans, the initial
effects of a superintelligence would merely add to the existing human-derived
risk. This additive risk can be quantified and, with specific assumptions, is
shown to decrease with the square of the number of humans having the capability
to collapse civilization.
Part II proposes that biotechnology ranks high in risk among peripheral
systems because, according to all indications, many humans already have the
technological capability to engineer harmful microbes having pandemic spread.
Progress in biomedicine and computing will proliferate this threat. "Savant"
software that is not generally superintelligent will underpin much of this
progress, thereby becoming the software responsible for the highest and most
imminent existential risk -- ahead of hypothetical risk from
superintelligences.
The analysis concludes that resources should be preferentially applied to
mitigating the risk of peripheral systems and savant software. Concerns about
superintelligence are at most secondary, and possibly superfluous.
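To make the additive-risk framing of Part I concrete, the following toy calculation (a sketch with illustrative assumptions, not the paper's own derivation) treats a connected superintelligence as one additional actor among N humans who are each independently capable of collapsing civilization, and measures how much that extra actor raises the aggregate annual risk. The function names and the value of p are hypothetical choices for the example.

```python
# Toy model of the additive-risk argument in Part I.
# Assumptions (illustrative only, not taken from the paper): N human actors,
# each with an independent annual probability p of collapsing civilization;
# a superintelligence wired to the same peripheral systems acts like one
# additional such actor.

def aggregate_risk(n_actors: int, p: float) -> float:
    """Annual probability that at least one of n_actors causes collapse."""
    return 1.0 - (1.0 - p) ** n_actors

def relative_added_risk(n_humans: int, p: float) -> float:
    """Fractional increase in aggregate risk from adding one more actor."""
    baseline = aggregate_risk(n_humans, p)
    with_superintelligence = aggregate_risk(n_humans + 1, p)
    return (with_superintelligence - baseline) / baseline

if __name__ == "__main__":
    p = 1e-6  # hypothetical per-actor annual probability of causing collapse
    for n in (100, 1_000, 10_000):
        print(f"N = {n:>6,}: relative added risk ~ {relative_added_risk(n, p):.2e}")
```

Under these simplified assumptions the relative added risk falls off roughly as 1/N; the paper's own, more specific assumptions sharpen this to a decrease with the square of N.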
Related papers
- Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI [67.58673784790375]
We argue that the 'bigger is better' AI paradigm is not only scientifically fragile but also comes with undesirable consequences.
First, it is not sustainable, as its compute demands increase faster than model performance, leading to unreasonable economic requirements and a disproportionate environmental footprint.
Second, it implies focusing on certain problems at the expense of others, leaving aside important applications, e.g. health, education, or the climate.
arXiv Detail & Related papers (2024-09-21T14:43:54Z)
- Risks and Opportunities of Open-Source Generative AI [64.86989162783648]
Applications of Generative AI (Gen AI) are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about the potential risks of the technology, and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source generative AI.
arXiv Detail & Related papers (2024-05-14T13:37:36Z)
- Near to Mid-term Risks and Opportunities of Open-Source Generative AI [94.06233419171016]
Applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source Generative AI.
arXiv Detail & Related papers (2024-04-25T21:14:24Z)
- Artificial Intelligence: Arguments for Catastrophic Risk [0.0]
We review two influential arguments purporting to show how AI could pose catastrophic risks.
The first argument -- the Problem of Power-Seeking -- claims that advanced AI systems are likely to engage in dangerous power-seeking behavior.
The second argument claims that the development of human-level AI will unlock rapid further progress.
arXiv Detail & Related papers (2024-01-27T19:34:13Z)
- Control Risk for Potential Misuse of Artificial Intelligence in Science [85.91232985405554]
We aim to raise awareness of the dangers of AI misuse in science.
We highlight real-world examples of misuse in chemical science.
We propose a system called SciGuard to control misuse risks for AI models in science.
arXiv Detail & Related papers (2023-12-11T18:50:57Z)
- Close the Gates: How we can keep the future human by choosing not to develop superhuman general-purpose artificial intelligence [0.20919309330073077]
In the coming years, humanity may irreversibly cross a threshold by creating general-purpose AI.
This would upend core aspects of human society, present many unprecedented risks, and would likely be uncontrollable in several senses.
We can choose to not do so, starting by instituting hard limits on the computation that can be used to train and run neural networks.
With these limits in place, AI research and industry can focus on making both narrow and general-purpose AI that humans can understand and control, and from which we can reap enormous benefit.
arXiv Detail & Related papers (2023-11-15T23:41:12Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- An Overview of Catastrophic AI Risks [38.84933208563934]
This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans.
arXiv Detail & Related papers (2023-06-21T03:35:06Z)
- Current and Near-Term AI as a Potential Existential Risk Factor [5.1806669555925975]
We problematise the notion that current and near-term artificial intelligence technologies have the potential to contribute to existential risk.
We propose the hypothesis that certain already-documented effects of AI can act as existential risk factors.
Our main contribution is an exposition of potential AI risk factors and the causal relationships between them.
arXiv Detail & Related papers (2022-09-21T18:56:14Z)
- How Do AI Timelines Affect Existential Risk? [0.0]
Delaying the creation of superintelligent AI (ASI) could decrease total existential risk by increasing the amount of time humanity has to work on the AI alignment problem.
However, since ASI itself could reduce most other existential risks, delaying its creation could also increase those risks.
Other factors such as war and a hardware overhang could increase AI risk and cognitive enhancement could decrease AI risk.
arXiv Detail & Related papers (2022-08-30T15:49:11Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
Clarifying these particular principles matters because they could potentially help us build AI systems that benefit from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)