Artificial General Intelligence, Existential Risk, and Human Risk Perception
- URL: http://arxiv.org/abs/2311.08698v1
- Date: Wed, 15 Nov 2023 04:57:16 GMT
- Title: Artificial General Intelligence, Existential Risk, and Human Risk Perception
- Authors: David R. Mandel
- Abstract summary: Artificial general intelligence (AGI) does not yet exist, but it is projected to reach human-level intelligence within roughly the next two decades.
AGI poses an existential risk to humans because there is no reliable method for ensuring that AGI goals stay aligned with human goals.
The perceived risk of a world catastrophe or extinction from AGI is greater than for other existential risks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Artificial general intelligence (AGI) does not yet exist, but given the pace
of technological development in artificial intelligence, it is projected to
reach human-level intelligence within roughly the next two decades. After that,
many experts expect it to far surpass human intelligence and to do so rapidly.
The prospect of superintelligent AGI poses an existential risk to humans
because there is no reliable method for ensuring that AGI goals stay aligned
with human goals. Drawing on publicly available forecaster and opinion data,
the author examines how experts and non-experts perceive risk from AGI. The
findings indicate that the perceived risk of a world catastrophe or extinction
from AGI is greater than for other existential risks. The increase in perceived
risk over the last year is also steeper for AGI than for other existential
threats (e.g., nuclear war or human-caused climate change). That AGI is a
pressing existential risk is something on which experts and non-experts agree,
but the basis for such agreement currently remains obscure.
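The risk-perception comparison described in the abstract reduces to two summary statistics per threat: the current level of perceived catastrophe or extinction risk and its change over the past year. The short Python sketch below illustrates that computation; the threat names are taken from the abstract, but every numeric value is a hypothetical placeholder for illustration, not the paper's forecaster or survey data.

```python
# Minimal sketch of the comparison in the abstract: contrast perceived
# catastrophe/extinction risk from AGI with other existential threats and
# compute the one-year change in perception.
# All percentages are hypothetical placeholders, NOT the paper's data.

perceived_risk = {
    # threat: (last_year_pct, this_year_pct) -- hypothetical survey means
    "AGI": (15.0, 25.0),
    "nuclear war": (10.0, 12.0),
    "climate change": (8.0, 9.0),
}

for threat, (last_year, this_year) in perceived_risk.items():
    change = this_year - last_year
    print(f"{threat:15s} perceived risk: {this_year:5.1f}%  "
          f"(change over last year: {change:+.1f} pts)")

# Rank threats by current perceived risk and by steepness of the increase.
by_level = max(perceived_risk, key=lambda t: perceived_risk[t][1])
by_rise = max(perceived_risk, key=lambda t: perceived_risk[t][1] - perceived_risk[t][0])
print(f"Highest perceived risk: {by_level}; steepest one-year rise: {by_rise}")
```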
Related papers
- Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path? [37.13209023718946]
Unchecked AI agency poses significant risks to public safety and security.
We discuss how these risks arise from current AI training methods.
As a core building block for further advances, we propose the development of a non-agentic AI system.
arXiv Detail & Related papers (2025-02-21T18:28:36Z)
- Fully Autonomous AI Agents Should Not be Developed [58.88624302082713]
This paper argues that fully autonomous AI agents should not be developed.
In support of this position, we build from prior scientific literature and current product marketing to delineate different AI agent levels.
Our analysis reveals that risks to people increase with the autonomy of a system.
arXiv Detail & Related papers (2025-02-04T19:00:06Z)
- Why do Experts Disagree on Existential Risk and P(doom)? A Survey of AI Experts [0.0]
Research on catastrophic risks and AI alignment is often met with skepticism by experts.
Online debate over the existential risk of AI has begun to turn tribal.
I surveyed 111 AI experts on their familiarity with AI safety concepts.
arXiv Detail & Related papers (2025-01-25T01:51:29Z)
- Risks and Opportunities of Open-Source Generative AI [64.86989162783648]
Applications of Generative AI (Gen AI) are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about the risks of the technology and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source generative AI.
arXiv Detail & Related papers (2024-05-14T13:37:36Z)
- AI-Powered Autonomous Weapons Risk Geopolitical Instability and Threaten AI Research [6.96356867602455]
We argue that the recent embrace of machine learning in the development of autonomous weapons systems (AWS) creates serious risks to geopolitical stability and the free exchange of ideas in AI research.
ML is already enabling the substitution of AWS for human soldiers in many battlefield roles, reducing the upfront human cost, and thus political cost, of waging offensive war.
Further, the military value of AWS raises the specter of an AI-powered arms race and the misguided imposition of national security restrictions on AI research.
arXiv Detail & Related papers (2024-05-03T05:19:45Z)
- Near to Mid-term Risks and Opportunities of Open-Source Generative AI [94.06233419171016]
Applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source Generative AI.
arXiv Detail & Related papers (2024-04-25T21:14:24Z)
- Control Risk for Potential Misuse of Artificial Intelligence in Science [85.91232985405554]
We aim to raise awareness of the dangers of AI misuse in science.
We highlight real-world examples of misuse in chemical science.
We propose a system called SciGuard to control misuse risks for AI models in science.
arXiv Detail & Related papers (2023-12-11T18:50:57Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- An Overview of Catastrophic AI Risks [38.84933208563934]
This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories.
These categories are: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty of controlling agents far more intelligent than humans.
arXiv Detail & Related papers (2023-06-21T03:35:06Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can lead to a deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Current and Near-Term AI as a Potential Existential Risk Factor [5.1806669555925975]
We problematise the notion that current and near-term artificial intelligence technologies have the potential to contribute to existential risk.
We propose the hypothesis that certain already-documented effects of AI can act as existential risk factors.
Our main contribution is an exposition of potential AI risk factors and the causal relationships between them.
arXiv Detail & Related papers (2022-09-21T18:56:14Z)
- How Do AI Timelines Affect Existential Risk? [0.0]
Delaying the creation of superintelligent AI (ASI) could decrease total existential risk by increasing the amount of time humanity has to work on the AI alignment problem.
Since ASI could reduce most risks, delaying the creation of ASI could also increase other existential risks.
Other factors, such as war and a hardware overhang, could increase AI risk, while cognitive enhancement could decrease it.
arXiv Detail & Related papers (2022-08-30T15:49:11Z)
- On the Unimportance of Superintelligence [0.0]
I analyze the priority for allocating resources to mitigate the risk of superintelligences.
Part I observes that a superintelligence unconnected to the outside world carries no threat.
Part II proposes that biotechnology ranks high in risk among peripheral systems.
arXiv Detail & Related papers (2021-08-30T01:23:25Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.