Current and Near-Term AI as a Potential Existential Risk Factor
- URL: http://arxiv.org/abs/2209.10604v1
- Date: Wed, 21 Sep 2022 18:56:14 GMT
- Title: Current and Near-Term AI as a Potential Existential Risk Factor
- Authors: Benjamin S. Bucknall and Shiri Dori-Hacohen
- Abstract summary: We problematise the notion that current and near-term artificial intelligence technologies have the potential to contribute to existential risk.
We propose the hypothesis that certain already-documented effects of AI can act as existential risk factors.
Our main contribution is an exposition of potential AI risk factors and the causal relationships between them.
- Score: 5.1806669555925975
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There is a substantial and ever-growing corpus of evidence and literature
exploring the impacts of Artificial intelligence (AI) technologies on society,
politics, and humanity as a whole. A separate, parallel body of work has
explored existential risks to humanity, including but not limited to that
stemming from unaligned Artificial General Intelligence (AGI). In this paper,
we problematise the notion that current and near-term artificial intelligence
technologies have the potential to contribute to existential risk by acting as
intermediate risk factors, and that this potential is not limited to the
unaligned AGI scenario. We propose the hypothesis that certain
already-documented effects of AI can act as existential risk factors,
magnifying the likelihood of previously identified sources of existential risk.
Moreover, future developments in the coming decade hold the potential to
significantly exacerbate these risk factors, even in the absence of artificial
general intelligence. Our main contribution is a (non-exhaustive) exposition of
potential AI risk factors and the causal relationships between them, focusing
on how AI can affect power dynamics and information security. This exposition
demonstrates that there exist causal pathways from AI systems to existential
risks that do not presuppose hypothetical future AI capabilities.
Related papers
- Risks and Opportunities of Open-Source Generative AI [64.86989162783648]
Applications of Generative AI (Gen AI) are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about the potential risks of the technology, and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source generative AI.
arXiv Detail & Related papers (2024-05-14T13:37:36Z)
- Near to Mid-term Risks and Opportunities of Open-Source Generative AI [94.06233419171016]
Applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source Generative AI.
arXiv Detail & Related papers (2024-04-25T21:14:24Z)
- Artificial Intelligence: Arguments for Catastrophic Risk [0.0]
We review two influential arguments purporting to show how AI could pose catastrophic risks.
The first argument -- the Problem of Power-Seeking -- claims that advanced AI systems are likely to engage in dangerous power-seeking behavior.
The second argument claims that the development of human-level AI will unlock rapid further progress.
arXiv Detail & Related papers (2024-01-27T19:34:13Z)
- Two Types of AI Existential Risk: Decisive and Accumulative [3.5051464966389116]
This paper contrasts the conventional "decisive AI x-risk hypothesis" with an "accumulative AI x-risk hypothesis".
The accumulative hypothesis suggests a boiling frog scenario where incremental AI risks slowly converge, undermining resilience until a triggering event results in irreversible collapse.
arXiv Detail & Related papers (2024-01-15T17:06:02Z)
- Control Risk for Potential Misuse of Artificial Intelligence in Science [85.91232985405554]
We aim to raise awareness of the dangers of AI misuse in science.
We highlight real-world examples of misuse in chemical science.
We propose a system called SciGuard to control misuse risks for AI models in science.
arXiv Detail & Related papers (2023-12-11T18:50:57Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- An Overview of Catastrophic AI Risks [38.84933208563934]
This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories.
These are: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans.
arXiv Detail & Related papers (2023-06-21T03:35:06Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- How Do AI Timelines Affect Existential Risk? [0.0]
Delaying the creation of superintelligent AI (ASI) could decrease total existential risk by increasing the amount of time humanity has to work on the AI alignment problem.
Since ASI could reduce most risks, delaying the creation of ASI could also increase other existential risks.
Other factors, such as war and a hardware overhang, could increase AI risk, while cognitive enhancement could decrease it.
arXiv Detail & Related papers (2022-08-30T15:49:11Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help bridge these perspectives by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- AI Research Considerations for Human Existential Safety (ARCHES) [6.40842967242078]
In negative terms, we ask what existential risks humanity might face from AI development in the next century.
A key property of hypothetical AI technologies, called prepotence, is introduced.
A set of contemporary research directions is then examined for their potential benefit to existential safety.
arXiv Detail & Related papers (2020-05-30T02:05:16Z)