AI Risk Skepticism, A Comprehensive Survey
- URL: http://arxiv.org/abs/2303.03885v1
- Date: Thu, 16 Feb 2023 16:32:38 GMT
- Title: AI Risk Skepticism, A Comprehensive Survey
- Authors: Vemir Michael Ambartsoumean, Roman V. Yampolskiy
- Abstract summary: The study takes into account different points of view on the topic and draws parallels with other forms of skepticism that have shown up in science.
We categorize the various skepticisms regarding the dangers of AI by the type of mistaken thinking involved.
- Score: 1.370633147306388
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this thorough study, we take a closer look at the skepticism that has
arisen with respect to potential dangers associated with artificial
intelligence, denoted as AI Risk Skepticism. Our study takes into account
different points of view on the topic and draws parallels with other forms of
skepticism that have shown up in science. We categorize the various skepticisms
regarding the dangers of AI by the type of mistaken thinking involved. We hope
this will be of interest and value to AI researchers concerned about the future
of AI and the risks that it may pose. The issues of skepticism and risk in AI
are decidedly important and require serious consideration. By addressing these
issues with the rigor and precision of scientific research, we hope to better
understand the objections we face and to find satisfactory ways to resolve
them.
Related papers
- Artificial Intelligence: Arguments for Catastrophic Risk [0.0]
We review two influential arguments purporting to show how AI could pose catastrophic risks.
The first argument -- the Problem of Power-Seeking -- claims that advanced AI systems are likely to engage in dangerous power-seeking behavior.
The second argument claims that the development of human-level AI will unlock rapid further progress.
arXiv Detail & Related papers (2024-01-27T19:34:13Z)
- Control Risk for Potential Misuse of Artificial Intelligence in Science [85.91232985405554]
We aim to raise awareness of the dangers of AI misuse in science.
We highlight real-world examples of misuse in chemical science.
We propose a system called SciGuard to control misuse risks for AI models in science.
arXiv Detail & Related papers (2023-12-11T18:50:57Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Predictable Artificial Intelligence [67.79118050651908]
We argue that achieving predictability is crucial for fostering trust, liability, control, alignment and safety of AI ecosystems.
This paper aims to elucidate the questions, hypotheses and challenges relevant to Predictable AI.
arXiv Detail & Related papers (2023-10-09T21:36:21Z)
- An Overview of Catastrophic AI Risks [38.84933208563934]
This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories:
malicious use, in which individuals or groups intentionally use AIs to cause harm;
AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs;
organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans.
arXiv Detail & Related papers (2023-06-21T03:35:06Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- X-Risk Analysis for AI Research [24.78742908726579]
We provide a guide for how to analyze AI x-risk.
First, we review how systems can be made safer today.
Next, we discuss strategies for having long-term impacts on the safety of future systems.
arXiv Detail & Related papers (2022-06-13T00:22:50Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no established benchmarks or commonly accepted way of measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- AI Risk Skepticism [3.198144010381572]
We start by classifying different types of AI Risk skepticism and analyze their root causes.
We conclude by suggesting some intervention approaches, which may be successful in reducing AI risk skepticism, at least amongst artificial intelligence researchers.
arXiv Detail & Related papers (2021-05-02T23:29:36Z)
- AI Research Considerations for Human Existential Safety (ARCHES) [6.40842967242078]
In negative terms, we ask what existential risks humanity might face from AI development in the next century.
A key property of hypothetical AI technologies, called prepotence, is introduced.
A set of contemporary research directions is then examined for their potential benefit to existential safety.
arXiv Detail & Related papers (2020-05-30T02:05:16Z)