Artificial Stupidity
- URL: http://arxiv.org/abs/2007.03616v1
- Date: Thu, 2 Jul 2020 00:37:23 GMT
- Title: Artificial Stupidity
- Authors: Michael Falk
- Abstract summary: Debate about AI is dominated by Frankenstein Syndrome, the fear that AI will become superhuman and escape human control.
This article discusses the roots of Frankenstein Syndrome in Mary Shelley's famous novel of 1818.
It shows that modern intelligent systems can be seen to suffer from 'stupidity of judgement'.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Public debate about AI is dominated by Frankenstein Syndrome, the fear that
AI will become superhuman and escape human control. Although superintelligence
is certainly a possibility, the interest it excites can distract the public
from a more imminent concern: the rise of Artificial Stupidity (AS). This
article discusses the roots of Frankenstein Syndrome in Mary Shelley's famous
novel of 1818. It then provides a philosophical framework for analysing the
stupidity of artificial agents, demonstrating that modern intelligent systems
can be seen to suffer from 'stupidity of judgement'. Finally it identifies an
alternative literary tradition that exposes the perils and benefits of AS. In
the writings of Edmund Spenser, Jonathan Swift and E.T.A. Hoffmann, ASs
replace, oppress or seduce their human users. More optimistically, Joseph
Furphy and Laurence Sterne imagine ASs that can serve human intellect as maps
or as pipes. These writers provide a strong counternarrative to the myths that
currently drive the AI debate. They identify ways in which even stupid
artificial agents can evade human control, for instance by appealing to
stereotypes or distancing us from reality. And they underscore the continuing
importance of the literary imagination in an increasingly automated society.
Related papers
- The Generative AI Paradox: "What It Can Create, It May Not Understand" [81.89252713236746]
The recent wave of generative AI has sparked excitement and concern over potentially superhuman levels of artificial intelligence.
At the same time, models still show basic errors in understanding that would not be expected even in non-expert humans.
This presents us with an apparent paradox: how do we reconcile seemingly superhuman capabilities with the persistence of errors that few humans would make?
arXiv Detail & Related papers (2023-10-31T18:07:07Z) - Borges and AI [14.879252696060302]
Proponents and opponents grasp AI through the imagery popularised by science fiction.
Will the machine become sentient and rebel against its creators?
This exercise leads to a new perspective that illuminates the relation between language modelling and artificial intelligence.
arXiv Detail & Related papers (2023-09-27T16:15:34Z) - On the ethics of constructing conscious AI [0.0]
The ethics of AI has come to be dominated by humanity's collective fear of its creatures.
In real life, with very few exceptions, theorists working on the ethics of AI completely ignore the possibility of robots needing protection from their creators.
This book chapter takes up this less commonly considered ethical angle of AI.
arXiv Detail & Related papers (2023-03-13T19:36:16Z) - Adversarial Policies Beat Superhuman Go AIs [54.15639517188804]
We attack the state-of-the-art Go-playing AI system KataGo by training adversarial policies against it.
Our adversaries do not win by playing Go well. Instead, they trick KataGo into making serious blunders.
Our results demonstrate that even superhuman AI systems may harbor surprising failure modes.
arXiv Detail & Related papers (2022-11-01T03:13:20Z) - When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence [1.9143819780453073]
The benefits of human-like artificial intelligence include soaring productivity, increased leisure, and perhaps most profoundly, a better understanding of our own minds.
But not all types of AI are human-like. In fact, many of the most powerful systems are very different from humans.
As machines become better substitutes for human labor, workers lose economic and political bargaining power.
In contrast, when AI is focused on augmenting humans rather than mimicking them, then humans retain the power to insist on a share of the value created.
arXiv Detail & Related papers (2022-01-11T21:07:17Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - The Threat of Offensive AI to Organizations [52.011307264694665]
This survey explores the threat of offensive AI on organizations.
First, we discuss how AI changes the adversary's methods, strategies, goals, and overall attack model.
Then, through a literature review, we identify 33 offensive AI capabilities which adversaries can use to enhance their attacks.
arXiv Detail & Related papers (2021-06-30T01:03:28Z) - Detecting Synthetic Phenomenology in a Contained Artificial General Intelligence [0.0]
This work provides an analysis of existing measures of phenomenology through qualia.
It extends those ideas into the context of a contained artificial general intelligence.
arXiv Detail & Related papers (2020-11-06T16:10:38Z) - Achilles Heels for AGI/ASI via Decision Theoretic Adversaries [0.9790236766474201]
It is important to know how advanced systems will make choices and in what ways they may fail.
One might suspect that artificial general intelligences (AGIs) and artificial superintelligences (ASIs) will be systems that humans cannot reliably outsmart.
This paper presents the Achilles Heel hypothesis which states that even a potentially superintelligent system may nonetheless have stable decision-theoretic delusions.
arXiv Detail & Related papers (2020-10-12T02:53:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.