Artificial Stupidity
- URL: http://arxiv.org/abs/2007.03616v1
- Date: Thu, 2 Jul 2020 00:37:23 GMT
- Title: Artificial Stupidity
- Authors: Michael Falk
- Abstract summary: Debate about AI is dominated by Frankenstein Syndrome, the fear that AI will become superhuman and escape human control.
This article discusses the roots of Frankenstein Syndrome in Mary Shelley's famous novel of 1818.
It shows that modern intelligent systems can be seen to suffer from 'stupidity of judgement'.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Public debate about AI is dominated by Frankenstein Syndrome, the fear that
AI will become superhuman and escape human control. Although superintelligence
is certainly a possibility, the interest it excites can distract the public
from a more imminent concern: the rise of Artificial Stupidity (AS). This
article discusses the roots of Frankenstein Syndrome in Mary Shelley's famous
novel of 1818. It then provides a philosophical framework for analysing the
stupidity of artificial agents, demonstrating that modern intelligent systems
can be seen to suffer from 'stupidity of judgement'. Finally it identifies an
alternative literary tradition that exposes the perils and benefits of AS. In
the writings of Edmund Spenser, Jonathan Swift and E.T.A. Hoffmann, ASs
replace, oppress or seduce their human users. More optimistically, Joseph
Furphy and Laurence Sterne imagine ASs that can serve human intellect as maps
or as pipes. These writers provide a strong counternarrative to the myths that
currently drive the AI debate. They identify ways in which even stupid
artificial agents can evade human control, for instance by appealing to
stereotypes or distancing us from reality. And they underscore the continuing
importance of the literary imagination in an increasingly automated society.
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- "I Am the One and Only, Your Cyber BFF": Understanding the Impact of GenAI Requires Understanding the Impact of Anthropomorphic AI [55.99010491370177]
We argue that we cannot thoroughly map the social impacts of generative AI without mapping the social impacts of anthropomorphic AI.
Anthropomorphic AI systems are increasingly prone to generating outputs that are perceived to be human-like.
arXiv Detail & Related papers (2024-10-11T04:57:41Z)
- On the consistent reasoning paradox of intelligence and optimal trust in AI: The power of 'I don't know' [79.69412622010249]
Consistent reasoning, which lies at the core of human intelligence, is the ability to handle tasks that are equivalent.
CRP asserts that consistent reasoning implies fallibility -- in particular, human-like intelligence in AI necessarily comes with human-like fallibility.
arXiv Detail & Related papers (2024-08-05T10:06:53Z) - The Generative AI Paradox: "What It Can Create, It May Not Understand" [81.89252713236746]
The recent wave of generative AI has sparked excitement and concern over potentially superhuman levels of artificial intelligence.
At the same time, models still show basic errors in understanding that would not be expected even in non-expert humans.
This presents us with an apparent paradox: how do we reconcile seemingly superhuman capabilities with the persistence of errors that few humans would make?
arXiv Detail & Related papers (2023-10-31T18:07:07Z) - Borges and AI [14.879252696060302]
Proponents and opponents grasp AI through the imagery popularised by science fiction.
Will the machine become sentient and rebel against its creators?
This exercise leads to a new perspective that illuminates the relation between language modelling and artificial intelligence.
arXiv Detail & Related papers (2023-09-27T16:15:34Z) - On the ethics of constructing conscious AI [0.0]
The ethics of AI has come to be dominated by humanity's collective fear of its creatures.
In real life, with very few exceptions, theorists working on the ethics of AI completely ignore the possibility of robots needing protection from their creators.
This book chapter takes up this less commonly considered ethical angle of AI.
arXiv Detail & Related papers (2023-03-13T19:36:16Z) - When to Make Exceptions: Exploring Language Models as Accounts of Human
Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z) - The Turing Trap: The Promise & Peril of Human-Like Artificial
Intelligence [1.9143819780453073]
The benefits of human-like artificial intelligence include soaring productivity, increased leisure, and perhaps most profoundly, a better understanding of our own minds.
But not all types of AI are human-like. In fact, many of the most powerful systems are very different from humans.
As machines become better substitutes for human labor, workers lose economic and political bargaining power.
In contrast, when AI is focused on augmenting humans rather than mimicking them, then humans retain the power to insist on a share of the value created.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2020-11-06T16:10:38Z)
- Detecting Synthetic Phenomenology in a Contained Artificial General Intelligence [0.0]
This work provides an analysis of existing measures of phenomenology through qualia.
It extends those ideas into the context of a contained artificial general intelligence.
arXiv Detail & Related papers (2020-10-12T02:53:23Z)
- Achilles Heels for AGI/ASI via Decision Theoretic Adversaries [0.9790236766474201]
It is important to know how advanced systems will make choices and in what ways they may fail.
One might suspect that artificial general intelligences (AGIs) and artificial superintelligences (ASIs) will be systems that humans cannot reliably outsmart.
This paper presents the Achilles Heel hypothesis which states that even a potentially superintelligent system may nonetheless have stable decision-theoretic delusions.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (or any information on this site) and is not responsible for any consequences.