Simulacra as Conscious Exotica
- URL: http://arxiv.org/abs/2402.12422v2
- Date: Thu, 11 Jul 2024 15:42:47 GMT
- Title: Simulacra as Conscious Exotica
- Authors: Murray Shanahan
- Abstract summary: The advent of conversational agents with increasingly human-like behaviour throws old philosophical questions into new light.
Does it, or could it, ever make sense to speak of AI agents built out of generative language models in terms of consciousness?
This paper attempts to tackle this question while avoiding the pitfalls of dualistic thinking.
- Score: 13.672268920902187
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The advent of conversational agents with increasingly human-like behaviour throws old philosophical questions into new light. Does it, or could it, ever make sense to speak of AI agents built out of generative language models in terms of consciousness, given that they are "mere" simulacra of human behaviour, and that what they do can be seen as "merely" role play? Drawing on the later writings of Wittgenstein, this paper attempts to tackle this question while avoiding the pitfalls of dualistic thinking.
Related papers
- The Hermeneutic Turn of AI: Are Machines Capable of Interpreting? [0.0]
This article aims to demonstrate how the approach to computing is being disrupted by deep learning (artificial neural networks).
It also addresses the philosophical tradition of hermeneutics to highlight a parallel with this movement and to demystify the idea of human-like AI.
arXiv Detail & Related papers (2024-11-19T13:59:16Z) - Situated Instruction Following [87.37244711380411]
We propose situated instruction following, which embraces the inherent underspecification and ambiguity of real-world communication.
The meaning of situated instructions naturally unfolds through the past actions and the expected future behaviors of the human involved.
Our experiments indicate that state-of-the-art Embodied Instruction Following (EIF) models lack holistic understanding of situated human intention.
arXiv Detail & Related papers (2024-07-15T19:32:30Z) - Social AI and The Equation of Wittgenstein's Language User With Calvino's Literature Machine [0.0]
Is it sensical to ascribe psychological predicates to AI systems like chatbots based on large language models (LLMs)?
Social AIs are not full-blown language users, but rather more like Italo Calvino's literature machines.
The framework of mortal computation is used to show that social AIs lack the basic autopoiesis needed for narrative façons de parler.
arXiv Detail & Related papers (2024-05-23T09:51:44Z) - A Philosophical Introduction to Language Models -- Part I: Continuity With Classic Debates [0.05657375260432172]
This article serves both as a primer on language models for philosophers, and as an opinionated survey of their significance.
We argue that the success of language models challenges several long-held assumptions about artificial neural networks.
This sets the stage for the companion paper (Part II), which turns to novel empirical methods for probing the inner workings of language models.
arXiv Detail & Related papers (2024-01-08T14:12:31Z) - DiPlomat: A Dialogue Dataset for Situated Pragmatic Reasoning [89.92601337474954]
Pragmatic reasoning plays a pivotal role in deciphering implicit meanings that frequently arise in real-life conversations.
We introduce a novel challenge, DiPlomat, aiming at benchmarking machines' capabilities on pragmatic reasoning and situated conversational understanding.
arXiv Detail & Related papers (2023-06-15T10:41:23Z) - Large Language Models are In-Context Semantic Reasoners rather than Symbolic Reasoners [75.85554779782048]
Large Language Models (LLMs) have excited the natural language and machine learning community over recent years.
Despite numerous successful applications, the underlying mechanism of such in-context capabilities remains unclear.
In this work, we hypothesize that the learned semantics of language tokens do most of the heavy lifting during the reasoning process.
arXiv Detail & Related papers (2023-05-24T07:33:34Z) - The Human-or-Machine Matter: Turing-Inspired Reflections on an Everyday Issue [4.309879785418976]
We sidestep the question of whether a machine can be labeled intelligent, or can be said to match human capabilities in a given context.
We first draw attention to the seemingly simpler question a person may ask themselves in an everyday interaction: "Am I interacting with a human or with a machine?"
arXiv Detail & Related papers (2023-05-07T15:41:11Z) - When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z) - An Enactivist account of Mind Reading in Natural Language Understanding [0.0]
We apply our understanding of the radical enactivist agenda to a classic AI-hard problem.
The Turing Test assumed that the computer could use language, and the challenge was to fake human intelligence.
This paper looks again at how natural language understanding might actually work between humans.
arXiv Detail & Related papers (2021-11-11T12:46:00Z) - Towards Abstract Relational Learning in Human Robot Interaction [73.67226556788498]
Humans have a rich representation of the entities in their environment.
If robots need to interact successfully with humans, they need to represent entities, attributes, and generalizations in a similar way.
In this work, we address the problem of how to obtain these representations through human-robot interaction.
arXiv Detail & Related papers (2020-11-20T12:06:46Z) - Hacking with God: a Common Programming Language of Robopsychology and Robophilosophy [0.0]
We try to outline a conception in which robophilosophy and robopsychology will be able to play a similarly leading role in the progress of artificial intelligence.
We outline the idea of a visual artificial language and interactive theorem prover-based computer application called Prime Convo Assistant.
arXiv Detail & Related papers (2020-09-16T11:59:12Z) - Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on the domain of interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.