An Enactivist account of Mind Reading in Natural Language Understanding
- URL: http://arxiv.org/abs/2111.06179v1
- Date: Thu, 11 Nov 2021 12:46:00 GMT
- Title: An Enactivist account of Mind Reading in Natural Language Understanding
- Authors: Peter Wallis and Bruce Edmonds
- Abstract summary: We apply our understanding of the radical enactivist agenda to a classic AI-hard problem.
The Turing Test assumed that the computer could use language and the challenge was to fake human intelligence.
This paper looks again at how natural language understanding might actually work between humans.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In this paper we apply our understanding of the radical enactivist agenda to
a classic AI-hard problem. Natural Language Understanding is a sub-field of AI
research that looked easy to the pioneers. Thus the Turing Test, in its
original form, assumed that the computer could use language and the challenge
was to fake human intelligence. It turned out that playing chess and formal
logic were easy compared to the necessary language skills. The techniques of
good old-fashioned AI (GOFAI) assumed that symbolic representation is the core
of reasoning and that human communication consists of transferring
representations from one mind to another. But on this model one finds that
representations appear in another's mind without appearing in the intermediary
language. People communicate by mind reading, it seems. Systems with speech
interfaces such as Alexa and Siri are of course common, but they are limited.
Rather than adding mind-reading skills, we introduced a "cheat" that enabled
our systems to fake it. The cheat is simple, only slightly interesting to
computer scientists, and not at all interesting to philosophers. However, on
reading about the enactivist idea that we "directly perceive" the intentions
of others, our cheat took on a new light, and in this paper we look again at
how natural language understanding might actually work between humans.
Related papers
- The Hermeneutic Turn of AI: Is the Machine Capable of Interpreting? [0.0]
This article aims to demonstrate how the approach to computing is being disrupted by deep learning (artificial neural networks).
It also addresses the philosophical tradition of hermeneutics to highlight a parallel with this movement and to demystify the idea of human-like AI.
arXiv Detail & Related papers (2024-11-19T13:59:16Z)
- On the consistent reasoning paradox of intelligence and optimal trust in AI: The power of 'I don't know' [79.69412622010249]
Consistent reasoning, which lies at the core of human intelligence, is the ability to handle tasks that are equivalent.
The consistent reasoning paradox (CRP) asserts that consistent reasoning implies fallibility -- in particular, human-like intelligence in AI necessarily comes with human-like fallibility.
arXiv Detail & Related papers (2024-08-05T10:06:53Z)
- Le Nozze di Giustizia. Interactions between Artificial Intelligence, Law, Logic, Language and Computation with some case studies in Traffic Regulations and Health Care [0.0]
An important aim of this paper is to convey some basics of mathematical logic to the legal community working with Artificial Intelligence.
After analysing what AI is, we restrict ourselves to rule-based AI, leaving Neural Networks and Machine Learning aside.
We will see how mathematical logic interacts with legal rule-based AI practice.
arXiv Detail & Related papers (2024-02-09T15:43:31Z)
- What should I say? -- Interacting with AI and Natural Language Interfaces [0.0]
The Human-AI Interaction (HAI) sub-field has emerged from the Human-Computer Interaction (HCI) field and aims to examine this very notion.
Prior research suggests that theory of mind representations are crucial to successful and effortless communication; however, very little is understood about how such representations are established when interacting with AI.
arXiv Detail & Related papers (2024-01-12T05:10:23Z)
- AI for Mathematics: A Cognitive Science Perspective [86.02346372284292]
Mathematics is one of the most powerful conceptual systems developed and used by the human species.
Rapid progress in AI, particularly propelled by advances in large language models (LLMs), has sparked renewed, widespread interest in building such systems.
arXiv Detail & Related papers (2023-10-19T02:00:31Z)
- Understanding Natural Language Understanding Systems. A Critical Analysis [91.81211519327161]
The development of machines that «talk like us», also known as Natural Language Understanding (NLU) systems, is the Holy Grail of Artificial Intelligence (AI).
But never has the trust that we can build «talking machines» been stronger than that engendered by the last generation of NLU systems.
Are we at the dawn of a new era, in which the Grail is finally closer to us?
arXiv Detail & Related papers (2023-03-01T08:32:55Z)
- Human Heuristics for AI-Generated Language Are Flawed [8.465228064780744]
We study how people judge whether verbal self-presentations, one of the most personal and consequential forms of language, were generated by AI.
We experimentally demonstrate that these wordings make human judgment of AI-generated language predictable and manipulable.
We discuss solutions, such as AI accents, to reduce the deceptive potential of language generated by AI.
arXiv Detail & Related papers (2022-06-15T03:18:56Z)
- Emergence of Machine Language: Towards Symbolic Intelligence with Neural Networks [73.94290462239061]
We propose to combine symbolism and connectionism principles by using neural networks to derive a discrete representation.
By designing an interactive environment and task, we demonstrated that machines could generate a spontaneous, flexible, and semantic language.
arXiv Detail & Related papers (2022-01-14T14:54:58Z)
- Introducing the Talk Markup Language (TalkML): Adding a little social intelligence to industrial speech interfaces [0.0]
Natural language understanding is one of the more disappointing failures of AI research.
This paper describes how we have taken ideas from other disciplines and implemented them.
arXiv Detail & Related papers (2021-05-24T14:25:35Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective in clarifying these particular principles is that they could help us build AI systems that benefit from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on such domains as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences arising from its use.