Is it possible not to cheat on the Turing Test: Exploring the potential
and challenges for true natural language 'understanding' by computers
- URL: http://arxiv.org/abs/2206.14672v2
- Date: Fri, 1 Jul 2022 11:22:50 GMT
- Authors: Lize Alberts
- Abstract summary: The area of natural language understanding in artificial intelligence claims to have been making great strides.
A comprehensive, interdisciplinary overview of current approaches and remaining challenges is yet to be carried out.
I unite all of these perspectives to unpack the challenges involved in reaching true (human-like) language understanding.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent hype surrounding the increasing sophistication of language processing
models has renewed optimism regarding machines achieving a human-like command
of natural language. The area of natural language understanding in artificial
intelligence claims to have been making great strides; however, the lack of
conceptual clarity in how 'understanding' is used in this and other disciplines
has made it difficult to discern how close we actually are. A
comprehensive, interdisciplinary overview of current approaches and remaining
challenges is yet to be carried out. Beyond linguistic knowledge, this requires
considering our species-specific capabilities to categorize, memorize, label
and communicate our (sufficiently similar) embodied and situated experiences.
Moreover, gauging the practical constraints requires critically analyzing the
technical capabilities of current models, as well as deeper philosophical
reflection on theoretical possibilities and limitations. In this paper, I unite
all of these perspectives -- the philosophical, cognitive-linguistic, and
technical -- to unpack the challenges involved in reaching true (human-like)
language understanding. By unpacking the theoretical assumptions inherent in
current approaches, I hope to illustrate how far we actually are from achieving
this goal, if indeed it is the goal.
Related papers
- Open Problems in Mechanistic Interpretability [61.44773053835185]
Mechanistic interpretability aims to understand the computational mechanisms underlying neural networks' capabilities.
Despite recent progress toward these goals, there are many open problems in the field that require solutions.
arXiv Detail & Related papers (2025-01-27T20:57:18Z)
- Machines of Meaning [0.0]
We discuss the challenges in the specification of "machines of meaning".
We highlight the need for detachment from anthropocentrism in the study of machines of meaning.
We propose a view of "meaning" to facilitate the discourse around approaches such as neural language models.
arXiv Detail & Related papers (2024-12-10T23:23:28Z)
- Toward Transdisciplinary Approaches to Audio Deepfake Discernment [0.0]
This perspective calls for scholars across disciplines to address the challenge of audio deepfake detection and discernment.
We see the promising potential in recent transdisciplinary work that incorporates linguistic knowledge into AI approaches.
arXiv Detail & Related papers (2024-11-08T20:59:25Z)
- Expressivity and Speech Synthesis [51.75420054449122]
We outline the methodological advances that brought us so far and sketch out the ongoing efforts to reach that coveted next level of artificial expressivity.
We also discuss the societal implications coupled with rapidly advancing expressive speech synthesis (ESS) technology.
arXiv Detail & Related papers (2024-04-30T08:47:24Z)
- A Survey on Brain-Inspired Deep Learning via Predictive Coding [85.93245078403875]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought [124.40905824051079]
We propose rational meaning construction, a computational framework for language-informed thinking.
We frame linguistic meaning as a context-sensitive mapping from natural language into a probabilistic language of thought.
We show that LLMs can generate context-sensitive translations that capture pragmatically-appropriate linguistic meanings.
We extend our framework to integrate cognitively-motivated symbolic modules.
arXiv Detail & Related papers (2023-06-22T05:14:00Z)
- On the Computation of Meaning, Language Models and Incomprehensible Horrors [0.0]
We integrate foundational theories of meaning with a mathematical formalism of artificial general intelligence (AGI).
Our findings shed light on the relationship between meaning and intelligence, and how we can build machines that comprehend and intend meaning.
arXiv Detail & Related papers (2023-04-25T09:41:00Z)
- Beyond Interpretable Benchmarks: Contextual Learning through Cognitive and Multimodal Perception [0.0]
This study contends that the Turing Test is misinterpreted as an attempt to anthropomorphize computer systems.
It emphasizes tacit learning as a cornerstone of general-purpose intelligence, despite its lack of overt interpretability.
arXiv Detail & Related papers (2022-12-04T08:30:04Z)
- The Debate Over Understanding in AI's Large Language Models [0.18275108630751835]
We survey a current, heated debate in the AI research community on whether large pre-trained language models can be said to "understand" language.
We argue that a new science of intelligence can be developed that will provide insight into distinct modes of understanding.
arXiv Detail & Related papers (2022-10-14T17:04:29Z)
- Imagination-Augmented Natural Language Understanding [71.51687221130925]
We introduce an Imagination-Augmented Cross-modal Encoder (iACE) to solve natural language understanding tasks.
iACE enables visual imagination with external knowledge transferred from powerful generative and pre-trained vision-and-language models.
Experiments on GLUE and SWAG show that iACE achieves consistent improvement over visually-supervised pre-trained models.
arXiv Detail & Related papers (2022-04-18T19:39:36Z)
- Emergence of Machine Language: Towards Symbolic Intelligence with Neural Networks [73.94290462239061]
We propose to combine symbolism and connectionism principles by using neural networks to derive a discrete representation.
By designing an interactive environment and task, we demonstrate that machines can generate a spontaneous, flexible, and semantic language.
arXiv Detail & Related papers (2022-01-14T14:54:58Z)