Meaning and understanding in large language models
- URL: http://arxiv.org/abs/2310.17407v1
- Date: Thu, 26 Oct 2023 14:06:14 GMT
- Title: Meaning and understanding in large language models
- Authors: Vladimír Havlík
- Abstract summary: Recent developments in the generative large language models (LLMs) of artificial intelligence have led to the belief that traditional philosophical assumptions about machine understanding of language need to be revised.
This article critically evaluates the prevailing tendency to regard machine language performance as mere syntactic manipulation and the simulation of understanding, which is only partial and very shallow, without sufficient referential grounding in the world.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Can a machine understand the meanings of natural language? Recent
developments in the generative large language models (LLMs) of artificial
intelligence have led to the belief that traditional philosophical assumptions
about machine understanding of language need to be revised. This article
critically evaluates the prevailing tendency to regard machine language
performance as mere syntactic manipulation and the simulation of understanding,
which is only partial and very shallow, without sufficient referential
grounding in the world. The aim is to highlight the conditions crucial to
attributing natural language understanding to state-of-the-art LLMs, where it
can be legitimately argued that LLMs not only use syntax but also semantics,
their understanding not being simulated but duplicated; and determine how they
ground the meanings of linguistic expressions.
Related papers
- Neurosymbolic Graph Enrichment for Grounded World Models [47.92947508449361]
We present a novel approach to enhancing and exploiting the reactive capabilities of LLMs to address complex problems.
We create a multimodal, knowledge-augmented formal representation of meaning that combines the strengths of large language models with structured semantic representations.
By bridging the gap between unstructured language models and formal semantic structures, our method opens new avenues for tackling intricate problems in natural language understanding and reasoning.
arXiv Detail & Related papers (2024-11-19T17:23:55Z)
- Large Models of What? Mistaking Engineering Achievements for Human Linguistic Agency [0.11510009152620666]
We argue that claims regarding linguistic capabilities of Large Language Models (LLMs) are based on at least two unfounded assumptions.
Language completeness assumes that a distinct and complete thing such as 'a natural language' exists.
The assumption of data completeness relies on the belief that a language can be quantified and wholly captured by data.
arXiv Detail & Related papers (2024-07-11T18:06:01Z)
- Towards Logically Consistent Language Models via Probabilistic Reasoning [14.317886666902822]
Large language models (LLMs) are a promising avenue for natural language understanding and generation tasks.
However, LLMs are prone to generating non-factual information and to contradicting themselves when prompted to reason about beliefs of the world.
We introduce a training objective that teaches an LLM to be consistent with external knowledge in the form of a set of facts and rules.
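As a rough illustration of what such a consistency-oriented objective might look like (a minimal sketch under assumed rule encoding and loss form, not the paper's actual objective), one can penalize the model whenever its belief in a conclusion falls below its belief in a premise that entails it:

```python
import torch

def implication_consistency_loss(p_premise: torch.Tensor,
                                 p_conclusion: torch.Tensor) -> torch.Tensor:
    """Toy penalty for violating rules of the form premise -> conclusion.

    If the model assigns probability p_premise to the premise, consistency
    requires p_conclusion >= p_premise; any shortfall is penalized.
    Illustrative only -- not the objective introduced in the paper.
    """
    violation = torch.clamp(p_premise - p_conclusion, min=0.0)
    return (violation ** 2).mean()

# Example: the model believes "X is a penguin" with p=0.9 but "X is a bird"
# with only p=0.4, violating the rule penguin(X) -> bird(X).
p_premise = torch.tensor([0.9])
p_conclusion = torch.tensor([0.4])
print(implication_consistency_loss(p_premise, p_conclusion).item())  # ~0.25
```

In practice a term like this would be added to the usual language-modeling loss, with the premise and conclusion probabilities read off the model's own predictions for verbalized facts.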
arXiv Detail & Related papers (2024-04-19T12:23:57Z)
- The Quo Vadis of the Relationship between Language and Large Language Models [3.10770247120758]
The capabilities of Large Language Models (LLMs) have encouraged their adoption as scientific models of language.
We identify the most important theoretical and empirical risks brought about by the adoption of scientific models that lack transparency.
We conclude that, at their current stage of development, LLMs hardly offer any explanations for language.
arXiv Detail & Related papers (2023-10-17T10:54:24Z)
- From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought [124.40905824051079]
We propose rational meaning construction, a computational framework for language-informed thinking.
We frame linguistic meaning as a context-sensitive mapping from natural language into a probabilistic language of thought.
We show that LLMs can generate context-sensitive translations that capture pragmatically-appropriate linguistic meanings.
We extend our framework to integrate cognitively-motivated symbolic modules.
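As a concrete, purely illustrative sketch of this kind of mapping (the sentence, the priors, and the rejection-sampling inference below are assumptions, not the framework's actual probabilistic language of thought):

```python
import random

# Hypothetical translation of "Alice is tall; is she taller than Bob?"
# into a tiny generative world model, a condition, and a query.

def world_model():
    # Prior beliefs about heights in cm (illustrative numbers only).
    alice = random.gauss(170, 10)
    bob = random.gauss(170, 10)
    return alice, bob

def query(num_samples: int = 100_000) -> float:
    accepted = taller = 0
    for _ in range(num_samples):
        alice, bob = world_model()
        if alice > 180:                 # condition: "Alice is tall"
            accepted += 1
            taller += alice > bob       # query: "Is Alice taller than Bob?"
    return taller / accepted

print(f"P(Alice taller than Bob | Alice is tall) ~ {query():.2f}")
```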
arXiv Detail & Related papers (2023-06-22T05:14:00Z)
- Large Language Models are In-Context Semantic Reasoners rather than Symbolic Reasoners [75.85554779782048]
Large Language Models (LLMs) have excited the natural language and machine learning community over recent years.
Despite numerous successful applications, the underlying mechanism of such in-context capabilities still remains unclear.
In this work, we hypothesize that the learned semantics of language tokens do most of the heavy lifting during the reasoning process.
arXiv Detail & Related papers (2023-05-24T07:33:34Z)
- On the Computation of Meaning, Language Models and Incomprehensible Horrors [0.0]
We integrate foundational theories of meaning with a mathematical formalism of artificial general intelligence (AGI).
Our findings shed light on the relationship between meaning and intelligence, and how we can build machines that comprehend and intend meaning.
arXiv Detail & Related papers (2023-04-25T09:41:00Z)
- ChatABL: Abductive Learning via Natural Language Interaction with ChatGPT [72.83383437501577]
Large language models (LLMs) have recently demonstrated significant potential in mathematical abilities.
LLMs currently have difficulty in bridging perception, language understanding and reasoning capabilities.
This paper presents a novel method for integrating LLMs into the abductive learning framework.
arXiv Detail & Related papers (2023-04-21T16:23:47Z)
- Transparency Helps Reveal When Language Models Learn Meaning [71.96920839263457]
Our systematic experiments with synthetic data reveal that, with languages where all expressions have context-independent denotations, both autoregressive and masked language models learn to emulate semantic relations between expressions.
Turning to natural language, our experiments with a specific phenomenon -- referential opacity -- add to the growing body of evidence that current language models do not represent natural language semantics well.
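One way to picture such a synthetic setup (a hypothetical construction, not the paper's dataset) is a toy language whose expressions have fixed, context-independent denotations, so that the ground-truth semantic relations a model should emulate can be computed exactly:

```python
import itertools
import random

# Toy language: arithmetic expressions whose denotation is simply their value,
# independent of any surrounding context.
def random_expr(depth: int = 2) -> str:
    if depth == 0:
        return str(random.randint(0, 9))
    op = random.choice(["+", "*"])
    return f"({random_expr(depth - 1)}{op}{random_expr(depth - 1)})"

def denotation(expr: str) -> int:
    # Safe here: generated strings contain only digits, parentheses, + and *.
    return eval(expr)

# Ground-truth semantic relation a trained LM could be probed against:
# two expressions are equivalent iff their denotations coincide.
exprs = [random_expr() for _ in range(5)]
for a, b in itertools.combinations(exprs, 2):
    relation = "==" if denotation(a) == denotation(b) else "!="
    print(a, relation, b)
```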
arXiv Detail & Related papers (2022-10-14T02:35:19Z)
- Semantics-Aware Inferential Network for Natural Language Understanding [79.70497178043368]
We propose a Semantics-Aware Inferential Network (SAIN) for natural language understanding.
Taking explicit contextualized semantics as a complementary input, the inferential module of SAIN enables a series of reasoning steps over semantic clues.
Our model achieves significant improvement on 11 tasks including machine reading comprehension and natural language inference.
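A generic sketch of feeding explicit semantic annotations alongside contextual token representations (the layer sizes, label inventory, and fusion scheme are illustrative assumptions, not the actual SAIN architecture):

```python
import torch
import torch.nn as nn

class SemanticsAugmentedEncoder(nn.Module):
    """Toy fusion of contextual token states with explicit semantic-role labels.

    Illustrative sketch only -- not the actual SAIN architecture.
    """

    def __init__(self, hidden: int = 768, num_labels: int = 32):
        super().__init__()
        self.label_emb = nn.Embedding(num_labels, hidden)  # one vector per SRL tag
        self.fuse = nn.Linear(2 * hidden, hidden)

    def forward(self, token_states: torch.Tensor, srl_labels: torch.Tensor) -> torch.Tensor:
        # token_states: (batch, seq, hidden) from any pretrained encoder
        # srl_labels:   (batch, seq) integer semantic-role tag per token
        sem = self.label_emb(srl_labels)
        return torch.tanh(self.fuse(torch.cat([token_states, sem], dim=-1)))

fused = SemanticsAugmentedEncoder()(torch.randn(2, 5, 768), torch.randint(0, 32, (2, 5)))
print(fused.shape)  # torch.Size([2, 5, 768])
```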
arXiv Detail & Related papers (2020-04-28T07:24:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.