How important is language for human-like intelligence?
- URL: http://arxiv.org/abs/2509.15560v1
- Date: Fri, 19 Sep 2025 03:45:44 GMT
- Title: How important is language for human-like intelligence?
- Authors: Gary Lupyan, Hunter Gentry, Martin Zettersten
- Abstract summary: We argue that language may hold the key to the emergence of both more general AI systems and central aspects of human intelligence. First, language offers compact representations that make it easier to represent and reason about many abstract concepts. Second, these compressed representations are the iterated output of collective minds.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We use language to communicate our thoughts. But is language merely the expression of thoughts, which are themselves produced by other, nonlinguistic parts of our minds? Or does language play a more transformative role in human cognition, allowing us to have thoughts that we otherwise could (or would) not have? Recent developments in artificial intelligence (AI) and cognitive science have reinvigorated this old question. We argue that language may hold the key to the emergence of both more general AI systems and central aspects of human intelligence. We highlight two related properties of language that make it such a powerful tool for developing domain-general abilities. First, language offers compact representations that make it easier to represent and reason about many abstract concepts (e.g., exact numerosity). Second, these compressed representations are the iterated output of collective minds. In learning a language, we learn a treasure trove of culturally evolved abstractions. Taken together, these properties mean that a sufficiently powerful learning system exposed to language, whether biological or artificial, learns a compressed model of the world, reverse engineering many of the conceptual and causal structures that support human (and human-like) thought.
Related papers
- A Unified Theory of Language [0.0]
A unified theory of language builds on a Bayesian cognitive linguistic model of language processing.
The theory accounts for the major facts of language, including its speed and expressivity.
It proposes that language evolved by sexual selection for the display of intelligence.
arXiv Detail & Related papers (2025-08-14T11:09:15Z)
- Natural, Artificial, and Human Intelligences [0.046040036610482664]
For the most unique accomplishments of human intelligence, we think that, together with language, there are four essential ingredients, which can be summarised as invention, capacity for complex inference, embodiment, and self-awareness.
arXiv Detail & Related papers (2025-06-02T19:11:49Z)
- A Sentence is Worth a Thousand Pictures: Can Large Language Models Understand Hum4n L4ngu4ge and the W0rld behind W0rds? [2.7342737448775534]
Large Language Models (LLMs) have been linked to claims about human-like linguistic performance.
We analyze the contribution of LLMs as theoretically informative representations of a target cognitive system.
We evaluate the models' ability to see the bigger picture, through top-down feedback from higher levels of processing.
arXiv Detail & Related papers (2023-07-26T18:58:53Z)
- From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought [124.40905824051079]
We propose rational meaning construction, a computational framework for language-informed thinking.
We frame linguistic meaning as a context-sensitive mapping from natural language into a probabilistic language of thought.
We show that LLMs can generate context-sensitive translations that capture pragmatically appropriate linguistic meanings.
We extend our framework to integrate cognitively-motivated symbolic modules.
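As a rough illustration of the kind of pipeline this entry describes (a natural-language sentence mapped to a condition on a small probabilistic model, followed by inference), here is a minimal, hypothetical Python sketch. The `world_model`, `translate`, and `infer` names are illustrative stand-ins under simplified assumptions, not the paper's actual framework or code.

```python
import random

def world_model():
    """Sample a latent world state: the true number of cookies in a jar (0-10)."""
    return random.randint(0, 10)

def translate(sentence):
    """Toy stand-in for the natural-language-to-probabilistic-condition mapping
    that, in the framework described above, an LLM would produce."""
    if sentence == "There are more than six cookies in the jar.":
        return lambda n: n > 6
    if sentence == "The jar is nearly empty.":
        return lambda n: n <= 2
    raise ValueError("sentence not covered by this toy translator")

def infer(condition, num_samples=10_000):
    """Posterior mean of the latent count given the condition, via rejection sampling."""
    accepted = [n for n in (world_model() for _ in range(num_samples)) if condition(n)]
    return sum(accepted) / len(accepted)

if __name__ == "__main__":
    # Expected count given the sentence; roughly 8.5 under the uniform prior above.
    print(infer(translate("There are more than six cookies in the jar.")))
```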
arXiv Detail & Related papers (2023-06-22T05:14:00Z)
- Understanding Natural Language Understanding Systems. A Critical Analysis [91.81211519327161]
The development of machines that «talk like us», also known as Natural Language Understanding (NLU) systems, is the Holy Grail of Artificial Intelligence (AI).
But never has the trust that we can build «talking machines» been stronger than the one engendered by the last generation of NLU systems.
Are we at the dawn of a new era, in which the Grail is finally closer to us?
arXiv Detail & Related papers (2023-03-01T08:32:55Z)
- Language Cognition and Language Computation -- Human and Machine Language Understanding [51.56546543716759]
Language understanding is a key scientific issue in the fields of cognitive and computer science.
Can a combination of the disciplines offer new insights for building intelligent language models?
arXiv Detail & Related papers (2023-01-12T02:37:00Z)
- Emergence of Machine Language: Towards Symbolic Intelligence with Neural Networks [73.94290462239061]
We propose to combine symbolism and connectionism principles by using neural networks to derive a discrete representation.
By designing an interactive environment and task, we demonstrated that machines could generate a spontaneous, flexible, and semantic language.
arXiv Detail & Related papers (2022-01-14T14:54:58Z)
- Building Human-like Communicative Intelligence: A Grounded Perspective [1.0152838128195465]
After making astounding progress in language learning, AI systems seem to be approaching a ceiling that does not reflect important aspects of human communicative capacities.
This paper suggests that the dominant cognitively-inspired AI directions, based on nativist and symbolic paradigms, lack the substantiation and concreteness needed to guide progress in modern AI.
I propose a list of concrete, implementable components for building "grounded" linguistic intelligence.
arXiv Detail & Related papers (2022-01-02T01:43:24Z)
- An Enactivist account of Mind Reading in Natural Language Understanding [0.0]
We apply our understanding of the radical enactivist agenda to a classic AI-hard problem.
The Turing Test assumed that the computer could use language and the challenge was to fake human intelligence.
This paper looks again at how natural language understanding might actually work between humans.
arXiv Detail & Related papers (2021-11-11T12:46:00Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems that benefit from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on such domains as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.