A Unified Theory of Language
- URL: http://arxiv.org/abs/2508.20109v1
- Date: Thu, 14 Aug 2025 11:09:15 GMT
- Title: A Unified Theory of Language
- Authors: Robert Worden
- Abstract summary: A unified theory of language combines a Bayesian cognitive-linguistic model of language processing with the proposal that language evolved by sexual selection for the display of intelligence. The theory accounts for the major facts of language, including its speed and expressivity.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A unified theory of language combines a Bayesian cognitive-linguistic model of language processing with the proposal that language evolved by sexual selection for the display of intelligence. The theory accounts for the major facts of language, including its speed and expressivity, and for data on language diversity, pragmatics, syntax, and semantics. The computational element of the theory is based on Construction Grammars, which give an account of the syntax and semantics of the world's languages using constructions and unification. Two novel elements are added to construction grammars: an account of language pragmatics, and an account of fast, precise language learning. Constructions are represented in the mind as graph-like feature structures. People use slow general inference to understand the first few examples they hear of any construction. After that, the construction is learned as a feature structure and is rapidly applied by unification. All aspects of language (phonology, syntax, semantics, and pragmatics) are seamlessly computed by fast unification; there is no boundary between semantics and pragmatics. This accounts for the major puzzles of pragmatics, and for detailed pragmatic phenomena. Unification is Bayesian maximum-likelihood pattern matching, which gives evolutionary continuity between language processing in the human brain and Bayesian cognition in animal brains. Language is the basis of our mind-reading abilities, our cooperation, our self-esteem, and our emotions; it is the foundation of human culture and society.
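To make the computational claim concrete, the following is a minimal sketch of unification over graph-like feature structures, encoding them as nested Python dicts. The encoding, the feature names, and the toy constructions are illustrative assumptions, not the paper's actual representation.

```python
# A minimal sketch of feature-structure unification, assuming a
# nested-dict encoding; feature names and values are hypothetical.

def unify(a, b):
    """Unify two feature structures.

    Dicts unify feature by feature; shared features must unify
    recursively. Atomic values unify only if equal. Returns the
    merged structure, or None if any feature clashes.
    """
    if isinstance(a, dict) and isinstance(b, dict):
        merged = dict(a)
        for key, value in b.items():
            if key in merged:
                sub = unify(merged[key], value)
                if sub is None:
                    return None          # feature clash: unification fails
                merged[key] = sub
            else:
                merged[key] = value      # feature present only in b
        return merged
    return a if a == b else None         # atoms must match exactly

# A toy lexical construction mixing syntax and semantics in one
# structure, unified with the constraints imposed by an utterance:
construction = {"cat": "V", "sem": {"pred": "give"}, "subj": {"cat": "NP"}}
heard = {"cat": "V", "subj": {"cat": "NP", "agr": "3sg"}}

print(unify(construction, heard))
# -> {'cat': 'V', 'sem': {'pred': 'give'}, 'subj': {'cat': 'NP', 'agr': '3sg'}}
```

On this encoding there is nothing special about pragmatic features: a pragmatics sub-structure would be unified by exactly the same call, which is the sense in which no boundary between semantics and pragmatics is needed.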
Related papers
- How important is language for human-like intelligence?
We argue that language may hold the key to the emergence of both more general AI systems and central aspects of human intelligence.
First, language offers compact representations that make it easier to represent and reason about many abstract concepts.
Second, these compressed representations are the iterated output of collective minds.
arXiv Detail & Related papers (2025-09-19T03:45:44Z)
- On the Thinking-Language Modeling Gap in Large Language Models
We show that there is a significant gap between the modeling of languages and thoughts.
We propose a new prompt technique termed Language-of-Thoughts (LoT) to demonstrate and alleviate this gap.
arXiv Detail & Related papers (2025-05-19T09:31:52Z)
- How Linguistics Learned to Stop Worrying and Love the Language Models
Some argue that the success of LMs obviates the need for studying linguistic theory and structure.
We counter that LMs force us to rethink arguments and ways of thinking that have been foundational in linguistics.
We offer an optimistic take on the relationship between language models and linguistics.
arXiv Detail & Related papers (2025-01-28T16:13:19Z)
- A Complexity-Based Theory of Compositionality
In AI, compositional representations can enable a powerful form of out-of-distribution generalization.
Here, we propose a definition, which we call representational compositionality, that accounts for and extends our intuitions about compositionality.
We show how it unifies disparate intuitions from across the literature in both AI and cognitive science.
arXiv Detail & Related papers (2024-10-18T18:37:27Z)
- Universal Syntactic Structures: Modeling Syntax for Various Natural Languages
We aim to provide an explanation for how the human brain might connect words for sentence formation.
A novel approach to modeling syntactic representation is introduced, potentially showing the existence of universal syntactic structures for all natural languages.
arXiv Detail & Related papers (2023-12-28T20:44:26Z)
- From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought
We propose rational meaning construction, a computational framework for language-informed thinking.
We frame linguistic meaning as a context-sensitive mapping from natural language into a probabilistic language of thought.
We show that LLMs can generate context-sensitive translations that capture pragmatically-appropriate linguistic meanings.
We extend our framework to integrate cognitively-motivated symbolic modules.
arXiv Detail & Related papers (2023-06-22T05:14:00Z)
- Word class representations spontaneously emerge in a deep neural network trained on next word prediction
How do humans learn language, and can the first language be learned at all?
These fundamental questions are still hotly debated.
Here, we train an artificial deep neural network on predicting the next word.
We find that the internal representations of nine-word input sequences cluster according to the word class of the tenth word to be predicted as output.
arXiv Detail & Related papers (2023-02-15T11:02:50Z)
- Comparing Spoken Languages using Paninian System of Sounds and Finite State Machines
We propose an Ecosystem Model for Linguistic Development with Sanskrit at the core.
We represent words across languages as state transitions on the phonetic map and construct corresponding Morphological Finite Automata.
arXiv Detail & Related papers (2023-01-29T15:22:10Z)
- AUTOLEX: An Automatic Framework for Linguistic Exploration
We propose an automatic framework that aims to ease linguists' discovery and extraction of concise descriptions of linguistic phenomena.
Specifically, we apply this framework to extract descriptions for three phenomena: morphological agreement, case marking, and word order.
We evaluate the descriptions with the help of language experts and propose a method for automated evaluation when human evaluation is infeasible.
arXiv Detail & Related papers (2022-03-25T20:37:30Z)
- Emergence of Machine Language: Towards Symbolic Intelligence with Neural Networks
We propose to combine the principles of symbolism and connectionism by using neural networks to derive a discrete representation.
By designing an interactive environment and task, we demonstrate that machines can generate a spontaneous, flexible, and semantic language.
arXiv Detail & Related papers (2022-01-14T14:54:58Z)
- Neural Abstructions: Abstractions that Support Construction for Grounded Language Learning
Leveraging language interactions effectively requires addressing limitations in the two most common approaches to language grounding.
We introduce the idea of neural abstructions: a set of constraints on the inference procedure of a label-conditioned generative model.
We show that with this method a user population is able to build a semantic modification for an open-ended house task in Minecraft.
arXiv Detail & Related papers (2021-07-20T07:01:15Z)
- A Theory of Language Learning
A theory of language learning is described, which uses Bayesian induction of feature structures (scripts) and script functions.
Each word sense in a language is mentally represented by an m-script, a script function which embodies all the syntax and semantics of the word.
M-scripts form a fully-lexicalised unification grammar, which can support adult language.
arXiv Detail & Related papers (2021-06-06T11:06:42Z)
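Both the main abstract and this last entry describe processing and learning as Bayesian maximum-likelihood pattern matching over feature structures. As a rough illustration of what selecting a construction by maximum likelihood could look like, here is a sketch in which each stored pattern is scored by a prior and by how much of the observed input it explains; the inventory, priors, and probability rates are all invented for the example.

```python
# A minimal sketch of maximum-likelihood pattern matching over an
# inventory of stored constructions; all numbers are hypothetical.

import math

def matches(pattern, obs):
    """A flat stand-in for unification: every feature the pattern
    specifies must be present and equal in the observation."""
    return all(obs.get(k) == v for k, v in pattern.items())

def log_posterior(pattern, obs, prior, p_fit=0.9, p_noise=0.2):
    """log prior + log likelihood: features the construction predicts
    are likely to be observed (p_fit); features it leaves unexplained
    are treated as noise (p_noise). Both rates are assumptions."""
    if not matches(pattern, obs):
        return float("-inf")              # clash: likelihood is zero
    explained = sum(1 for k in obs if k in pattern)
    unexplained = len(obs) - explained
    return (math.log(prior)
            + explained * math.log(p_fit)
            + unexplained * math.log(p_noise))

# Hypothetical inventory of learned constructions, with priors:
inventory = [
    ({"cat": "V", "trans": True}, 0.3),
    ({"cat": "V"}, 0.5),
    ({"cat": "N"}, 0.2),
]
obs = {"cat": "V", "trans": True, "agr": "3sg"}
best, _ = max(inventory, key=lambda c: log_posterior(c[0], obs, c[1]))
print(best)   # -> {'cat': 'V', 'trans': True}: best explains the input
```

On this picture, learning a new construction amounts to adding a feature structure with an initial prior to the inventory, once slow general inference over the first few heard examples has produced it.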