Universal Syntactic Structures: Modeling Syntax for Various Natural
Languages
- URL: http://arxiv.org/abs/2402.01641v1
- Date: Thu, 28 Dec 2023 20:44:26 GMT
- Title: Universal Syntactic Structures: Modeling Syntax for Various Natural
Languages
- Authors: Min K. Kim, Hafu Takero, Sara Fedovik
- Abstract summary: We aim to provide an explanation for how the human brain might connect words for sentence formation.
A novel approach to modeling syntactic representation is introduced, potentially showing the existence of universal syntactic structures for all natural languages.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We aim to provide an explanation for how the human brain might connect words
for sentence formation. A novel approach to modeling syntactic representation
is introduced, potentially showing the existence of universal syntactic
structures for all natural languages. Just as the discovery of DNA's double
helix structure shed light on the inner workings of genetics, we wish to
introduce a basic understanding of how language might work in the human brain.
Such structures could be the brain's way of encoding and decoding knowledge,
and they also offer insight into theories in linguistics, psychology, and
cognitive science. After examining the logic behind universal syntactic
structures and the methodology of the modeling technique, we attempt to analyze
corpora that showcase universality in the language processing of different
natural languages such as English and Korean. Lastly, we discuss the critical
period hypothesis, universal grammar, and a few other assertions about
language, with the aim of advancing our understanding of the human brain.
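The abstract does not spell out the modeling technique itself, so the following is only a hypothetical sketch of what a cross-lingual comparison of syntactic structure might look like: given hand-annotated dependency heads for an English sentence and a roughly parallel Korean one, it compares the shape of the two trees via their multisets of token depths. The sentences, annotations, and overlap measure are illustrative assumptions, not material from the paper.

```python
# Illustrative sketch only: compare the shape of two dependency trees through
# the multiset of node depths, a crude proxy for shared ("universal") structure.
# Sentences and head annotations are hypothetical, not data from the paper.
from collections import Counter

def depths(heads):
    """heads[i] is the index of token i's head, or -1 for the root."""
    def depth(i):
        d = 0
        while heads[i] != -1:
            i = heads[i]
            d += 1
        return d
    return Counter(depth(i) for i in range(len(heads)))

def overlap(a, b):
    """Fraction of depth counts shared between the two trees (0 to 1)."""
    shared = sum((a & b).values())
    return shared / max(sum(a.values()), sum(b.values()))

# English: "The child reads a book."  (root = "reads")
english_heads = [1, 2, -1, 4, 2]   # The->child, child->reads, reads, a->book, book->reads
# Korean (SOV order): "아이가 책을 읽는다."  (root = "읽는다")
korean_heads = [2, 2, -1]          # 아이가->읽는다, 책을->읽는다, 읽는다

print(overlap(depths(english_heads), depths(korean_heads)))  # prints 0.6
```

A real analysis would of course use parsed corpora and a richer structural comparison; the point is only that claims about universal structure can be made operational by quantifying how much tree shape survives translation.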
Related papers
- Causal Graph in Language Model Rediscovers Cortical Hierarchy in Human Narrative Processing [0.0]
Previous studies have demonstrated that the features of language models can be mapped to fMRI brain activity.
This raises the question: is there a commonality between information processing in language models and the human brain?
To estimate information flow patterns in a language model, we examined the causal relationships between different layers.
arXiv Detail & Related papers (2023-11-17T10:09:12Z)
- Structural Priming Demonstrates Abstract Grammatical Representations in Multilingual Language Models [6.845954748361076]
We find evidence for abstract monolingual and crosslingual grammatical representations in large language models.
Results demonstrate that grammatical representations in multilingual language models are not only similar across languages but can also causally influence text produced in different languages (see the sketch after this list).
arXiv Detail & Related papers (2023-11-15T18:39:56Z)
- From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought [124.40905824051079]
We propose rational meaning construction, a computational framework for language-informed thinking.
We frame linguistic meaning as a context-sensitive mapping from natural language into a probabilistic language of thought.
We show that LLMs can generate context-sensitive translations that capture pragmatically-appropriate linguistic meanings.
We extend our framework to integrate cognitively-motivated symbolic modules.
arXiv Detail & Related papers (2023-06-22T05:14:00Z)
- Verbal behavior without syntactic structures: beyond Skinner and Chomsky [0.0]
We must rediscover the extent to which language is like any other human behavior.
Recent psychological, computational, neurobiological, and evolutionary insights into the shaping and structure of behavior may point us toward a new, viable account of language.
arXiv Detail & Related papers (2023-03-11T00:01:21Z)
- Language Cognition and Language Computation -- Human and Machine Language Understanding [51.56546543716759]
Language understanding is a key scientific issue in the fields of cognitive and computer science.
Can a combination of the disciplines offer new insights for building intelligent language models?
arXiv Detail & Related papers (2023-01-12T02:37:00Z)
- Same Neurons, Different Languages: Probing Morphosyntax in Multilingual Pre-trained Models [84.86942006830772]
We conjecture that multilingual pre-trained models can derive language-universal abstractions about grammar.
We conduct the first large-scale empirical study over 43 languages and 14 morphosyntactic categories with a state-of-the-art neuron-level probe.
arXiv Detail & Related papers (2022-05-04T12:22:31Z)
- Model-based analysis of brain activity reveals the hierarchy of language in 305 subjects [82.81964713263483]
A popular approach to decomposing the neural bases of language consists in correlating, across individuals, the brain responses to different stimuli.
Here, we show that a model-based approach can reach equivalent results within subjects exposed to natural stimuli.
arXiv Detail & Related papers (2021-10-12T15:30:21Z)
- Low-Dimensional Structure in the Space of Language Representations is Reflected in Brain Responses [62.197912623223964]
We show a low-dimensional structure where language models and translation models smoothly interpolate between word embeddings, syntactic and semantic tasks, and future word embeddings.
We find that this representation embedding can predict how well each individual feature space maps to human brain responses to natural language stimuli recorded using fMRI.
This suggests that the embedding captures some part of the brain's natural language representation structure.
arXiv Detail & Related papers (2021-06-09T22:59:12Z)
- PIGLeT: Language Grounding Through Neuro-Symbolic Interaction in a 3D World [86.21137454228848]
We factorize PIGLeT into a physical dynamics model and a separate language model.
PIGLeT can read a sentence, simulate neurally what might happen next, and then communicate that result through a literal symbolic representation.
It is able to correctly forecast "what happens next" given an English sentence over 80% of the time, outperforming a 100x larger, text-to-text approach by over 10%.
arXiv Detail & Related papers (2021-06-01T02:32:12Z)
- Learning Music Helps You Read: Using Transfer to Study Linguistic Structure in Language Models [27.91397366776451]
Training LSTMs on latent structure (MIDI music or Java code) improves test performance on natural language.
Experiments on transfer between natural languages, controlling for vocabulary overlap, show that zero-shot performance on a test language is highly correlated with typological similarity to the training language.
arXiv Detail & Related papers (2020-04-30T06:24:03Z)
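The structural priming result listed above (Structural Priming Demonstrates Abstract Grammatical Representations in Multilingual Language Models) lends itself to a small illustration. The sketch below is a hypothetical, minimal version of such a measurement, not the cited paper's actual protocol: it scores the same target sentence under two primes that differ only in syntactic construction, using an off-the-shelf causal language model. The model name, sentences, and scoring details are assumptions.

```python
# Hypothetical sketch of a structural priming measurement, not the cited paper's
# actual protocol. Model name, sentences, and scoring details are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "bigscience/bloom-560m"  # assumption: any multilingual causal LM could be used
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def target_logprob(prime: str, target: str) -> float:
    """Total log-probability the model assigns to `target` when it follows `prime`.

    Assumes the prime's tokenization is unchanged when the target is appended.
    """
    prime_len = tokenizer(prime, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prime + " " + target, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)  # predictions for tokens 1..n-1
    next_tokens = full_ids[0, 1:]
    token_scores = log_probs[torch.arange(next_tokens.numel()), next_tokens]
    return token_scores[prime_len - 1:].sum().item()        # keep only the target's tokens

# Same target, two primes with different dative constructions (DO vs. PO).
target = "The teacher gave the student a book."
prime_do = "The chef handed the waiter a plate."     # double-object prime
prime_po = "The chef handed a plate to the waiter."  # prepositional-object prime

# A positive difference suggests the same-structure prime boosts the target,
# i.e. a priming effect; a crosslingual variant would swap in a prime written
# in another language while keeping the target fixed.
print(target_logprob(prime_do, target) - target_logprob(prime_po, target))
```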
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information presented) and is not responsible for any consequences arising from its use.