Sentences as connection paths: A neural language architecture of
sentence structure in the brain
- URL: http://arxiv.org/abs/2206.01725v1
- Date: Thu, 19 May 2022 13:58:45 GMT
- Title: Sentences as connection paths: A neural language architecture of
sentence structure in the brain
- Authors: Frank van der Velde
- Abstract summary: Article presents a neural language architecture of sentence structure in the brain.
Words remain 'in-situ', hence they are always content-addressable.
Arbitrary and novel sentences (with novel words) can be created with 'neural blackboards' for words and sentences.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This article presents a neural language architecture of sentence structure in
the brain, in which sentences are temporal connection paths that interconnect
neural structures underlying their words. Words remain 'in-situ', hence they
are always content-addressable. Arbitrary and novel sentences (with novel
words) can be created with 'neural blackboards' for words and sentences. Hence,
the unlimited productivity of natural language can be achieved with a 'fixed',
small-world-like network structure. The article focuses on the neural
blackboard for sentences. The architecture uses only one 'connection matrix'
for binding all structural relations between words in sentences. Its ability to
represent arbitrary (English) sentences is discussed in detail, based on a
comprehensive analysis of them. The architecture simulates intra-cranial brain
activity observed during sentence processing and fMRI observations related to
sentence complexity and ambiguity. The simulations indicate that the observed
effects relate to global control over the architecture, not to the sentence
structures involved, which predicts that activity differences related to
complexity and ambiguity increase with comprehension capacity. Other aspects
discussed are the 'intrinsic' sentence structures provided by connection paths
and their relation to scope and inflection, the use of a dependency parser for
control of binding, long-distance dependencies and gaps, question answering,
ambiguity resolution based on backward processing without explicit
backtracking, garden paths, and performance difficulties related to embeddings.
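As a rough illustration of the binding mechanism the abstract describes (in-situ word assemblies bound to structure assemblies in a sentence blackboard through a single connection matrix, with sentences as connection paths), the Python sketch below is a toy analogue only: the assembly names, the agent/theme slots, and the `bind`/`neighbours` helpers are illustrative assumptions, not the paper's actual architecture or code.

```python
# Toy sketch (an assumption, not the paper's implementation): word assemblies
# stay "in-situ" while one connection matrix records temporary bindings
# between word assemblies and blackboard structure assemblies. A sentence is
# then a connection path through the matrix, not a copy of the words.
import numpy as np

WORDS = ["cat", "sees", "dog"]                       # in-situ word assemblies
STRUCT = ["N1", "V1", "N2", "S1-agent", "S1-theme"]  # blackboard structure assemblies

POOL = WORDS + STRUCT
conn = np.zeros((len(POOL), len(POOL)), dtype=bool)  # one matrix for all bindings

def bind(a, b):
    """Activate the (symmetric) connection between two assemblies."""
    i, j = POOL.index(a), POOL.index(b)
    conn[i, j] = conn[j, i] = True

def neighbours(name):
    """Assemblies directly connected to `name` in the connection matrix."""
    return [POOL[j] for j in np.flatnonzero(conn[POOL.index(name)])]

# Encode "cat sees dog" as a connection path through the blackboard:
bind("cat", "N1");  bind("N1", "S1-agent")
bind("sees", "V1"); bind("V1", "S1-agent"); bind("V1", "S1-theme")
bind("dog", "N2");  bind("N2", "S1-theme")

# Question answering by following the path: who is the agent of "sees"?
agent_slot = [n for n in neighbours("V1") if n.endswith("agent")][0]
noun_assembly = [n for n in neighbours(agent_slot) if n.startswith("N")][0]
print([w for w in neighbours(noun_assembly) if w in WORDS])  # -> ['cat']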
Related papers
- Measuring Meaning Composition in the Human Brain with Composition Scores from Large Language Models [53.840982361119565]
The Composition Score is a novel model-based metric designed to quantify the degree of meaning composition during sentence comprehension.
Experimental findings show that this metric correlates with brain clusters associated with word frequency, structural processing, and general sensitivity to words.
arXiv Detail & Related papers (2024-03-07T08:44:42Z)
- Anaphoric Structure Emerges Between Neural Networks [3.0518581575184225]
Pragmatics is core to natural language, enabling speakers to communicate efficiently with structures like ellipsis and anaphora.
Despite potential to introduce ambiguity, anaphora is ubiquitous across human language.
We show that languages with anaphoric structures are learnable by neural networks.
arXiv Detail & Related papers (2023-08-15T18:34:26Z)
- Information-Restricted Neural Language Models Reveal Different Brain Regions' Sensitivity to Semantics, Syntax and Context [87.31930367845125]
We trained a lexical language model, Glove, and a supra-lexical language model, GPT-2, on a text corpus.
We then assessed to what extent these information-restricted models were able to predict the time-courses of fMRI signal of humans listening to naturalistic text.
Our analyses show that, while most brain regions involved in language are sensitive to both syntactic and semantic variables, the relative magnitudes of these effects vary a lot across these regions.
arXiv Detail & Related papers (2023-02-28T08:16:18Z)
- Center-Embedding and Constituency in the Brain and a New Characterization of Context-Free Languages [2.8932261919131017]
We show that constituency and the processing of dependent sentences can be implemented by neurons and synapses.
Surprisingly, the way we implement center embedding points to a new characterization of context-free languages.
arXiv Detail & Related papers (2022-06-27T12:11:03Z)
- Low-Dimensional Structure in the Space of Language Representations is Reflected in Brain Responses [62.197912623223964]
We show a low-dimensional structure where language models and translation models smoothly interpolate between word embeddings, syntactic and semantic tasks, and future word embeddings.
We find that this representation embedding can predict how well each individual feature space maps to human brain responses to natural language stimuli recorded using fMRI.
This suggests that the embedding captures some part of the brain's natural language representation structure.
arXiv Detail & Related papers (2021-06-09T22:59:12Z)
- Compositional Processing Emerges in Neural Networks Solving Math Problems [100.80518350845668]
Recent progress in artificial neural networks has shown that when large models are trained on enough linguistic data, grammatical structure emerges in their representations.
We extend this work to the domain of mathematical reasoning, where it is possible to formulate precise hypotheses about how meanings should be composed.
Our work shows that neural networks are not only able to infer something about the structured relationships implicit in their training data, but can also deploy this knowledge to guide the composition of individual meanings into composite wholes.
arXiv Detail & Related papers (2021-05-19T07:24:42Z)
- Seeing Both the Forest and the Trees: Multi-head Attention for Joint Classification on Different Compositional Levels [15.453888735879525]
In natural languages, words are used in association to construct sentences.
We design a deep neural network architecture that explicitly wires lower and higher linguistic components.
We show that our model, MHAL, learns to simultaneously solve them at different levels of granularity.
arXiv Detail & Related papers (2020-11-01T10:44:46Z)
- Compositional Explanations of Neurons [52.71742655312625]
We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts.
We use this procedure to answer several questions on interpretability in models for vision and natural language processing.
arXiv Detail & Related papers (2020-06-24T20:37:05Z)
- A Deep Neural Framework for Contextual Affect Detection [51.378225388679425]
A short and simple text carrying no emotion can represent strong emotions when read along with its context.
We propose a Contextual Affect Detection framework which learns the inter-dependence of words in a sentence.
arXiv Detail & Related papers (2020-01-28T05:03:15Z)