Center-Embedding and Constituency in the Brain and a New
Characterization of Context-Free Languages
- URL: http://arxiv.org/abs/2206.13217v1
- Date: Mon, 27 Jun 2022 12:11:03 GMT
- Title: Center-Embedding and Constituency in the Brain and a New
Characterization of Context-Free Languages
- Authors: Daniel Mitropolsky, Adiba Ejaz, Mirah Shi, Mihalis Yannakakis,
Christos H. Papadimitriou
- Abstract summary: We show that constituency and the processing of dependent sentences can be implemented by neurons and synapses.
Surprisingly, the way we implement center embedding points to a new characterization of context-free languages.
- Score: 2.8932261919131017
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A computational system implemented exclusively through the spiking of neurons
was recently shown capable of syntax, that is, of carrying out the dependency
parsing of simple English sentences. We address two of the most important
questions left open by that work: constituency (the identification of key parts
of the sentence such as the verb phrase) and the processing of dependent
sentences, especially center-embedded ones. We show that these two aspects of
language can also be implemented by neurons and synapses in a way that is
compatible with what is known, or widely believed, about the structure and
function of the language organ. Surprisingly, the way we implement center
embedding points to a new characterization of context-free languages.
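To make the abstract's mention of center embedding concrete, here is a minimal illustrative sketch (not taken from the paper, and not its neuron-and-synapse implementation): center-embedded relative clauses such as "the rat the cat the dog chased bit died" follow the nested pattern NP^n V^n, the classic non-regular language that a finite-state device cannot recognize but a stack-based (pushdown) recognizer can. The function name and token encoding below are my own assumptions for illustration.

```python
def is_center_embedded(tokens):
    """Recognize the toy pattern NP^n V^n (n >= 1) with an explicit stack."""
    stack = []
    i = 0
    # Phase 1: push every noun phrase encountered at the start.
    while i < len(tokens) and tokens[i] == "NP":
        stack.append("NP")
        i += 1
    # Phase 2: each verb must discharge exactly one pending noun phrase.
    while i < len(tokens) and tokens[i] == "V":
        if not stack:
            return False
        stack.pop()
        i += 1
    # Accept only if the whole string was consumed and every NP was matched.
    return i == len(tokens) and not stack and i > 0

# "the rat the cat the dog chased bit died" -> NP NP NP V V V
print(is_center_embedded(["NP", "NP", "NP", "V", "V", "V"]))  # True
print(is_center_embedded(["NP", "NP", "V", "V", "V"]))        # False
```

The explicit stack is only a conventional way to show why such nesting exceeds finite-state power; how the paper realizes the equivalent bookkeeping with neurons and synapses is described in the full text.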
Related papers
- Sharing Matters: Analysing Neurons Across Languages and Tasks in LLMs [70.3132264719438]
We aim to fill the research gap by examining how neuron activation is shared across tasks and languages.
We classify neurons into four distinct categories based on their responses to a specific input across different languages.
Our analysis reveals the following insights: (i) the patterns of neuron sharing are significantly affected by the characteristics of tasks and examples; (ii) neuron sharing does not fully correspond with language similarity; (iii) shared neurons play a vital role in generating responses, especially those shared across all languages.
arXiv Detail & Related papers (2024-06-13T16:04:11Z)
- Probing Brain Context-Sensitivity with Masked-Attention Generation [87.31930367845125]
We use GPT-2 transformers to generate word embeddings that capture a fixed amount of contextual information.
We then tested whether these embeddings could predict fMRI brain activity in humans listening to naturalistic text.
arXiv Detail & Related papers (2023-05-23T09:36:21Z)
- Information-Restricted Neural Language Models Reveal Different Brain Regions' Sensitivity to Semantics, Syntax and Context [87.31930367845125]
We trained a lexical language model, GloVe, and a supra-lexical language model, GPT-2, on a text corpus.
We then assessed to what extent these information-restricted models were able to predict the time-courses of fMRI signal of humans listening to naturalistic text.
Our analyses show that, while most brain regions involved in language are sensitive to both syntactic and semantic variables, the relative magnitudes of these effects vary a lot across these regions.
arXiv Detail & Related papers (2023-02-28T08:16:18Z)
- Toward a realistic model of speech processing in the brain with self-supervised learning [67.7130239674153]
Self-supervised algorithms trained on the raw waveform constitute a promising candidate.
We show that Wav2Vec 2.0 learns brain-like representations with as little as 600 hours of unlabelled speech.
arXiv Detail & Related papers (2022-06-03T17:01:46Z)
- Sentences as connection paths: A neural language architecture of sentence structure in the brain [0.0]
The paper presents a neural language architecture of sentence structure in the brain.
Words remain 'in-situ', hence they are always content-addressable.
Arbitrary and novel sentences (with novel words) can be created with 'neural blackboards' for words and sentences.
arXiv Detail & Related papers (2022-05-19T13:58:45Z)
- Low-Dimensional Structure in the Space of Language Representations is Reflected in Brain Responses [62.197912623223964]
We show a low-dimensional structure where language models and translation models smoothly interpolate between word embeddings, syntactic and semantic tasks, and future word embeddings.
We find that this representation embedding can predict how well each individual feature space maps to human brain responses to natural language stimuli recorded using fMRI.
This suggests that the embedding captures some part of the brain's natural language representation structure.
arXiv Detail & Related papers (2021-06-09T22:59:12Z)
- Zero-Shot Generalization using Intrinsically Motivated Compositional Emergent Protocols [0.0]
We show how compositionality can enable agents to not only interact with unseen objects but also transfer skills from one task to another in a zero-shot setting.
arXiv Detail & Related papers (2021-05-11T14:20:26Z)
- Decomposing lexical and compositional syntax and semantics with deep language models [82.81964713263483]
The activations of language transformers like GPT2 have been shown to linearly map onto brain activity during speech comprehension.
Here, we propose a taxonomy to factorize the high-dimensional activations of language models into four classes: lexical, compositional, syntactic, and semantic representations.
The results highlight two findings. First, compositional representations recruit a more widespread cortical network than lexical ones, and encompass the bilateral temporal, parietal and prefrontal cortices.
arXiv Detail & Related papers (2021-03-02T10:24:05Z)
- Compositional Languages Emerge in a Neural Iterated Learning Model [27.495624644227888]
Compositionality enables natural language to represent complex concepts via a structured combination of simpler ones.
We propose an effective neural iterated learning (NIL) algorithm that, when applied to interacting neural agents, facilitates the emergence of a more structured type of language.
arXiv Detail & Related papers (2020-02-04T15:19:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.