A Biologically Plausible Parser
- URL: http://arxiv.org/abs/2108.02189v1
- Date: Wed, 4 Aug 2021 17:27:06 GMT
- Title: A Biologically Plausible Parser
- Authors: Daniel Mitropolsky and Michael J. Collins and Christos H.
Papadimitriou
- Abstract summary: We describe a parser of English effectuated by biologically plausible neurons and synapses.
We demonstrate that this device is capable of correctly parsing reasonably nontrivial sentences.
- Score: 1.8563342761346613
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We describe a parser of English effectuated by biologically plausible neurons
and synapses, and implemented through the Assembly Calculus, a recently
proposed computational framework for cognitive function. We demonstrate that
this device is capable of correctly parsing reasonably nontrivial sentences.
While our experiments entail rather simple sentences in English, our results
suggest that the parser can be extended beyond what we have implemented, to
several directions encompassing much of language. For example, we present a
simple Russian version of the parser, and discuss how to handle recursion,
embedding, and polysemy.
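The abstract describes the parser as built on the Assembly Calculus, whose core primitive is projection: a set of firing neurons in one brain area gives rise to a stable "assembly" of neurons in another area through winner-take-all selection and Hebbian plasticity. The sketch below is a minimal, illustrative simulation of that primitive only, not the authors' implementation; the parameter values (N, K, P, BETA) and the single feed-forward weight matrix are assumptions for illustration, and the parser's actual syntactic-role areas and recurrent connections are not reproduced.

```python
# Minimal sketch (not the paper's code) of the Assembly Calculus "project"
# primitive: neurons in an area are connected by a sparse random graph, only
# the K neurons with the highest synaptic input fire at each step
# (winner-take-all cap), and synapses between co-firing neurons are
# strengthened by a Hebbian factor (1 + BETA). Recurrent synapses inside the
# target area are omitted; all parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, K, P, BETA = 1000, 50, 0.05, 0.10  # neurons per area, cap size, edge probability, plasticity

# Random feed-forward synapses from a source area to a target area.
W = (rng.random((N, N)) < P).astype(float)

def project(stimulus, weights, rounds=10):
    """Fire `stimulus` repeatedly and return the assembly it forms in the target area."""
    winners = np.array([], dtype=int)
    for _ in range(rounds):
        inputs = weights[stimulus].sum(axis=0)           # total synaptic input to each target neuron
        winners = np.argsort(inputs)[-K:]                # cap: only the top-K neurons fire
        weights[np.ix_(stimulus, winners)] *= 1 + BETA   # Hebbian strengthening of used synapses
    return winners

# A word's representation in a lexical area: a fixed random set of K firing neurons.
word_assembly = rng.choice(N, size=K, replace=False)

assembly = project(word_assembly, W)
# After plasticity, one extra round reproduces essentially the same winners,
# i.e. the word is now stably bound to an assembly in the target area.
again = project(word_assembly, W, rounds=1)
print("assembly size:", len(assembly),
      "| overlap after one extra round:", len(np.intersect1d(assembly, again)))
```

In the paper's design, operations of this kind are what bind incoming words to areas representing grammatical roles, from which a parse can be read off; the specific areas, inhibition scheme, and readout are described in the paper itself.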
Related papers
- Integrating Supertag Features into Neural Discontinuous Constituent Parsing [0.0]
Traditional views of constituency demand that constituents consist of adjacent words, but discontinuous constituents are common in languages like German.
Transition-based parsing produces trees given raw text input using supervised learning on large annotated corpora.
arXiv Detail & Related papers (2024-10-11T12:28:26Z)
- Decoupled Vocabulary Learning Enables Zero-Shot Translation from Unseen Languages [55.157295899188476]
Multilingual neural machine translation systems learn to map sentences of different languages into a common representation space.
In this work, we test this hypothesis by zero-shot translating from unseen languages.
We demonstrate that this setup enables zero-shot translation from entirely unseen languages.
arXiv Detail & Related papers (2024-08-05T07:58:58Z)
- A Bionic Natural Language Parser Equivalent to a Pushdown Automaton [0.7783262415147654]
We propose a new bionic natural language parser (BNLP) based on the Assembly Calculus (AC).
In contrast to the original parser, the BNLP can fully handle all regular languages and Dyck languages.
We formally prove that for any PDA, a parser automaton corresponding to the BNLP can always be formed.
arXiv Detail & Related papers (2024-04-26T11:50:15Z)
- Center-Embedding and Constituency in the Brain and a New Characterization of Context-Free Languages [2.8932261919131017]
We show that constituency and the processing of dependent sentences can be implemented by neurons and synapses.
Surprisingly, the way we implement center embedding points to a new characterization of context-free languages.
arXiv Detail & Related papers (2022-06-27T12:11:03Z)
- Penn-Helsinki Parsed Corpus of Early Modern English: First Parsing Results and Analysis [2.8749014299466444]
We present the first parsing results on the Penn-Helsinki Parsed Corpus of Early Modern English (PPCEME), a 1.9 million word treebank.
We describe key features of PPCEME that make it challenging for parsing, including a larger and more varied set of function tags than in the Penn Treebank.
arXiv Detail & Related papers (2021-12-15T23:56:21Z)
- On The Ingredients of an Effective Zero-shot Semantic Parser [95.01623036661468]
We analyze zero-shot learning by paraphrasing training examples of canonical utterances and programs from a grammar.
We propose bridging these gaps using improved grammars, stronger paraphrasers, and efficient learning methods.
Our model achieves strong performance on two semantic parsing benchmarks (Scholar, Geo) with zero labeled data.
arXiv Detail & Related papers (2021-10-15T21:41:16Z)
- Low-Dimensional Structure in the Space of Language Representations is Reflected in Brain Responses [62.197912623223964]
We show a low-dimensional structure where language models and translation models smoothly interpolate between word embeddings, syntactic and semantic tasks, and future word embeddings.
We find that this representation embedding can predict how well each individual feature space maps to human brain responses to natural language stimuli recorded using fMRI.
This suggests that the embedding captures some part of the brain's natural language representation structure.
arXiv Detail & Related papers (2021-06-09T22:59:12Z)
- Zero-Shot Cross-lingual Semantic Parsing [56.95036511882921]
We study cross-lingual semantic parsing as a zero-shot problem without parallel data for 7 test languages.
We propose a multi-task encoder-decoder model to transfer parsing knowledge to additional languages using only English-Logical form paired data.
Our system frames zero-shot parsing as a latent-space alignment problem and finds that pre-trained models can be improved to generate logical forms with minimal cross-lingual transfer penalty.
arXiv Detail & Related papers (2021-04-15T16:08:43Z)
- Intrinsic Probing through Dimension Selection [69.52439198455438]
Most modern NLP systems make use of pre-trained contextual representations that attain astonishingly high performance on a variety of tasks.
Such high performance should not be possible unless some form of linguistic structure inheres in these representations, and a wealth of research has sprung up on probing for it.
In this paper, we draw a distinction between intrinsic probing, which examines how linguistic information is structured within a representation, and the extrinsic probing popular in prior work, which only argues for the presence of such information by showing that it can be successfully extracted.
arXiv Detail & Related papers (2020-10-06T15:21:08Z)
- A Tale of a Probe and a Parser [74.14046092181947]
Measuring what linguistic information is encoded in neural models of language has become popular in NLP.
Researchers approach this enterprise by training "probes" - supervised models designed to extract linguistic structure from another model's output.
One such probe is the structural probe, designed to quantify the extent to which syntactic information is encoded in contextualised word representations.
arXiv Detail & Related papers (2020-05-04T16:57:31Z)
- Compositional Languages Emerge in a Neural Iterated Learning Model [27.495624644227888]
Compositionality enables natural language to represent complex concepts via a structured combination of simpler ones.
We propose an effective neural iterated learning (NIL) algorithm that, when applied to interacting neural agents, facilitates the emergence of a more structured type of language.
arXiv Detail & Related papers (2020-02-04T15:19:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.