Grammar Equations
- URL: http://arxiv.org/abs/2106.07485v1
- Date: Mon, 14 Jun 2021 15:16:09 GMT
- Title: Grammar Equations
- Authors: Bob Coecke and Vincent Wang
- Abstract summary: In this paper we provide wirings within words, in addition to the usual wires between words.
This will enable us to identify grammatical constructs that we expect to be either equal or closely related.
We give a no-go theorem showing that our wirings for words make no sense for preordered monoids.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diagrammatically speaking, grammatical calculi such as pregroups provide
wires between words in order to elucidate their interactions, and this enables
one to verify grammatical correctness of phrases and sentences. In this paper
we also provide wirings within words. This will enable us to identify
grammatical constructs that we expect to be either equal or closely related.
Hence, our work paves the way for a new theory of grammar that provides novel
'grammatical truths'. We give a no-go theorem showing that our wirings for
words make no sense for preordered monoids, the form which grammatical calculi
usually take. Instead, they require diagrams -- or equivalently, (free)
monoidal categories.
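
As a concrete illustration of the wire contractions the abstract refers to, here is a minimal sketch, not code from the paper: it encodes a toy pregroup lexicon (the word types and the (base, adjoint-order) encoding are assumptions made for illustration) and checks whether a word string reduces to the sentence type s.

```python
# Minimal sketch (not from the paper): checking grammaticality by pregroup
# type reduction.  A simple type is a (base, adjoint_order) pair: order 0 is
# the base type, -1 its left adjoint x^l, +1 its right adjoint x^r.  The toy
# lexicon below is an assumption made purely for illustration.
from functools import lru_cache

N, S = ("n", 0), ("s", 0)        # noun and sentence types
Nl, Nr = ("n", -1), ("n", 1)     # left and right adjoints of n

LEXICON = {
    "Alice": (N,),               # noun:             n
    "Bob":   (N,),               # noun:             n
    "likes": (Nr, S, Nl),        # transitive verb:  n^r s n^l
}

@lru_cache(maxsize=None)
def reduces_to_s(types):
    """True iff the tuple of simple types contracts to exactly (s,).

    A contraction deletes an adjacent pair (x, k)(x, k+1), i.e. the wire
    bendings x x^r -> 1 and x^l x -> 1.  All contraction orders are
    explored, so the check is exhaustive rather than greedy.
    """
    if types == (S,):
        return True
    for i in range(len(types) - 1):
        (b1, k1), (b2, k2) = types[i], types[i + 1]
        if b1 == b2 and k2 == k1 + 1:                     # contractible pair
            if reduces_to_s(types[:i] + types[i + 2:]):   # contract and recurse
                return True
    return False

def grammatical(sentence):
    """Concatenate the word types and test reduction to the sentence type."""
    types = tuple(t for word in sentence.split() for t in LEXICON[word])
    return reduces_to_s(types)

print(grammatical("Alice likes Bob"))   # True:  n (n^r s n^l) n  ->  s
print(grammatical("likes Alice Bob"))   # False: n^r s n^l n n is stuck at n^r s n
```

The search tries every order of contractions rather than a single greedy left-to-right pass, since a simple type can in principle contract with either of its neighbours.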
Related papers
- Principles of semantic and functional efficiency in grammatical patterning [1.6267479602370545]
Grammatical features such as number and gender serve two central functions in human languages.
Number and gender encode salient semantic attributes like numerosity and animacy, while also offloading sentence processing cost by predictably linking words together.
Grammars exhibit consistent organizational patterns across diverse languages, invariably rooted in a semantic foundation.
arXiv Detail & Related papers (2024-10-21T10:49:54Z)
- Conjunctive categorial grammars and Lambek grammars with additives [49.1574468325115]
A new family of categorial grammars is proposed, defined by enriching basic categorial grammars with a conjunction operation.
It is also shown that categorial grammars with conjunction can be naturally embedded into the Lambek calculus with conjunction and disjunction operations.
arXiv Detail & Related papers (2024-05-26T18:53:56Z)
- Sparse Logistic Regression with High-order Features for Automatic Grammar Rule Extraction from Treebanks [6.390468088226495]
We propose a new method to extract and explore significant fine-grained grammar patterns from treebanks.
We extract descriptions and rules across different languages for two linguistic phenomena, agreement and word order.
Our method captures both well-known and less well-known significant grammar rules in Spanish, French, and Wolof. (A minimal illustrative sketch of the sparse-regression idea appears after this related-papers list.)
arXiv Detail & Related papers (2024-03-26T09:39:53Z)
- Unsupervised Mapping of Arguments of Deverbal Nouns to Their Corresponding Verbal Labels [52.940886615390106]
Deverbal nouns are nominalized forms of verbs, commonly used in written English texts to describe events or actions, as well as their arguments.
The solutions that do exist for handling arguments of nominalized constructions are based on semantic annotation.
We propose to adopt a more syntactic approach, which maps the arguments of deverbal nouns to the corresponding verbal construction.
arXiv Detail & Related papers (2023-06-24T10:07:01Z)
- Learning grammar with a divide-and-concur neural network [4.111899441919164]
We implement a divide-and-concur iterative projection approach to context-free grammar inference.
Our method requires a relatively small number of discrete parameters, making the inferred grammar directly interpretable.
arXiv Detail & Related papers (2022-01-18T22:42:43Z)
- A Syntax-Guided Grammatical Error Correction Model with Dependency Tree Correction [83.14159143179269]
Grammatical Error Correction (GEC) is a task of detecting and correcting grammatical errors in sentences.
We propose a syntax-guided GEC model (SG-GEC) which adopts the graph attention mechanism to utilize the syntactic knowledge of dependency trees.
We evaluate our model on public benchmarks of GEC task and it achieves competitive results.
arXiv Detail & Related papers (2021-11-05T07:07:48Z)
- Dependency Induction Through the Lens of Visual Perception [81.91502968815746]
We propose an unsupervised grammar induction model that leverages word concreteness and a structural vision-based heuristic to jointly learn constituency-structure and dependency-structure grammars.
Our experiments show that the proposed extension outperforms the current state-of-the-art visually grounded models in constituency parsing even with a smaller grammar size.
arXiv Detail & Related papers (2021-09-20T18:40:37Z)
- VLGrammar: Grounded Grammar Induction of Vision and Language [86.88273769411428]
We study grounded grammar induction of vision and language in a joint learning framework.
We present VLGrammar, a method that uses compound probabilistic context-free grammars (compound PCFGs) to induce the language grammar and the image grammar simultaneously.
arXiv Detail & Related papers (2021-03-24T04:05:08Z)
- Word Frequency Does Not Predict Grammatical Knowledge in Language Models [2.1984302611206537]
We investigate whether there are systematic sources of variation in the language models' accuracy.
We find that certain nouns are systematically understood better than others, an effect which is robust across grammatical tasks and different language models.
We find that a novel noun's grammatical properties can be few-shot learned from various types of training data.
arXiv Detail & Related papers (2020-10-26T19:51:36Z)
- On the Relationships Between the Grammatical Genders of Inanimate Nouns and Their Co-Occurring Adjectives and Verbs [57.015586483981885]
We use large-scale corpora in six different gendered languages.
We find statistically significant relationships between the grammatical genders of inanimate nouns and the verbs that take those nouns as direct objects, indirect objects, and as subjects.
arXiv Detail & Related papers (2020-05-03T22:49:44Z)
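
As a hedged illustration of the sparse-regression idea in the rule-extraction paper listed above (Sparse Logistic Regression with High-order Features), the sketch below fits an L1-regularised logistic regression over hand-conjoined categorical features of head-dependent pairs. The tiny data set and the feature names are invented for illustration and are not the paper's actual treebank features.

```python
# Hedged sketch of sparse (L1) logistic regression for rule-like feature
# selection.  The examples and feature names below are invented; the paper's
# real features come from dependency treebanks.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Each example describes a head-dependent pair with simple and conjoined
# ("high-order") categorical features; the label says whether the pair agrees.
pairs = [
    {"dep_pos": "DET", "head_pos": "NOUN", "dep&head": "DET+NOUN"},
    {"dep_pos": "ADJ", "head_pos": "NOUN", "dep&head": "ADJ+NOUN"},
    {"dep_pos": "ADV", "head_pos": "VERB", "dep&head": "ADV+VERB"},
    {"dep_pos": "DET", "head_pos": "NOUN", "dep&head": "DET+NOUN"},
]
agrees = [1, 1, 0, 1]

vec = DictVectorizer()          # one-hot encodes the categorical features
X = vec.fit_transform(pairs)

# The L1 penalty pushes most coefficients to exactly zero; features that keep
# nonzero weight can be read off as candidate grammar rules.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
clf.fit(X, agrees)

for name, weight in zip(vec.get_feature_names_out(), clf.coef_[0]):
    print(f"{name}: {weight:+.3f}")
```

In the paper's setting the analogous regression would be run per language and per phenomenon (agreement, word order) over treebank-derived features.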
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content and is not responsible for any consequences of its use.