Higher-Order DisCoCat (Peirce-Lambek-Montague semantics)
- URL: http://arxiv.org/abs/2311.17813v1
- Date: Wed, 29 Nov 2023 17:04:15 GMT
- Title: Higher-Order DisCoCat (Peirce-Lambek-Montague semantics)
- Authors: Alexis Toumi and Giovanni de Felice
- Abstract summary: We propose a new definition of higher-order DisCoCat (categorical compositional distributional) models.
Our models can be seen as a variant of Montague semantics based on a lambda calculus where the primitives act on string diagrams rather than logical formulae.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new definition of higher-order DisCoCat (categorical
compositional distributional) models where the meaning of a word is not a
diagram, but a diagram-valued higher-order function. Our models can be seen as
a variant of Montague semantics based on a lambda calculus where the primitives
act on string diagrams rather than logical formulae. As a special case, we show
how to translate from the Lambek calculus into Peirce's system beta for
first-order logic. This allows us to give a purely diagrammatic treatment of
higher-order and non-linear processes in natural language semantics: adverbs,
prepositions, negation and quantifiers. The theoretical definition presented in
this article comes with a proof-of-concept implementation in DisCoPy, the
Python library for string diagrams.
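To make this concrete, here is the classic first-order DisCoCat example, where a sentence's meaning is a string diagram, together with a toy higher-order word given as a diagram-valued Python function in the spirit of the paper. This is a minimal sketch assuming DisCoPy 0.x-style top-level imports (in DisCoPy 1.x the same names live under `discopy.grammar.pregroup`); the adverb `clearly` below is a hypothetical illustration, not the paper's actual construction.

```python
# Minimal sketch, assuming DisCoPy 0.x-style top-level imports;
# in DisCoPy 1.x the same names live under discopy.grammar.pregroup.
from discopy import Ty, Word, Cup, Box, Id

n, s = Ty('n'), Ty('s')                 # pregroup types: noun, sentence
Alice, Bob = Word('Alice', n), Word('Bob', n)
loves = Word('loves', n.r @ s @ n.l)    # transitive verb type

# First-order DisCoCat: the meaning of a sentence is a string diagram.
sentence = Alice @ loves @ Bob >> Cup(n, n.r) @ Id(s) @ Cup(n.l, n)

# Higher-order DisCoCat, toy version: a word as a *diagram-valued function*
# acting on the verb's diagram. Hypothetical illustration only.
def clearly(verb):
    return verb >> Box('clearly', verb.cod, verb.cod)

modified = Alice @ clearly(loves) @ Bob >> Cup(n, n.r) @ Id(s) @ Cup(n.l, n)
modified.draw()                         # rendering requires matplotlib
```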
Related papers
- Meaning Representations from Trajectories in Autoregressive Models
We propose to extract meaning representations from autoregressive language models by considering the distribution of all possible trajectories extending an input text.
This strategy is prompt-free, does not require fine-tuning, and is applicable to any pre-trained autoregressive model.
We empirically show that the representations obtained from large models align well with human annotations, outperform other zero-shot and prompt-free methods on semantic similarity tasks, and can be used to solve more complex entailment and containment tasks that standard embeddings cannot handle.
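A rough sketch of the trajectory idea, assuming GPT-2 via Hugging Face transformers; the sampling budget and the symmetrized scoring below are illustrative choices, not the paper's exact algorithm.

```python
# Sketch: represent a text by sampled continuations ("trajectories") and
# compare two texts by cross-scoring each other's trajectories.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sample_trajectories(prompt, n=8, max_new_tokens=20):
    """Sample n continuations of the prompt; return only the new tokens."""
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs, do_sample=True, num_return_sequences=n,
        max_new_tokens=max_new_tokens, pad_token_id=tokenizer.eos_token_id)
    return [out[inputs["input_ids"].shape[1]:] for out in outputs]

def log_likelihood(prompt, continuation_ids):
    """Log-probability of a continuation given a (possibly different) prompt."""
    prompt_ids = tokenizer(prompt, return_tensors="pt")["input_ids"][0]
    ids = torch.cat([prompt_ids, continuation_ids]).unsqueeze(0)
    with torch.no_grad():
        logits = model(ids).logits[0, :-1].log_softmax(-1)
    targets = ids[0, 1:]
    start = len(prompt_ids) - 1  # score continuation positions only
    return logits[start:].gather(1, targets[start:, None]).sum().item()

def similarity(a, b):
    """Symmetrized score: how well each text explains the other's trajectories."""
    trajs_a, trajs_b = sample_trajectories(a), sample_trajectories(b)
    ab = sum(log_likelihood(b, t) for t in trajs_a) / len(trajs_a)
    ba = sum(log_likelihood(a, t) for t in trajs_b) / len(trajs_b)
    return 0.5 * (ab + ba)
```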
arXiv Detail & Related papers (2023-10-23T04:35:58Z)
- Hierarchical Phrase-based Sequence-to-Sequence Learning
We describe a neural transducer that maintains the flexibility of standard sequence-to-sequence (seq2seq) models while incorporating hierarchical phrases as a source of inductive bias during training and as explicit constraints during inference.
Our approach trains two models: a discriminative parser based on a bracketing grammar whose derivation tree hierarchically aligns source and target phrases, and a neural seq2seq model that learns to translate the aligned phrases one-by-one.
arXiv Detail & Related papers (2022-11-15T05:22:40Z)
- Novel Ordering-based Approaches for Causal Structure Learning in the Presence of Unobserved Variables
We advocate for a novel type of order called removable order (r-order), as r-orders are advantageous over c-orders for structure learning.
We evaluate the performance and the scalability of our proposed approaches on both real-world and randomly generated networks.
arXiv Detail & Related papers (2022-08-14T23:09:55Z)
- Few-Shot Semantic Parsing with Language Models Trained On Code
We find that Codex performs better at semantic parsing than equivalent GPT-3 models.
We find that, unlike GPT-3, Codex performs similarly when targeting meaning representations directly, perhaps because the meaning representations used in semantic parsing are structured similarly to code.
arXiv Detail & Related papers (2021-12-16T08:34:06Z)
- Rationales for Sequential Predictions
Sequence models are a critical component of modern NLP systems, but their predictions are difficult to explain.
We consider model explanations through rationales: subsets of the context that can explain individual model predictions.
The best rationale is the smallest subset of input tokens that would yield the same prediction as the full sequence; we propose an efficient greedy algorithm to approximate this objective.
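A minimal sketch of one plausible greedy procedure under these assumptions: `predict_proba` is a hypothetical stand-in for the model restricted to a token subset, and the stopping rule below simplifies the check that the rationale recovers the full-context prediction.

```python
# Greedy rationale search (sketch): repeatedly add the context token whose
# inclusion most raises the probability of the target prediction.
def greedy_rationale(context_ids, target_id, predict_proba, threshold=0.5):
    rationale, remaining = [], list(range(len(context_ids)))
    while remaining:
        best = max(remaining,
                   key=lambda i: predict_proba(
                       [context_ids[j] for j in sorted(rationale + [i])],
                       target_id))
        rationale.append(best)
        remaining.remove(best)
        p = predict_proba([context_ids[j] for j in sorted(rationale)], target_id)
        if p >= threshold:  # rationale suffices to recover the prediction
            break
    return sorted(rationale)
```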
arXiv Detail & Related papers (2021-09-14T01:25:15Z)
- Superposition with Lambdas
We design a superposition calculus for a clausal fragment of extensional polymorphic higher-order logic that includes anonymous functions but excludes Booleans.
The inference rules work on $\beta\eta$-equivalence classes of $\lambda$-terms and rely on higher-order unification to achieve refutational completeness.
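For reference, $\beta\eta$-equivalence is the congruence generated by the usual conversion rules:

```latex
(\lambda x.\, t)\, u =_{\beta} t[u/x]
\qquad
\lambda x.\, f\, x =_{\eta} f \quad (x \notin \mathrm{FV}(f))
```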
arXiv Detail & Related papers (2021-01-31T13:53:17Z)
- On embedding Lambek calculus into commutative categorial grammars
We consider tensor grammars, which are an example of "commutative" grammars, based on classical (rather than intuitionistic) linear logic.
The basic ingredients are tensor terms, which can be seen as encoding and generalizing proof-nets.
arXiv Detail & Related papers (2020-05-20T14:08:56Z)
- Superposition for Lambda-Free Higher-Order Logic
We introduce refutationally complete superposition calculi for intensional and extensional clausal $\lambda$-free higher-order logic.
The calculi are parameterized by a term order that need not be fully monotonic, making it possible to employ the $\lambda$-free higher-order lexicographic path and Knuth-Bendix orders.
arXiv Detail & Related papers (2020-05-05T12:10:21Z)
- Translation of Lambek Categorial Grammars into Abstract Categorial Grammars
This internship report demonstrates that every Lambek Grammar can be expressed, not entirely but efficiently, in Abstract Categorial Grammars (ACGs).
The main idea is to transform the type rewriting system of LGs into that of Context-Free Grammars (CFG) by erasing introduction and elimination rules and generating enough axioms so that the cut rule suffices.
Although the underlying algorithm was not fully implemented, this proof provides another argument in favour of the relevance of ACGs in Natural Language Processing.
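As a toy illustration of the axioms-plus-cut idea (illustrative, not taken from the report): for a transitive sentence, instead of applying the elimination rules one can posit the two instantiated sequents as axioms and glue them with a single cut:

```latex
\frac{(np\backslash s)/np,\; np \vdash np\backslash s
      \qquad
      np,\; np\backslash s \vdash s}
     {np,\; (np\backslash s)/np,\; np \vdash s}\ \textsc{cut}
```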
arXiv Detail & Related papers (2020-01-23T18:23:03Z)