Making first order linear logic a generating grammar
- URL: http://arxiv.org/abs/2206.08955v5
- Date: Thu, 16 Nov 2023 16:13:27 GMT
- Title: Making first order linear logic a generating grammar
- Authors: Sergey Slavnov
- Abstract summary: It is known that different categorial grammars have a surface representation in a fragment of first order multiplicative linear logic (MLL1).
We show that the fragment of interest is equivalent to the recently introduced extended tensor type calculus (ETTC).
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is known that different categorial grammars have surface representation in
a fragment of first order multiplicative linear logic (MLL1). We show that the
fragment of interest is equivalent to the recently introduced extended tensor
type calculus (ETTC). ETTC is a calculus of specific typed terms, which
represent tuples of strings, more precisely bipartite graphs decorated with
strings. Types are derived from linear logic formulas, and rules correspond to
concrete operations on these string-labeled graphs, so that they can be
conveniently visualized. This equips the above-mentioned fragment of MLL1, which
is relevant for language modeling, not only with an alternative syntax and an
intuitive geometric representation, but also with an intrinsic deductive
system, which it previously lacked.
In this work we consider a non-trivial notationally enriched variation of the
previously introduced ETTC, which allows more concise and transparent
computations. We present both a cut-free sequent calculus and a natural
deduction formalism.
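To make the "surface representation" concrete, here is a sketch of the well-known encoding of Lambek categorial types as MLL1 formulas, with first-order variables standing for string positions (in the style of Moot and Piazza); the notation is illustrative and need not match the paper's own.

```latex
% Lambek types as MLL1 formulas over string positions:
% a word spanning positions (x, y) inhabits ||A||(x, y).
\begin{align*}
  \|p\|(x,y)              &= p(x,y) && \text{atomic type}\\
  \|A/B\|(x,y)            &= \forall z\,\bigl(\|B\|(y,z) \multimap \|A\|(x,z)\bigr)\\
  \|B\backslash A\|(y,z)  &= \forall x\,\bigl(\|B\|(x,y) \multimap \|A\|(x,z)\bigr)
\end{align*}
% Example: a subject np at (0,1) and an intransitive verb np\s at (1,2)
% combine, by instantiating x := 0, into a sentence s spanning (0,2).
```

Roughly speaking, in ETTC the bookkeeping done here by position variables is carried instead by the string-labeled bipartite graphs that the abstract describes.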
Related papers
- Understanding Matrix Function Normalizations in Covariance Pooling through the Lens of Riemannian Geometry [63.694184882697435]
Global Covariance Pooling (GCP) has been demonstrated to improve the performance of Deep Neural Networks (DNNs) by exploiting second-order statistics of high-level representations.
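As a rough illustration of what GCP computes, here is a minimal NumPy sketch of covariance pooling followed by a matrix-logarithm normalization; the function name and the eps jitter are our own simplifications, not the paper's pipeline.

```python
import numpy as np

def gcp_matrix_log(features: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Global covariance pooling with matrix-logarithm normalization.

    features: (n, d) array of n spatial feature vectors of dimension d.
    Returns log(Sigma + eps*I), a (d, d) log-Euclidean representation.
    """
    n, d = features.shape
    centered = features - features.mean(axis=0, keepdims=True)
    sigma = centered.T @ centered / n            # sample covariance, (d, d)
    # Matrix log via eigendecomposition of the symmetric PSD matrix.
    w, v = np.linalg.eigh(sigma + eps * np.eye(d))
    return (v * np.log(w)) @ v.T                 # v diag(log w) v^T

# Toy usage: pool 49 spatial positions of a 64-dim feature map.
pooled = gcp_matrix_log(np.random.randn(49, 64))
print(pooled.shape)  # (64, 64)
```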
arXiv Detail & Related papers (2024-07-15T07:11:44Z)
- On the Representational Capacity of Neural Language Models with Chain-of-Thought Reasoning [87.73401758641089]
Chain-of-thought (CoT) reasoning has improved the performance of modern language models (LMs).
We show that LMs can represent the same family of distributions over strings as probabilistic Turing machines.
arXiv Detail & Related papers (2024-06-20T10:59:02Z)
- On the Origins of Linear Representations in Large Language Models [51.88404605700344]
We introduce a simple latent variable model to formalize the concept dynamics of the next token prediction.
Experiments show that linear representations emerge when learning from data matching the latent variable model.
We additionally confirm some predictions of the theory using the LLaMA-2 large language model.
arXiv Detail & Related papers (2024-03-06T17:17:36Z)
- Linearity of Relation Decoding in Transformer Language Models [82.47019600662874]
Much of the knowledge encoded in transformer language models (LMs) may be expressed in terms of relations.
We show that, for a subset of relations, this computation is well-approximated by a single linear transformation on the subject representation.
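A minimal sketch of that claim: approximate one relation by a single affine map fitted on (subject, object) representation pairs. All names and the synthetic data below are hypothetical, and ordinary least squares stands in for whatever estimator the paper uses.

```python
import numpy as np

# Hypothetical data: subject embeddings S (n, d) and object embeddings O (n, d)
# for one relation, e.g. "capital of". We test whether O ~ S @ W + b.
rng = np.random.default_rng(0)
d, n = 32, 200
W_true = rng.normal(size=(d, d))
S = rng.normal(size=(n, d))
O = S @ W_true + 0.01 * rng.normal(size=(n, d))

# Fit an affine map by least squares: append a bias column to S.
S1 = np.hstack([S, np.ones((n, 1))])
Wb, *_ = np.linalg.lstsq(S1, O, rcond=None)
pred = S1 @ Wb

print("relative error:", np.linalg.norm(pred - O) / np.linalg.norm(O))
```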
arXiv Detail & Related papers (2023-08-17T17:59:19Z)
- Isotropic Gaussian Processes on Finite Spaces of Graphs [71.26737403006778]
We propose a principled way to define Gaussian process priors on various sets of unweighted graphs.
We go further to consider sets of equivalence classes of unweighted graphs and define the appropriate versions of priors thereon.
Inspired by applications in chemistry, we illustrate the proposed techniques on a real molecular property prediction task in the small data regime.
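For intuition, a GP prior on a finite space of graphs is just a multivariate Gaussian indexed by the graphs. The sketch below uses a crude degree-sequence feature and an RBF kernel purely for illustration; it is not the paper's construction.

```python
import numpy as np

# A finite set of unweighted graphs on 5 nodes, as adjacency matrices.
rng = np.random.default_rng(0)
graphs = []
for _ in range(8):
    a = np.triu(rng.integers(0, 2, (5, 5)), 1)
    graphs.append(a + a.T)                      # symmetric, no self-loops

# Crude isomorphism invariant: the sorted degree sequence of each graph.
feats = np.stack([np.sort(g.sum(axis=1)) for g in graphs]).astype(float)

# Isotropic RBF kernel on the features -- PSD by construction.
d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * d2)

# Sampling the prior = one multivariate Gaussian draw, one value per graph.
sample = rng.multivariate_normal(np.zeros(len(graphs)),
                                 K + 1e-8 * np.eye(len(graphs)))
print(sample.round(2))
```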
arXiv Detail & Related papers (2022-11-03T10:18:17Z)
- Learning First-Order Rules with Differentiable Logic Program Semantics [12.360002779872373]
We introduce a differentiable inductive logic programming model, called differentiable first-order rule learner (DFOL).
DFOL finds the correct LPs from relational facts by searching for the interpretable matrix representations of LPs.
Experimental results indicate that DFOL is a precise, robust, scalable, and computationally cheap differentiable ILP model.
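The matrix view of logic programs that such differentiable ILP models search over can be illustrated in a few lines. The rule below is hand-written, whereas DFOL would learn the corresponding matrices from relational facts.

```python
import numpy as np

# Relations over 4 constants as Boolean adjacency matrices:
# parent[i, j] = 1 iff parent(i, j) holds.
parent = np.zeros((4, 4))
parent[0, 1] = parent[1, 2] = parent[2, 3] = 1.0

# The clause grandparent(X, Z) <- parent(X, Y), parent(Y, Z)
# becomes a (fuzzy) matrix product, clamped back into [0, 1].
grandparent = np.clip(parent @ parent, 0.0, 1.0)

print(np.argwhere(grandparent > 0.5))  # [[0 2], [1 3]]
```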
arXiv Detail & Related papers (2022-04-28T15:33:43Z)
- First order linear logic and tensor type calculus for categorial grammars [0.0]
We study the relationship between first order multiplicative linear logic (MLL1) and extended tensor type calculus (ETTC).
We identify a fragment of MLL1, which seems sufficient for many grammar representations, and establish a correspondence between ETTC and this fragment.
arXiv Detail & Related papers (2021-12-31T00:35:48Z)
- A Differentiable Relaxation of Graph Segmentation and Alignment for AMR Parsing [75.36126971685034]
We treat alignment and segmentation as latent variables in our model and induce them as part of end-to-end training.
Our method also approaches the performance of a model that relies on Lyu and Titov (2018)'s segmentation rules, which were hand-crafted to handle individual AMR constructions.
arXiv Detail & Related papers (2020-10-23T21:22:50Z)
- Neural Proof Nets [0.8379286663107844]
We propose a neural variant of proof nets based on Sinkhorn networks, which allows us to translate parsing into the problem of extracting primitives and permuting them into alignment.
We test our approach on ÆThel, where it manages to correctly transcribe raw text sentences into proofs and terms of the linear lambda-calculus with accuracy as high as 70%.
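The Sinkhorn ingredient is easy to sketch: repeatedly normalizing the rows and columns of a positive score matrix drives it toward a doubly stochastic "soft permutation", which can then model the alignment of primitives. The snippet below is a generic illustration, not the paper's network.

```python
import numpy as np

def sinkhorn(logits: np.ndarray, n_iters: int = 50) -> np.ndarray:
    """Project score logits onto (approximately) doubly stochastic matrices."""
    p = np.exp(logits - logits.max())         # positive matrix, numerically safe
    for _ in range(n_iters):
        p /= p.sum(axis=1, keepdims=True)     # normalize rows
        p /= p.sum(axis=0, keepdims=True)     # normalize columns
    return p

# Toy usage: soft alignment of 4 premises with 4 conclusions.
rng = np.random.default_rng(0)
soft_perm = sinkhorn(rng.normal(size=(4, 4)))
print(soft_perm.sum(axis=0).round(3), soft_perm.sum(axis=1).round(3))
```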
arXiv Detail & Related papers (2020-09-26T22:48:47Z)
- On embedding Lambek calculus into commutative categorial grammars [0.0]
We consider tensor grammars, which are an example of "commutative" grammars, based on classical (rather than intuitionistic) linear logic.
The basic ingredients are tensor terms, which can be seen as encoding and generalizing proof nets.
arXiv Detail & Related papers (2020-05-20T14:08:56Z)
- Traduction des Grammaires Catégorielles de Lambek dans les Grammaires Catégorielles Abstraites (Translation of Lambek Categorial Grammars into Abstract Categorial Grammars) [0.0]
This internship report demonstrates that every Lambek Grammar can be, not entirely but efficiently, expressed in Abstract Categorial Grammars (ACG).
The main idea is to transform the type rewriting system of LGs into that of Context-Free Grammars (CFG) by erasing introduction and elimination rules and generating enough axioms so that the cut rule suffices.
Although the underlying algorithm was not fully implemented, this proof provides another argument in favour of the relevance of ACGs in Natural Language Processing.
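Schematically, the construction can be pictured as follows: derivable Lambek sequents are precompiled as axioms so that, once the introduction and elimination rules are erased, the cut rule alone assembles derivations. The toy instance below is ours, not taken from the report.

```latex
% Hypothetical axioms generated from a lexicon (both derivable in L):
%   (A1)  (np\backslash s)/np,\ np  =>  np\backslash s
%   (A2)  np,\ np\backslash s       =>  s
% Cut alone then derives the sentence sequent:
\[
  \frac{(np\backslash s)/np,\; np \Rightarrow np\backslash s
        \qquad
        np,\; np\backslash s \Rightarrow s}
       {np,\; (np\backslash s)/np,\; np \;\Rightarrow\; s}
  \ \text{cut}
\]
```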
arXiv Detail & Related papers (2020-01-23T18:23:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.