Proof-theoretic aspects of NL$\lambda$
- URL: http://arxiv.org/abs/2010.12223v1
- Date: Fri, 23 Oct 2020 08:13:39 GMT
- Title: Proof-theoretic aspects of NL$\lambda$
- Authors: Richard Moot (TEXTE, LIRMM, CNRS)
- Abstract summary: We present a proof-theoretic analysis of the logic NL$\lambda$.
We introduce a novel calculus of proof nets and prove it is sound and complete with respect to the sequent calculus for the logic.
We show there is an unexpected convergence of the natural language analyses proposed in the two formalisms.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a proof-theoretic analysis of the logic NL$\lambda$ (Barker \&
Shan 2014, Barker 2019). We notably introduce a novel calculus of proof nets
and prove it is sound and complete with respect to the sequent calculus for the
logic. We study decidability and complexity of the logic using this new
calculus, proving a new upper bound for the complexity of the logic (showing it is
in NP) and a new lower bound for the class of formal languages generated by the
formalism (mildly context-sensitive languages extended with a permutation
closure operation). Finally, thanks to this new calculus, we present a novel
comparison between NL$\lambda$ and the hybrid type-logical grammars of Kubota
\& Levine (2020). We show there is an unexpected convergence of the natural
language analyses proposed in the two formalisms. In addition to studying the
proof-theoretic properties of NL$\lambda$, we greatly extend its linguistic
coverage.
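The permutation closure operation mentioned in the lower-bound result can be illustrated on finite languages. The sketch below is illustrative only (not from the paper): it closes a finite set of words under reordering of their symbols.

```python
from itertools import permutations

def permutation_closure(language):
    """Permutation closure of a finite language: the set of all
    reorderings of the symbols of each word in the language."""
    closed = set()
    for word in language:
        for p in permutations(word):
            closed.add("".join(p))
    return closed
```

For instance, the closure of {"ab"} is {"ab", "ba"}. The paper's lower bound concerns mildly context-sensitive languages extended with this operation; this finite-language version only conveys the idea.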
Related papers
- A Theory of Interpretable Approximations [61.90216959710842]
We study the idea of approximating a target concept $c$ by a small aggregation of concepts from some base class $\mathcal{H}$.
For any given pair of $\mathcal{H}$ and $c$, exactly one of these cases holds: (i) $c$ cannot be approximated by $\mathcal{H}$ with arbitrary accuracy.
We show that, in the case of interpretable approximations, even a slightly nontrivial a-priori guarantee on the complexity of approximations implies approximations with constant (distribution-free and accuracy-
arXiv Detail & Related papers (2024-06-15T06:43:45Z)
- $O_2$ is a multiple context-free grammar: an implementation-, formalisation-friendly proof [0.0]
Classifying languages according to the expressiveness of grammars able to generate them is a fundamental problem in computational linguistics.
This paper analyses the existing proofs from the computational and the proof-theoretical point of views systematically studying whether each proof can lead to a verified (i.e., checked by a proof assistant) algorithm parsing balanced languages via MCFGs.
arXiv Detail & Related papers (2024-05-15T14:51:11Z)
- LINC: A Neurosymbolic Approach for Logical Reasoning by Combining Language Models with First-Order Logic Provers [60.009969929857704]
Logical reasoning is an important task for artificial intelligence with potential impacts on science, mathematics, and society.
In this work, we reformulate such tasks as modular neurosymbolic programming, which we call LINC.
We observe significant performance gains on FOLIO and a balanced subset of ProofWriter for three different models in nearly all experimental conditions we evaluate.
arXiv Detail & Related papers (2023-10-23T17:58:40Z)
- Simulating Structural Plasticity of the Brain more Scalable than Expected [69.39201743340747]
Rinke et al. introduced a scalable algorithm that simulates structural plasticity for up to one billion neurons on current hardware using a variant of the Barnes-Hut algorithm.
We show that with careful consideration of the algorithm, the theoretical runtime can even be classified as $O(n \log^2 n)$.
arXiv Detail & Related papers (2022-10-11T09:02:43Z)
- A Logic-Based Framework for Natural Language Inference in Dutch [1.0178220223515955]
We present a framework for deriving relations between Dutch sentence pairs.
The proposed framework relies on logic-based reasoning to produce inspectable proofs leading up to inference labels.
We evaluate the reasoning pipeline on the recently created Dutch natural language inference dataset.
arXiv Detail & Related papers (2021-10-07T10:34:46Z)
- A unified logical framework for explanations in classifier systems [10.256904719009471]
We present a modal language of a ceteris paribus nature which supports reasoning about binary input classifiers and their properties.
We study a family of models, axiomatize it via two proof systems depending on the cardinality of the language, and show completeness of our axiomatics.
We leverage the language to formalize counterfactual conditionals as well as a variety of notions of explanation, including abductive, contrastive and counterfactual explanations.
arXiv Detail & Related papers (2021-05-30T07:49:56Z)
- Superposition with Lambdas [59.87497175616048]
We design a superposition calculus for a clausal fragment of extensional polymorphic higher-order logic that includes anonymous functions but excludes Booleans.
The inference rules work on $\beta\eta$-equivalence classes of $\lambda$-terms and rely on higher-order unification to achieve refutational completeness.
arXiv Detail & Related papers (2021-01-31T13:53:17Z)
- The Logic for a Mildly Context-Sensitive Fragment of the Lambek-Grishin Calculus [0.0]
We present a proof-theoretic characterization of tree-adjoining languages based on the Lambek-Grishin calculus.
Several new techniques are introduced for the proofs, such as purely structural connectives, usefulness, and a graph-theoretic argument on proof nets for HLG.
arXiv Detail & Related papers (2021-01-10T22:28:05Z)
- RNNs can generate bounded hierarchical languages with optimal memory [113.73133308478612]
We show that RNNs can efficiently generate bounded hierarchical languages that reflect the scaffolding of natural language syntax.
We introduce Dyck-($k$,$m$), the language of well-nested brackets (of $k$ types) and $m$-bounded nesting depth.
We prove that an RNN with $O(m \log k)$ hidden units suffices, an exponential reduction in memory, by an explicit construction.
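To make the definition of Dyck-($k$,$m$) concrete, here is a minimal membership checker. This is an illustrative sketch, not from the paper; in particular, encoding a bracket of type $i$ as the tokens "(i" and ")i" is an assumption made here.

```python
def is_dyck_k_m(word, k, m):
    """Check membership in Dyck-(k, m): well-nested brackets of k
    types with nesting depth bounded by m. Bracket type i is encoded
    as the tokens "(i" and ")i" (an assumption of this sketch)."""
    stack = []
    for token in word:
        # Reject malformed tokens outright.
        if len(token) < 2 or token[0] not in "()" or not token[1:].isdigit():
            return False
        kind, idx = token[0], int(token[1:])
        if not 0 <= idx < k:
            return False  # unknown bracket type
        if kind == "(":
            stack.append(idx)
            if len(stack) > m:  # nesting depth exceeds the bound m
                return False
        else:  # kind == ")"
            if not stack or stack.pop() != idx:
                return False  # close with no matching open, or wrong type
    return not stack  # every open bracket must be closed
```

For example, ["(0", "(1", ")1", ")0"] is in Dyck-(2, 2) but not in Dyck-(2, 1), since its nesting depth is 2.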
arXiv Detail & Related papers (2020-10-15T04:42:29Z)
- Neural Proof Nets [0.8379286663107844]
We propose a neural variant of proof nets based on Sinkhorn networks, which allows us to translate parsing into the problem of extracting primitives and permuting them into alignment.
We test our approach on AEThel, where it manages to correctly transcribe raw text sentences into proofs and terms of the linear lambda-calculus with an accuracy as high as 70%.
arXiv Detail & Related papers (2020-09-26T22:48:47Z)
- Superposition for Lambda-Free Higher-Order Logic [62.997667081978825]
We introduce refutationally complete superposition calculi for intensional and extensional clausal $\lambda$-free higher-order logic.
The calculi are parameterized by a term order that need not be fully monotonic, making it possible to employ the $lambda$-free higher-order lexicographic path and Knuth-Bendix orders.
arXiv Detail & Related papers (2020-05-05T12:10:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.