Neural Proof Nets
- URL: http://arxiv.org/abs/2009.12702v1
- Date: Sat, 26 Sep 2020 22:48:47 GMT
- Title: Neural Proof Nets
- Authors: Konstantinos Kogkalidis, Michael Moortgat, Richard Moot
- Abstract summary: We propose a neural variant of proof nets based on Sinkhorn networks, which allows us to recast parsing as the problem of extracting syntactic primitives and permuting them into alignment.
We test our approach on ÆThel, where it manages to correctly transcribe raw text sentences into proofs and terms of the linear lambda-calculus with an accuracy as high as 70%.
- Score: 0.8379286663107844
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Linear logic and the linear λ-calculus have a long-standing tradition in the study of natural language form and meaning. Among the proof calculi of linear logic, proof nets are of particular interest, offering an attractive geometric representation of derivations that is unburdened by the bureaucratic complications of conventional proof-theoretic formats. Building on recent advances in set-theoretic learning, we propose a neural variant of proof nets based on Sinkhorn networks, which allows us to recast parsing as the problem of extracting syntactic primitives and permuting them into alignment. Our methodology induces a batch-efficient, end-to-end differentiable architecture that realizes a formally grounded yet highly efficient neuro-symbolic parser. We test our approach on ÆThel, a dataset of type-logical derivations for written Dutch, where it manages to correctly transcribe raw text sentences into proofs and terms of the linear λ-calculus with an accuracy as high as 70%.
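Since the parsing-as-permutation idea hinges on Sinkhorn networks, a minimal sketch may help make it concrete. The snippet below is an illustrative toy example, not the authors' implementation: iterated row and column normalization in log space turns an arbitrary score matrix into an approximately doubly stochastic matrix, i.e. a relaxed permutation that can softly align syntactic primitives (matching negative with positive atomic formulas).

```python
import numpy as np

def sinkhorn(log_alpha: np.ndarray, n_iters: int = 20) -> np.ndarray:
    """Alternately normalize rows and columns in log space so that
    exp(log_alpha) approaches a doubly stochastic (soft permutation) matrix."""
    for _ in range(n_iters):
        # Row normalization: each row sums to 1 after exponentiation.
        log_alpha = log_alpha - np.logaddexp.reduce(log_alpha, axis=1, keepdims=True)
        # Column normalization: each column sums to 1 after exponentiation.
        log_alpha = log_alpha - np.logaddexp.reduce(log_alpha, axis=0, keepdims=True)
    return np.exp(log_alpha)

# Toy alignment problem: scores between 4 hypothetical negative atoms
# and 4 positive atoms (in practice these would come from a neural encoder).
rng = np.random.default_rng(0)
scores = rng.normal(size=(4, 4))

soft_perm = sinkhorn(scores)
print(soft_perm.sum(axis=0).round(3))  # columns ~ 1
print(soft_perm.sum(axis=1).round(3))  # rows ~ 1
print(soft_perm.argmax(axis=1))        # greedy discrete alignment
```

At inference time a discrete matching can be read off the relaxed matrix, e.g. greedily as above or with the Hungarian algorithm; during training, the relaxation is what keeps the whole pipeline end-to-end differentiable.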
Related papers
- Training Neural Networks as Recognizers of Formal Languages [87.06906286950438]
Formal language theory pertains specifically to recognizers.
It is common to instead use proxy tasks that are similar in only an informal sense.
We correct this mismatch by training and evaluating neural networks directly as binary classifiers of strings.
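To make this setup concrete, here is a minimal hypothetical sketch (a toy regular language in PyTorch; none of it comes from the paper) of training and evaluating a network directly as a binary accept/reject classifier of strings:

```python
import random
import torch
import torch.nn as nn

# Hypothetical target: the regular language over {a, b} of strings
# containing an even number of a's.
def encode(s: str) -> torch.Tensor:
    return torch.tensor([0 if c == "a" else 1 for c in s])

def label(s: str) -> float:
    return float(s.count("a") % 2 == 0)

class Recognizer(nn.Module):
    def __init__(self, hidden: int = 16):
        super().__init__()
        self.emb = nn.Embedding(2, 8)
        self.rnn = nn.GRU(8, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                    # x: (batch, time)
        _, h = self.rnn(self.emb(x))         # final hidden state
        return self.out(h[-1]).squeeze(-1)   # one accept/reject logit per string

random.seed(0); torch.manual_seed(0)
model, loss_fn = Recognizer(), nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Train on random fixed-length strings; membership is the only signal.
for step in range(300):
    batch = ["".join(random.choice("ab") for _ in range(10)) for _ in range(32)]
    x = torch.stack([encode(s) for s in batch])
    y = torch.tensor([label(s) for s in batch])
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

# Evaluate as a recognizer: classification accuracy on fresh strings.
test = ["".join(random.choice("ab") for _ in range(10)) for _ in range(200)]
x = torch.stack([encode(s) for s in test])
y = torch.tensor([label(s) for s in test])
acc = ((model(x) > 0).float() == y).float().mean()
print(f"accuracy: {acc:.2f}")
```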
arXiv Detail & Related papers (2024-11-11T16:33:25Z)
- LLM Critics Help Catch Bugs in Mathematics: Towards a Better Mathematical Verifier with Natural Language Feedback [71.95402654982095]
We propose Math-Minos, a natural language feedback-enhanced verifier.
Our experiments reveal that a small set of natural language feedback can significantly boost the performance of the verifier.
arXiv Detail & Related papers (2024-06-20T06:42:27Z)
- LINC: A Neurosymbolic Approach for Logical Reasoning by Combining Language Models with First-Order Logic Provers [60.009969929857704]
Logical reasoning is an important task for artificial intelligence with potential impacts on science, mathematics, and society.
In this work, we reformulate such tasks as modular neurosymbolic programming, which we call LINC.
We observe significant performance gains on FOLIO and a balanced subset of ProofWriter for three different models in nearly all experimental conditions we evaluate.
arXiv Detail & Related papers (2023-10-23T17:58:40Z)
- Towards Autoformalization of Mathematics and Code Correctness: Experiments with Elementary Proofs [5.045988012508899]
Autoformalization seeks to address this by translating proofs written in natural language into a formal representation that is computer-verifiable via interactive theorem provers.
We introduce a semantic parsing approach, based on the Universal Transformer architecture, that translates elementary mathematical proofs into an equivalent formalization in the language of the Coq interactive theorem prover.
arXiv Detail & Related papers (2023-01-05T17:56:00Z)
- On Parsing as Tagging [66.31276017088477]
We show how to reduce tetratagging, a state-of-the-art constituency tagger, to shift-reduce parsing.
We empirically evaluate our taxonomy of tagging pipelines with different choices of linearizers, learners, and decoders.
arXiv Detail & Related papers (2022-11-14T13:37:07Z)
- A Neural Model for Regular Grammar Induction [8.873449722727026]
We treat grammars as a model of computation and propose a novel neural approach to induction of regular grammars from positive and negative examples.
Our model is fully explainable, its intermediate results are directly interpretable as partial parses, and it can be used to learn arbitrary regular grammars when provided with sufficient data.
arXiv Detail & Related papers (2022-09-23T14:53:23Z)
- A Logic-Based Framework for Natural Language Inference in Dutch [1.0178220223515955]
We present a framework for deriving relations between Dutch sentence pairs.
The proposed framework relies on logic-based reasoning to produce inspectable proofs leading up to inference labels.
We evaluate the reasoning pipeline on the recently created Dutch natural language inference dataset.
arXiv Detail & Related papers (2021-10-07T10:34:46Z)
- Extracting Grammars from a Neural Network Parser for Anomaly Detection in Unknown Formats [79.6676793507792]
Reinforcement learning has recently shown promise as a technique for training an artificial neural network to parse sentences in some unknown format.
This paper presents procedures for extracting production rules from the neural network, and for using these rules to determine whether a given sentence is nominal or anomalous.
arXiv Detail & Related papers (2021-07-30T23:10:24Z)
- Refining Labelled Systems for Modal and Constructive Logics with Applications [0.0]
This thesis serves as a means of transforming the semantics of a modal and/or constructive logic into an 'economical' proof system.
The refinement method connects two proof-theoretic paradigms: labelled and nested sequent calculi.
The introduced refined labelled calculi will be used to provide the first proof-search algorithms for deontic STIT logics.
arXiv Detail & Related papers (2021-07-30T08:27:15Z)
- The Logic for a Mildly Context-Sensitive Fragment of the Lambek-Grishin Calculus [0.0]
We present a proof-theoretic characterization of tree-adjoining languages based on the Lambek-Grishin calculus.
Several new techniques are introduced for the proofs, such as purely structural connectives, usefulness, and a graph-theoretic argument on proof nets for HLG.
arXiv Detail & Related papers (2021-01-10T22:28:05Z)
- Logical Natural Language Generation from Open-Domain Tables [107.04385677577862]
We propose a new task in which a model generates natural language statements that can be logically entailed by the facts.
To facilitate the study of the proposed logical NLG problem, we use the existing TabFact dataset (Chen et al., 2019), which features a wide range of logical/symbolic inferences.
The new task poses challenges to the existing monotonic generation frameworks due to the mismatch between sequence order and logical order.
arXiv Detail & Related papers (2020-04-22T06:03:10Z)