A Logic-Based Framework for Natural Language Inference in Dutch
- URL: http://arxiv.org/abs/2110.03323v2
- Date: Fri, 8 Oct 2021 08:51:24 GMT
- Title: A Logic-Based Framework for Natural Language Inference in Dutch
- Authors: Lasha Abzianidze and Konstantinos Kogkalidis
- Abstract summary: We present a framework for deriving inference relations between Dutch sentence pairs.
The proposed framework relies on logic-based reasoning to produce inspectable proofs leading up to inference labels.
We evaluate the reasoning pipeline on the recently created Dutch natural language inference dataset.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present a framework for deriving inference relations between Dutch
sentence pairs. The proposed framework relies on logic-based reasoning to
produce inspectable proofs leading up to inference labels; its judgements are
therefore transparent and formally verifiable. At its core, the system is
powered by two $\lambda$-calculi, used as syntactic and semantic theories,
respectively. Sentences are first converted to syntactic proofs and terms of
the linear $\lambda$-calculus using a choice of two parsers: an Alpino-based
pipeline and Neural Proof Nets. The syntactic terms are then converted to
semantic terms of the simply typed $\lambda$-calculus via a set of
hand-designed type- and term-level transformations. Pairs of semantic terms are then
fed to an automated theorem prover for natural logic which reasons with them
while using lexical relations found in the Open Dutch WordNet. We evaluate the
reasoning pipeline on the recently created Dutch natural language inference
dataset, and achieve promising results, remaining within a $1.1$-$3.2\%$
performance margin of strong neural baselines. To the best of our knowledge,
the reasoning pipeline is the first logic-based system for Dutch.
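
To make the syntax-to-semantics step concrete, here is a minimal Haskell sketch of the two type systems involved. The NP/S type inventory, the constructor names, and the translation clauses are illustrative assumptions standing in for the paper's hand-designed transformations, not its actual rules.

```haskell
{-# LANGUAGE LambdaCase #-}

infixr 5 :->
infixr 5 :=>

-- Illustrative syntactic types (hypothetical inventory); in the real
-- pipeline these come from the Alpino-based parser or Neural Proof Nets.
data SynType
  = NP                   -- noun phrase
  | S                    -- sentence
  | SynType :-> SynType  -- linear implication
  deriving Show

-- Simply typed semantic types: entities, truth values, and functions.
data SemType
  = E                    -- entities
  | T                    -- truth values
  | SemType :=> SemType  -- function types
  deriving Show

-- A toy type-level translation: noun phrases map to entities, sentences
-- to truth values, and linear implications to plain function types.
toSem :: SynType -> SemType
toSem = \case
  NP      -> E
  S       -> T
  a :-> b -> toSem a :=> toSem b

main :: IO ()
main = print (toSem ((NP :-> S) :-> S))
-- prints (E :=> T) :=> T, the familiar generalized-quantifier type
```

Presumably the term-level half of the translation proceeds compositionally alongside this type map, with lexical constants rewritten by the hand-designed term-level rules; those details are the paper's own.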
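Downstream, the prover reasons in natural logic over the semantic term pairs, consulting lexical relations from the Open Dutch WordNet. The toy fragment below illustrates that lexical layer only, not the prover's actual calculus: hyponymy licenses forward entailment, and the direction flips under a downward-monotone operator such as Dutch "geen" (no). The word list and the relation algebra shown are assumptions made for the example.

```haskell
import qualified Data.Map as Map

-- The core natural-logic relations used in this toy example.
data Relation = Equivalent | FwdEntail | RevEntail | Independent
  deriving (Eq, Show)

-- Hypothetical lexical entries; the real system queries the Open Dutch
-- WordNet for such hypernymy links.
hypernyms :: Map.Map String [String]
hypernyms = Map.fromList [("hond", ["dier"]), ("kat", ["dier"])]

-- Look up the lexical relation between two words: a hyponym
-- forward-entails its hypernym ("hond" entails "dier").
lexRel :: String -> String -> Relation
lexRel x y
  | x == y                                      = Equivalent
  | y `elem` Map.findWithDefault [] x hypernyms = FwdEntail
  | x `elem` Map.findWithDefault [] y hypernyms = RevEntail
  | otherwise                                   = Independent

-- Projecting a relation through a downward-monotone context (e.g. the
-- scope of "geen") swaps the two entailment directions.
projectDown :: Relation -> Relation
projectDown FwdEntail = RevEntail
projectDown RevEntail = FwdEntail
projectDown r         = r

main :: IO ()
main = do
  print (lexRel "hond" "dier")                -- FwdEntail
  print (projectDown (lexRel "hond" "dier"))  -- RevEntail
```

For instance, "Een hond blaft" ("A dog barks") forward-entails "Een dier blaft" ("An animal barks"), while under "geen" the direction reverses: "Geen dier blaft" entails "Geen hond blaft".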
Related papers
- Divide and Translate: Compositional First-Order Logic Translation and Verification for Complex Logical Reasoning [28.111458981621105]
Complex logical reasoning tasks require long chains of reasoning, on which large language models (LLMs) with chain-of-thought prompting still fall short.
We propose a Compositional First-Order Logic Translation to capture the logical semantics hidden in natural language during translation.
We evaluate the proposed method, dubbed CLOVER, on seven logical reasoning benchmarks and show that it outperforms the previous neurosymbolic approaches.
arXiv Detail & Related papers (2024-10-10T15:42:39Z)
- Language Models can be Logical Solvers [99.40649402395725]
We introduce LoGiPT, a novel language model that directly emulates the reasoning processes of logical solvers.
LoGiPT is fine-tuned on a newly constructed instruction-tuning dataset derived from revealing and refining the invisible reasoning process of deductive solvers.
arXiv Detail & Related papers (2023-11-10T16:23:50Z)
- LINC: A Neurosymbolic Approach for Logical Reasoning by Combining Language Models with First-Order Logic Provers [60.009969929857704]
Logical reasoning is an important task for artificial intelligence with potential impacts on science, mathematics, and society.
In this work, we reformulate such tasks as modular neurosymbolic programming, which we call LINC.
We observe significant performance gains on FOLIO and a balanced subset of ProofWriter for three different models in nearly all experimental conditions we evaluate.
arXiv Detail & Related papers (2023-10-23T17:58:40Z)
- Modeling Hierarchical Reasoning Chains by Linking Discourse Units and Key Phrases for Reading Comprehension [80.99865844249106]
We propose a holistic graph network (HGN) that handles context at both the discourse and word level as the basis for logical reasoning.
Specifically, node-level and type-level relations, which can be interpreted as bridges in the reasoning process, are modeled by a hierarchical interaction mechanism.
arXiv Detail & Related papers (2023-06-21T07:34:27Z)
- Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently proposed logical neural networks (LNN).
Compared to others, LNNs offer a strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
arXiv Detail & Related papers (2021-12-06T19:38:30Z)
- Learning Symbolic Rules for Reasoning in Quasi-Natural Language [74.96601852906328]
We build a rule-based system that can reason with natural language input but without the manual construction of rules.
We propose MetaQNL, a "Quasi-Natural" language that can express both formal logic and natural language sentences.
Our approach achieves state-of-the-art accuracy on multiple reasoning benchmarks.
arXiv Detail & Related papers (2021-11-23T17:49:00Z)
- Refining Labelled Systems for Modal and Constructive Logics with Applications [0.0]
This thesis serves as a means of transforming the semantics of a modal and/or constructive logic into an 'economical' proof system.
The refinement method connects two proof-theoretic paradigms: labelled and nested sequent calculi.
The introduced refined labelled calculi will be used to provide the first proof-search algorithms for deontic STIT logics.
arXiv Detail & Related papers (2021-07-30T08:27:15Z)
- Learning as Abduction: Trainable Natural Logic Theorem Prover for Natural Language Inference [0.4962199635155534]
We implement a learning method in a theorem prover for natural language.
We show that it improves the performance of the theorem prover on the SICK dataset by 1.4% while still maintaining high precision.
The obtained results are competitive with the state of the art among logic-based systems.
arXiv Detail & Related papers (2020-10-29T19:49:17Z)
- RNNs can generate bounded hierarchical languages with optimal memory [113.73133308478612]
We show that RNNs can efficiently generate bounded hierarchical languages that reflect the scaffolding of natural language syntax.
We introduce Dyck-($k$,$m$), the language of well-nested brackets (of $k$ types) and $m$-bounded nesting depth.
We prove that an RNN with $O(m \log k)$ hidden units suffices, an exponential reduction in memory, by an explicit construction.
arXiv Detail & Related papers (2020-10-15T04:42:29Z)
- Neural Proof Nets [0.8379286663107844]
We propose a neural variant of proof nets based on Sinkhorn networks, which allows us to cast parsing as the problem of extracting syntactic primitives and permuting them into alignment.
We test our approach on AEThel, where it manages to correctly transcribe raw text sentences into proofs and terms of the linear lambda-calculus with an accuracy as high as 70%.
arXiv Detail & Related papers (2020-09-26T22:48:47Z)