TabVer: Tabular Fact Verification with Natural Logic
- URL: http://arxiv.org/abs/2411.01093v1
- Date: Sat, 02 Nov 2024 00:36:34 GMT
- Title: TabVer: Tabular Fact Verification with Natural Logic
- Authors: Rami Aly, Andreas Vlachos
- Abstract summary: We propose a set-theoretic interpretation of numerals and arithmetic functions in the context of natural logic.
We leverage large language models to generate arithmetic expressions by generating questions about salient parts of a claim which are answered by executing functions on tables.
In a few-shot setting on FEVEROUS, we achieve an accuracy of 71.4, outperforming both fully neural and symbolic reasoning models by 3.4 points.
- Score: 11.002475880349452
- Abstract: Fact verification on tabular evidence incentivises the use of symbolic reasoning models where a logical form is constructed (e.g. a LISP-style program), providing greater verifiability than fully neural approaches. However, these systems typically rely on well-formed tables, restricting their use in many scenarios. An emerging symbolic reasoning paradigm for textual evidence focuses on natural logic inference, which constructs proofs by modelling set-theoretic relations between a claim and its evidence in natural language. This approach provides flexibility and transparency but is less compatible with tabular evidence since the relations do not extend to arithmetic functions. We propose a set-theoretic interpretation of numerals and arithmetic functions in the context of natural logic, enabling the integration of arithmetic expressions in deterministic proofs. We leverage large language models to generate arithmetic expressions by generating questions about salient parts of a claim which are answered by executing appropriate functions on tables. In a few-shot setting on FEVEROUS, we achieve an accuracy of 71.4, outperforming both fully neural and symbolic reasoning models by 3.4 points. When evaluated on TabFact without any further training, our method remains competitive with an accuracy lead of 0.5 points.
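To make the mechanism concrete, here is a minimal sketch (not the authors' released code) of the TabVer-style loop: a question about a salient numeral in the claim is answered by executing an arithmetic function over a table column, and the comparison between the claimed and computed values is mapped to a natural logic operator. The table, function set, and operator mapping below are illustrative assumptions.

```python
# Toy table: column name -> list of cell values.
table = {"year": [2019, 2020, 2021], "revenue": [10.0, 12.0, 14.0]}

# Arithmetic functions that can be executed against a column.
FUNCTIONS = {
    "sum": sum,
    "max": max,
    "min": min,
    "avg": lambda xs: sum(xs) / len(xs),
    "count": len,
}

def execute(function_name: str, column: str) -> float:
    """Execute an arithmetic function on a table column."""
    return float(FUNCTIONS[function_name](table[column]))

def natop(claimed: float, computed: float) -> str:
    """Map the claimed-vs-computed comparison to a NatOp
    (a simplified reading of the set-theoretic treatment of numerals)."""
    if claimed == computed:
        return "EQUIVALENCE"          # the numeral denotes the same set
    if claimed < computed:
        return "FORWARD_ENTAILMENT"   # e.g. "at least <claimed>" still holds
    return "ALTERNATION"              # disjoint numeric readings

# Claim: "Total revenue was 36.0." An LLM would generate the question
# "What was the total revenue?", answered by executing sum over revenue.
computed = execute("sum", "revenue")
print(computed, natop(36.0, computed))  # 36.0 EQUIVALENCE
```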
Related papers
- LINC: A Neurosymbolic Approach for Logical Reasoning by Combining Language Models with First-Order Logic Provers [60.009969929857704]
Logical reasoning is an important task for artificial intelligence with potential impacts on science, mathematics, and society.
In this work, we reformulate such tasks as modular neurosymbolic programming, which we call LINC.
We observe significant performance gains on FOLIO and a balanced subset of ProofWriter for three different models in nearly all experimental conditions we evaluate.
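As a rough illustration of the modular neurosymbolic split, the sketch below stands in an exhaustive truth-table check for the first-order prover the paper pairs with the language model; the formulas-as-lambdas encoding is an assumption made only for self-containment.

```python
# Pipeline sketch: an LLM would translate premises and a conclusion into
# logical formulas; a symbolic prover then decides entailment. Here a
# toy propositional prover (truth-table enumeration) plays the prover's
# role, and formulas are Python lambdas over an assignment dict.
from itertools import product

def entails(premises, conclusion, variables):
    """True iff every assignment satisfying all premises satisfies the conclusion."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False
    return True

# "If it rains the grass is wet; it rains" |= "the grass is wet"
premises = [lambda e: (not e["rain"]) or e["wet"], lambda e: e["rain"]]
conclusion = lambda e: e["wet"]
print(entails(premises, conclusion, ["rain", "wet"]))  # True
```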
arXiv Detail & Related papers (2023-10-23T17:58:40Z)
- QA-NatVer: Question Answering for Natural Logic-based Fact Verification [11.002475880349452]
We propose to use question answering to predict natural logic operators.
In a few-shot setting on FEVER, our approach outperforms the best baseline by $4.3$ accuracy points.
A human evaluation indicates that our approach produces more plausible proofs with fewer erroneous natural logic operators than previous natural logic-based systems.
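A simplified sketch of the idea, with an assumed answer-to-operator mapping and a deliberately reduced automaton rather than the paper's full natural logic DFA:

```python
# Assumed mapping from a QA answer over an aligned claim span to a NatOp.
def to_natop(answer: str) -> str:
    return {"yes": "FORWARD_ENTAILMENT", "no": "NEGATION"}.get(answer, "INDEPENDENCE")

# Simplified veracity automaton: negation flips SUPPORTS/REFUTES; any
# operator without a transition is absorbed as NOT ENOUGH INFO.
TRANSITIONS = {
    ("SUPPORTS", "EQUIVALENCE"): "SUPPORTS",
    ("SUPPORTS", "FORWARD_ENTAILMENT"): "SUPPORTS",
    ("SUPPORTS", "NEGATION"): "REFUTES",
    ("REFUTES", "EQUIVALENCE"): "REFUTES",
    ("REFUTES", "FORWARD_ENTAILMENT"): "REFUTES",
    ("REFUTES", "NEGATION"): "SUPPORTS",
}

def verdict(natops):
    state = "SUPPORTS"
    for op in natops:
        state = TRANSITIONS.get((state, op), "NOT ENOUGH INFO")
        if state == "NOT ENOUGH INFO":
            break
    return state

# Two spans: the first is supported, the second contradicted.
print(verdict([to_natop("yes"), to_natop("no")]))  # REFUTES
```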
arXiv Detail & Related papers (2023-10-22T06:27:31Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parsing framework that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
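One common way to realise such a relaxation, shown here as an assumed example rather than the paper's exact formulation, is to turn a symbolic rule over class scores into a differentiable penalty:

```python
# A symbolic rule such as "dog -> animal" over per-pixel class scores
# becomes a differentiable loss term, so violating the rule during
# training is penalised. The specific relaxation is illustrative.
import numpy as np

def implication_loss(p_child: np.ndarray, p_parent: np.ndarray) -> float:
    """Lukasiewicz-style relaxation of (child -> parent):
    truth = min(1, 1 - p_child + p_parent), so loss = 1 - truth,
    which simplifies to max(0, p_child - p_parent)."""
    return float(np.mean(np.maximum(0.0, p_child - p_parent)))

# Per-pixel probabilities from a (mock) segmentation head.
p_dog = np.array([0.9, 0.2, 0.7])
p_animal = np.array([0.95, 0.1, 0.4])
print(implication_loss(p_dog, p_animal))  # penalizes the third pixel
```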
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
Based on the multi-hop chain of reasoning, the explanation form includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
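A toy sketch of the check, with a mock NLI model and a naive negation rule standing in for the paper's predicate-based counterfactual generation:

```python
def mock_nli(premise: str, hypothesis: str) -> str:
    # Toy model: contradiction iff exactly one side contains "not".
    differs = ("not" in premise.split()) != ("not" in hypothesis.split())
    return "contradiction" if differs else "entailment"

def counterfactual(hypothesis: str) -> str:
    """Negate the main predicate by inserting 'not' after the copula
    (a naive stand-in for predicate-level counterfactual generation)."""
    words = hypothesis.split()
    if "is" in words:
        i = words.index("is")
        words = words[: i + 1] + ["not"] + words[i + 1 :]
    return " ".join(words)

premise = "The cat is asleep"
hypothesis = "The cat is asleep"
original = mock_nli(premise, hypothesis)                 # entailment
flipped = mock_nli(premise, counterfactual(hypothesis))  # contradiction
# Consistency check: negating the decisive predicate should flip the label.
print(original, flipped, original != flipped)  # entailment contradiction True
```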
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
- Adaptive n-ary Activation Functions for Probabilistic Boolean Logic [2.294014185517203]
We show that we can learn arbitrary logic in a single layer using an activation function of matching or greater arity.
We represent belief tables using a basis that directly associates the number of nonzero parameters to the effective arity of the belief function.
This opens optimization approaches to reduce logical complexity by inducing parameter sparsity.
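For intuition, the sketch below parameterises a 2-ary belief table multilinearly in probabilistic inputs, so a single unit can represent any 2-ary Boolean function, including XOR; the exact basis shown is an assumption in the spirit of the paper, not its parameterisation.

```python
import numpy as np
from itertools import product

def belief_activation(x: np.ndarray, weights: dict) -> float:
    """x holds probabilities of the inputs being True; weights maps each
    truth assignment to that row's belief. Output = expected belief."""
    out = 0.0
    for bits in product([0, 1], repeat=len(x)):
        prob = np.prod([x[i] if b else 1 - x[i] for i, b in enumerate(bits)])
        out += weights[bits] * prob
    return float(out)

# One weight per truth-table row: zero weights here are the rows a
# sparsity-inducing optimiser could prune to reduce logical complexity.
xor_weights = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 0.0}
print(belief_activation(np.array([1.0, 0.0]), xor_weights))  # 1.0
print(belief_activation(np.array([1.0, 1.0]), xor_weights))  # 0.0
```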
arXiv Detail & Related papers (2022-03-16T22:47:53Z)
- Exploring Decomposition for Table-based Fact Verification [18.584226291619217]
We improve fact verification by decomposing complex statements into simpler subproblems.
Our proposed approach achieves new state-of-the-art performance, an 82.7% accuracy, on the TabFact benchmark.
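A deliberately naive sketch of the decomposition idea, with toy splitting and checking standing in for the paper's learned components:

```python
# Toy table: column name -> list of cell values.
table = {"player": ["Ann", "Bo"], "goals": [12, 7], "team": ["Reds", "Blues"]}

def rows(tbl):
    return [dict(zip(tbl, vals)) for vals in zip(*tbl.values())]

def verify_simple(statement: str) -> bool:
    """Check a 'column value' sub-statement, e.g. 'player Ann'."""
    column, value = statement.split(maxsplit=1)
    return any(str(row[column]) == value for row in rows(table))

def verify(statement: str) -> bool:
    # Decompose on 'and', verify the sub-problems, aggregate with all().
    return all(verify_simple(s.strip()) for s in statement.split(" and "))

print(verify("player Ann and goals 12"))  # True
print(verify("player Cal and goals 12"))  # False
```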
arXiv Detail & Related papers (2021-09-22T20:15:05Z)
- A Relational Tsetlin Machine with Applications to Natural Language Understanding [6.375447757249894]
We increase the computing power of TMs by proposing a first-order logic-based framework with Herbrand semantics.
The resulting TM is relational and can take advantage of logical structures appearing in natural language.
In closed-domain question-answering, the first-order representation produces 10x more compact KBs, along with an increase in answering accuracy from 94.83% to 99.48%.
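The compaction effect can be illustrated with a toy example: many grounded question-answer facts collapse into a single rule with variables. The rule format below is an assumption for illustration, not the clause form a Tsetlin machine actually learns.

```python
import re

# Grounded QA facts: (context, question, answer) triples.
grounded = [
    ("travels(John, Paris)", "where_is(John)", "Paris"),
    ("travels(Mary, Oslo)", "where_is(Mary)", "Oslo"),
    ("travels(Ada, Rome)", "where_is(Ada)", "Rome"),
]

# One relational rule replaces every grounded entry:
#   travels(X, Y) AND where_is(X)  =>  answer Y
def answer(context: str, question: str) -> str:
    person, place = re.match(r"travels\((\w+), (\w+)\)", context).groups()
    asked = re.match(r"where_is\((\w+)\)", question).group(1)
    return place if asked == person else "unknown"

for context, question, expected in grounded:
    assert answer(context, question) == expected
print("1 relational rule covers", len(grounded), "grounded facts")
```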
arXiv Detail & Related papers (2021-02-22T12:40:37Z)
- Learning as Abduction: Trainable Natural Logic Theorem Prover for Natural Language Inference [0.4962199635155534]
We implement a learning method in a theorem prover for natural language.
We show that it improves the performance of the theorem prover on the SICK dataset by 1.4% while still maintaining high precision.
The obtained results are competitive with the state of the art among logic-based systems.
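A minimal sketch of abductive lexicon learning, assuming a toy word-level prover: when a gold entailment cannot be proven, hypothesise the missing lexical relation and retract it if it makes any negative training pair provable.

```python
lexicon = {("dog", "animal")}  # known word-level subsumptions

def proves(premise_word: str, hypothesis_word: str) -> bool:
    return premise_word == hypothesis_word or (premise_word, hypothesis_word) in lexicon

# (premise word, hypothesis word, gold entailment) training pairs.
train = [("puppy", "dog", True), ("puppy", "cat", False)]

for p, h, gold in train:
    if gold and not proves(p, h):
        lexicon.add((p, h))  # abduce the relation that would close the proof
        # Retract if any negative training pair now becomes provable.
        if any(proves(q, r) for q, r, g in train if not g):
            lexicon.discard((p, h))

print(("puppy", "dog") in lexicon)  # True: relation learned by abduction
```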
arXiv Detail & Related papers (2020-10-29T19:49:17Z)
- Measuring Association Between Labels and Free-Text Rationales [60.58672852655487]
In interpretable NLP, we require faithful rationales that reflect the model's decision-making process for an explained instance.
We demonstrate that pipelines, existing models for faithful extractive rationalization on information-extraction style tasks, do not extend as reliably to "reasoning" tasks requiring free-text rationales.
We turn to models that jointly predict and rationalize, a class of widely used high-performance models for free-text rationalization whose faithfulness is not yet established.
arXiv Detail & Related papers (2020-10-24T03:40:56Z)
- Logical Neural Networks [51.46602187496816]
We propose a novel framework seamlessly providing key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable disentangled representation.
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning.
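As a flavour of neurons-as-formulas, here is a conjunction neuron in weighted real-valued logic; the weighted Lukasiewicz form is one standard choice, shown as an illustration rather than the paper's full framework.

```python
def weighted_and(truths, weights, beta=1.0):
    """f(x) = clamp(beta - sum_i w_i * (1 - x_i), 0, 1). With beta = 1
    and unit weights this reduces to classical Lukasiewicz conjunction;
    the weights stay differentiable while the neuron keeps a readable
    logical meaning as an AND over its inputs."""
    s = beta - sum(w * (1 - x) for x, w in zip(truths, weights))
    return max(0.0, min(1.0, s))

print(weighted_and([1.0, 1.0], [1.0, 1.0]))  # 1.0  (true AND true)
print(weighted_and([1.0, 0.0], [1.0, 1.0]))  # 0.0  (true AND false)
print(weighted_and([0.9, 0.8], [1.0, 0.5]))  # 0.8  (graded truth)
```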
arXiv Detail & Related papers (2020-06-23T16:55:45Z)