Zero-Shot Fact Verification via Natural Logic and Large Language Models
- URL: http://arxiv.org/abs/2410.03341v1
- Date: Fri, 4 Oct 2024 11:57:32 GMT
- Title: Zero-Shot Fact Verification via Natural Logic and Large Language Models
- Authors: Marek Strong, Rami Aly, Andreas Vlachos
- Abstract summary: We propose a zero-shot method that utilizes the generalization capabilities of instruction-tuned large language models.
Our method achieves an average accuracy improvement of 8.96 points over the best-performing baseline.
Current systems trained on natural logic data do not generalize well to other domains.
- Score: 9.789552436902342
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The recent development of fact verification systems with natural logic has enhanced their explainability by aligning claims with evidence through set-theoretic operators, providing faithful justifications. Despite these advancements, such systems often rely on a large amount of training data annotated with natural logic. To address this issue, we propose a zero-shot method that utilizes the generalization capabilities of instruction-tuned large language models. To comprehensively assess the zero-shot capabilities of our method and other fact verification systems, we evaluate all models on both artificial and real-world claims, including multilingual datasets. We also compare our method against other fact verification systems in two setups. First, in the zero-shot generalization setup, we demonstrate that our approach outperforms other systems that were not specifically trained on natural logic data, achieving an average accuracy improvement of 8.96 points over the best-performing baseline. Second, in the zero-shot transfer setup, we show that current systems trained on natural logic data do not generalize well to other domains, and our method outperforms these systems across all datasets with real-world claims.
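To make the pipeline concrete, below is a minimal sketch of the verdict-aggregation step in natural-logic-based verification, assuming an instruction-tuned LLM has already labelled each aligned claim/evidence span pair with a natural logic operator. The truncated transition table is illustrative only, not the exact deterministic automaton used by the paper.

```python
# Truncated verdict automaton (illustrative; natural logic systems use
# the full DFA over set-theoretic operators). Any (state, operator)
# pair not listed falls through to the absorbing NOT_ENOUGH_INFO state.
TRANSITIONS = {
    ("SUPPORTED", "EQUIVALENCE"): "SUPPORTED",
    ("SUPPORTED", "FORWARD_ENTAILMENT"): "SUPPORTED",
    ("SUPPORTED", "NEGATION"): "REFUTED",
    ("SUPPORTED", "ALTERNATION"): "REFUTED",
    ("REFUTED", "EQUIVALENCE"): "REFUTED",
}

def verdict(natops: list[str]) -> str:
    """Fold a sequence of LLM-predicted operators into a verdict."""
    state = "SUPPORTED"
    for op in natops:
        state = TRANSITIONS.get((state, op), "NOT_ENOUGH_INFO")
        if state == "NOT_ENOUGH_INFO":  # absorbing state
            break
    return state

print(verdict(["EQUIVALENCE", "NEGATION"]))  # REFUTED
print(verdict(["FORWARD_ENTAILMENT"]))       # SUPPORTED
print(verdict(["INDEPENDENCE"]))             # NOT_ENOUGH_INFO
```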
Related papers
- Self-supervised Analogical Learning using Language Models [59.64260218737556]
We propose SAL, a self-supervised analogical learning framework.
SAL mimics the human analogy process and trains models to explicitly transfer high-quality symbolic solutions.
We show that the resulting models outperform base language models on a wide range of reasoning benchmarks.
arXiv Detail & Related papers (2025-02-03T02:31:26Z)
- Inductive Biases for Zero-shot Systematic Generalization in Language-informed Reinforcement Learning
We provide architecture-level inductive biases for modularity and sparsity based on Neural Production Systems (NPS).
Our results in the BabyAI environment suggest that the proposed model's systematic generalization and sample efficiency are improved significantly compared to previous models.
arXiv Detail & Related papers (2025-01-25T16:36:59Z)
- Face the Facts! Evaluating RAG-based Fact-checking Pipelines in Realistic Settings [14.355271969637139]
This work lifts several constraints of current state-of-the-art pipelines for automated fact-checking based on the Retrieval-Augmented Generation paradigm.
Our goal is to benchmark, under more realistic scenarios, RAG-based methods for the generation of verdicts.
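As a rough illustration of the paradigm under evaluation, the toy pipeline below retrieves evidence by token overlap (a stand-in for a real retriever) and delegates the verdict to any LLM completion function. The function name, prompt wording, and retrieval heuristic are our assumptions, not the paper's.

```python
from typing import Callable

def rag_verdict(claim: str, corpus: list[str],
                generate: Callable[[str], str], k: int = 3) -> str:
    """Toy RAG fact-checking pipeline: rank evidence by token overlap,
    then ask an LLM for a verdict on the claim."""
    def overlap(doc: str) -> int:
        return len(set(claim.lower().split()) & set(doc.lower().split()))
    evidence = sorted(corpus, key=overlap, reverse=True)[:k]
    prompt = (
        "Decide whether the claim is SUPPORTED, REFUTED, or whether "
        "there is NOT ENOUGH INFO, given the evidence.\n"
        + "".join(f"Evidence: {e}\n" for e in evidence)
        + f"Claim: {claim}\nVerdict:"
    )
    return generate(prompt)  # any LLM completion function
```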
arXiv Detail & Related papers (2024-12-19T18:57:11Z)
- LLM Critics Help Catch Bugs in Mathematics: Towards a Better Mathematical Verifier with Natural Language Feedback [71.95402654982095]
We propose Math-Minos, a natural language feedback-enhanced verifier.
Our experiments reveal that a small set of natural language feedback can significantly boost the performance of the verifier.
arXiv Detail & Related papers (2024-06-20T06:42:27Z)
- Zero-Shot Fact-Checking with Semantic Triples and Knowledge Graphs [13.024338745226462]
Instead of operating directly on the claim and evidence sentences, we decompose them into semantic triples augmented using external knowledge graphs.
This allows our approach to generalize to adversarial datasets and to domains for which supervised models require specific training data.
Our empirical results show that our approach outperforms previous zero-shot approaches on FEVER, FEVER-Symmetric, FEVER 2.0, and Climate-FEVER.
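A minimal sketch of the triple-matching idea, assuming exact matches against a knowledge graph stored as a set of triples; the actual system augments triples with external knowledge graphs and does not rely on exact matching.

```python
# Toy triple verification (illustrative only).
Triple = tuple[str, str, str]

def verify_claim(claim_triples: list[Triple], kg: set[Triple]) -> str:
    """A claim is supported here only if every decomposed triple
    appears verbatim in the knowledge graph."""
    unmatched = [t for t in claim_triples if t not in kg]
    return "SUPPORTED" if not unmatched else "NOT_SUPPORTED"

kg = {("Paris", "capital_of", "France")}
print(verify_claim([("Paris", "capital_of", "France")], kg))  # SUPPORTED
```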
arXiv Detail & Related papers (2023-12-19T01:48:31Z)
- QA-NatVer: Question Answering for Natural Logic-based Fact Verification [11.002475880349452]
We propose to use question answering to predict natural logic operators.
In a few-shot setting on FEVER, our approach outperforms the best baseline by $4.3$ accuracy points.
A human evaluation indicates that our approach produces more plausible proofs with fewer erroneous natural logic operators than previous natural logic-based systems.
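One way to picture the approach: each operator gets a yes/no question template, and the operator whose question a QA model most confidently affirms is selected. The templates below are hypothetical (QA-NatVer's actual templates differ), and `score_yes` stands in for a QA model returning the probability of a "yes" answer.

```python
from typing import Callable

# Hypothetical question templates, one per natural logic operator.
TEMPLATES = {
    "EQUIVALENCE": "Does '{c}' mean the same as '{e}'?",
    "FORWARD_ENTAILMENT": "Is '{c}' a more specific version of '{e}'?",
    "NEGATION": "Does '{c}' contradict '{e}'?",
    "INDEPENDENCE": "Is '{c}' unrelated to '{e}'?",
}

def predict_natop(claim_span: str, evidence_span: str,
                  score_yes: Callable[[str], float]) -> str:
    """Pick the operator whose question the QA model affirms most."""
    scores = {op: score_yes(t.format(c=claim_span, e=evidence_span))
              for op, t in TEMPLATES.items()}
    return max(scores, key=scores.get)
```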
arXiv Detail & Related papers (2023-10-22T06:27:31Z)
- Preserving Knowledge Invariance: Rethinking Robustness Evaluation of Open Information Extraction [49.15931834209624]
We present the first benchmark that simulates the evaluation of open information extraction models in the real world.
We design and annotate a large-scale testbed in which each example is a knowledge-invariant clique.
We further refine the robustness metric: a model is judged robust only if its performance is consistently accurate across all examples of each clique.
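Read that way, the metric might look like the sketch below: credit is given for a clique only when every example in it is answered correctly (our reading; the paper's exact definition may differ).

```python
def clique_robust_accuracy(cliques: list[list[bool]]) -> float:
    """Per-example correctness grouped by knowledge-invariant clique;
    a clique counts only if the model is correct on all its examples."""
    return sum(all(clique) for clique in cliques) / len(cliques)

# Two cliques, one fully correct: robust accuracy is 0.5.
print(clique_robust_accuracy([[True, True], [True, False]]))
```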
arXiv Detail & Related papers (2023-05-23T12:05:09Z)
- SimSCOOD: Systematic Analysis of Out-of-Distribution Generalization in Fine-tuned Source Code Models [58.78043959556283]
We study the behaviors of models under different fine-tuning methodologies, including full fine-tuning and Low-Rank Adaptation (LoRA) fine-tuning methods.
Our analysis uncovers that LoRA fine-tuning consistently exhibits significantly better OOD generalization performance than full fine-tuning across various scenarios.
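For reference, a LoRA setup of the kind compared here can be configured in a few lines with Hugging Face's peft library; the base model, rank, and target modules below are illustrative choices, not the paper's configuration.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")
config = LoraConfig(
    r=8,                        # low-rank update dimension
    lora_alpha=16,              # scaling factor for the updates
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the adapters are trainable
```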
arXiv Detail & Related papers (2022-10-10T16:07:24Z)
- Leap-Of-Thought: Teaching Pre-Trained Models to Systematically Reason Over Implicit Knowledge [96.92252296244233]
Large pre-trained language models (LMs) acquire some reasoning capacity, but this ability is difficult to control.
We show that LMs can be trained to reliably perform systematic reasoning combining both implicit, pre-trained knowledge and explicit natural language statements.
Our work paves a path towards open-domain systems that constantly improve by interacting with users who can instantly correct a model by adding simple natural language statements.
arXiv Detail & Related papers (2020-06-11T17:02:20Z)
- Generating Fact Checking Explanations [52.879658637466605]
A crucial piece of the puzzle that is still missing is how to automate the most elaborate part of the process: generating justifications for the verdicts.
This paper provides the first study of how these explanations can be generated automatically based on available claim context.
Our results indicate that optimising the veracity prediction and explanation generation objectives jointly, rather than training them separately, improves the performance of a fact checking system.
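In sketch form, joint optimisation backpropagates a weighted sum of the two task losses through a shared model; the weighting scheme below is our assumption, not the paper's.

```python
import torch

def joint_loss(veracity_loss: torch.Tensor,
               explanation_loss: torch.Tensor,
               weight: float = 1.0) -> torch.Tensor:
    """Single objective for jointly training veracity prediction and
    explanation generation (weighting is illustrative)."""
    return veracity_loss + weight * explanation_loss
```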
arXiv Detail & Related papers (2020-04-13T05:23:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.