Unsupervised Pretraining for Fact Verification by Language Model Distillation
- URL: http://arxiv.org/abs/2309.16540v3
- Date: Wed, 6 Mar 2024 20:12:01 GMT
- Title: Unsupervised Pretraining for Fact Verification by Language Model Distillation
- Authors: Adrián Bazaga and Pietro Liò and Gos Micklem
- Abstract summary: We propose SFAVEL (Self-supervised Fact Verification via Language Model Distillation), a novel unsupervised pretraining framework.
It distils self-supervised features into high-quality claim-fact alignments without the need for annotations.
This is enabled by a novel contrastive loss function that encourages features to attain high-quality claim and evidence alignments.
- Score: 4.504050940874427
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fact verification aims to verify a claim using evidence from a trustworthy
knowledge base. To address this challenge, algorithms must produce features for
every claim that are both semantically meaningful and compact enough to find a
semantic alignment with the source information. In contrast to previous work,
which tackled the alignment problem by learning over annotated corpora of
claims and their corresponding labels, we propose SFAVEL (Self-supervised Fact
Verification via Language Model Distillation), a novel unsupervised pretraining
framework that leverages pre-trained language models to distil self-supervised
features into high-quality claim-fact alignments without the need for
annotations. This is enabled by a novel contrastive loss function that
encourages features to attain high-quality claim and evidence alignments whilst
preserving the semantic relationships across the corpora. Notably, we present
results that achieve a new state-of-the-art on FB15k-237 (+5.3% Hits@1) and
FEVER (+8% accuracy) with linear evaluation.
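
As a concrete illustration of the kind of objective described above, the following is a minimal sketch of an InfoNCE-style contrastive alignment loss in PyTorch. It is not SFAVEL's exact loss; the function name, the in-batch negative scheme, and the temperature value are assumptions for the example.

```python
import torch
import torch.nn.functional as F

def claim_fact_contrastive_loss(claim_emb: torch.Tensor,
                                fact_emb: torch.Tensor,
                                temperature: float = 0.07) -> torch.Tensor:
    """claim_emb, fact_emb: (batch, dim); row i of fact_emb holds the
    distilled fact features paired with claim i."""
    # Cosine similarity between every claim and every candidate fact.
    claim_emb = F.normalize(claim_emb, dim=-1)
    fact_emb = F.normalize(fact_emb, dim=-1)
    logits = claim_emb @ fact_emb.t() / temperature
    # Each claim should rank its paired fact above all in-batch negatives.
    targets = torch.arange(claim_emb.size(0), device=claim_emb.device)
    return F.cross_entropy(logits, targets)
```

Other batch positives act as negatives here, which is what pushes unrelated claim and fact features apart while keeping aligned pairs close.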
Related papers
- FactLens: Benchmarking Fine-Grained Fact Verification [6.814173254027381]
We advocate for a shift toward fine-grained verification, where complex claims are broken down into smaller sub-claims for individual verification.
We introduce FactLens, a benchmark for evaluating fine-grained fact verification, with metrics and automated evaluators of sub-claim quality.
Our results show alignment between automated FactLens evaluators and human judgments, and we discuss the impact of sub-claim characteristics on the overall verification performance.
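
A minimal sketch of the decompose-then-verify pattern described above; the decomposer, the verifier, and the all-or-nothing aggregation rule are hypothetical stand-ins, not the FactLens implementation.

```python
from typing import Callable, Dict, List

def verify_fine_grained(claim: str,
                        decompose: Callable[[str], List[str]],
                        verify: Callable[[str], bool]) -> Dict:
    """Break a complex claim into sub-claims, verify each one, and
    label the whole claim SUPPORTED only if every sub-claim holds."""
    sub_claims = decompose(claim)
    verdicts = {sc: verify(sc) for sc in sub_claims}
    return {
        "sub_claims": verdicts,
        "overall": "SUPPORTED" if all(verdicts.values()) else "REFUTED",
    }
```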
arXiv Detail & Related papers (2024-11-08T21:26:57Z)
- Localizing Factual Inconsistencies in Attributable Text Generation [91.981439746404]
We introduce QASemConsistency, a new formalism for localizing factual inconsistencies in attributable text generation.
We first demonstrate the effectiveness of the QASemConsistency methodology for human annotation.
We then implement several methods for automatically detecting localized factual inconsistencies.
arXiv Detail & Related papers (2024-10-09T22:53:48Z)
- LLM Critics Help Catch Bugs in Mathematics: Towards a Better Mathematical Verifier with Natural Language Feedback [71.95402654982095]
We propose Math-Minos, a natural language feedback-enhanced verifier.
Our experiments reveal that a small amount of natural language feedback can significantly boost the performance of the verifier.
arXiv Detail & Related papers (2024-06-20T06:42:27Z)
- AFaCTA: Assisting the Annotation of Factual Claim Detection with Reliable LLM Annotators [38.523194864405326]
AFaCTA is a novel framework that assists in the annotation of factual claims.
AFaCTA calibrates its annotation confidence with consistency along three predefined reasoning paths.
Our analyses also result in PoliClaim, a comprehensive claim detection dataset spanning diverse political topics.
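
A hedged sketch of consistency-based confidence calibration in this spirit: the same sentence is labelled along several reasoning paths, and agreement across paths becomes the confidence. The path names and the annotator function are assumptions, not AFaCTA's actual prompts.

```python
from collections import Counter

def calibrated_label(sentence, annotate_via_path,
                     paths=("direct", "definition", "verifiability")):
    """Label a sentence along several reasoning paths and report the
    majority label plus the fraction of paths that agree with it."""
    labels = [annotate_via_path(sentence, path) for path in paths]
    label, votes = Counter(labels).most_common(1)[0]
    return label, votes / len(labels)
```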
arXiv Detail & Related papers (2024-02-16T20:59:57Z)
- Fine-tuning Language Models for Factuality [96.5203774943198]
The capabilities of large pre-trained language models (LLMs) have led to their widespread use, sometimes even as a replacement for traditional search engines.
Yet language models are prone to making convincing but factually inaccurate claims, often referred to as 'hallucinations'.
In this work, we fine-tune language models to be more factual, without human labeling.
arXiv Detail & Related papers (2023-11-14T18:59:15Z)
- SAGA: Summarization-Guided Assert Statement Generation [34.51502565985728]
This paper presents a novel summarization-guided approach for automatically generating assert statements.
We leverage a pre-trained language model as the reference architecture and fine-tune it on the task of assert statement generation.
arXiv Detail & Related papers (2023-05-24T07:03:21Z)
- Interpretable Automatic Fine-grained Inconsistency Detection in Text Summarization [56.94741578760294]
We propose the task of fine-grained inconsistency detection, the goal of which is to predict the fine-grained types of factual errors in a summary.
Motivated by how humans inspect factual inconsistency in summaries, we propose an interpretable fine-grained inconsistency detection model, FineGrainFact.
arXiv Detail & Related papers (2023-05-23T22:11:47Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
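
A minimal sketch of this check under stated assumptions: `make_counterfactual` and `predict` are hypothetical stand-ins for the predicate-based rewriting step and the NLI model.

```python
from typing import Callable, Tuple

def is_explanation_faithful(premise: str,
                            hypothesis: str,
                            explanation: str,
                            make_counterfactual: Callable[[str, str], Tuple[str, str]],
                            predict: Callable[[str, str], str]) -> bool:
    """Build a counterfactual hypothesis from the explanation's logical
    predicates and test whether the model's prediction on it matches the
    label that the explanation's logic implies."""
    cf_hypothesis, expected_label = make_counterfactual(hypothesis, explanation)
    return predict(premise, cf_hypothesis) == expected_label
```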
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
- Pre-trained Language Models as Re-Annotators [3.193067591317475]
We investigate how to acquire semantically sensitive annotation representations from Pre-trained Language Models.
We fine-tune the Pre-trained Language Models with cross-validation for annotation correction.
We study the re-annotation in relation extraction and create a new manually revised dataset, Re-DocRED.
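
One plausible reading of this cross-validation setup, as a sketch: fine-tune on k-1 folds and flag held-out examples whose stored label the model disputes. `fine_tune` and `predict` are hypothetical stand-ins, not the paper's code.

```python
from typing import Callable, List, Tuple

def flag_suspect_labels(dataset: List[Tuple[str, str]],
                        k: int,
                        fine_tune: Callable,
                        predict: Callable) -> List[int]:
    """Return indices whose labels a cross-validated model disputes."""
    suspects = []
    # Strided folds: fold i contains examples i, i+k, i+2k, ...
    folds = [dataset[i::k] for i in range(k)]
    index_folds = [list(range(i, len(dataset), k)) for i in range(k)]
    for held_out in range(k):
        train = [ex for f in range(k) if f != held_out for ex in folds[f]]
        model = fine_tune(train)
        for idx in index_folds[held_out]:
            text, label = dataset[idx]
            if predict(model, text) != label:
                suspects.append(idx)
    return suspects
```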
arXiv Detail & Related papers (2022-05-11T09:28:23Z)
- Generating Fact Checking Explanations [52.879658637466605]
A crucial piece of the puzzle that is still missing is to understand how to automate the most elaborate part of the process.
This paper provides the first study of how these explanations can be generated automatically based on available claim context.
Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system.
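
A generic sketch of what joint optimisation of the two objectives can look like; the weighting scheme is an assumption for the example, not the paper's formulation.

```python
import torch

def joint_loss(veracity_loss: torch.Tensor,
               explanation_loss: torch.Tensor,
               alpha: float = 0.5) -> torch.Tensor:
    # Weighted sum of the two task losses: a single backward pass sends
    # gradients from both objectives into the shared encoder, rather
    # than training each head separately.
    return alpha * veracity_loss + (1.0 - alpha) * explanation_loss
```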
arXiv Detail & Related papers (2020-04-13T05:23:25Z)