Implicit Temporal Reasoning for Evidence-Based Fact-Checking
- URL: http://arxiv.org/abs/2302.12569v1
- Date: Fri, 24 Feb 2023 10:48:03 GMT
- Title: Implicit Temporal Reasoning for Evidence-Based Fact-Checking
- Authors: Liesbeth Allein, Marlon Saelens, Ruben Cartuyvels, Marie-Francine
Moens
- Abstract summary: Our study demonstrates that time positively influences the claim verification process of evidence-based fact-checking.
Our findings show that the presence of temporal information and the manner in which timelines are constructed greatly influence how fact-checking models determine the relevance and supporting or refuting character of evidence documents.
- Score: 14.015789447347466
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Leveraging contextual knowledge has become standard practice in automated
claim verification, yet the impact of temporal reasoning has been largely
overlooked. Our study demonstrates that time positively influences the claim
verification process of evidence-based fact-checking. The temporal aspects and
relations between claims and evidence are first established through grounding
on shared timelines, which are constructed using publication dates and time
expressions extracted from their text. Temporal information is then provided to
RNN-based and Transformer-based classifiers before or after claim and evidence
encoding. Our time-aware fact-checking models surpass base models by up to 9%
Micro F1 (64.17%) and 15% Macro F1 (47.43%) on the MultiFC dataset. They also
outperform prior methods that explicitly model temporal relations between
evidence. Our findings show that the presence of temporal information and the
manner in which timelines are constructed greatly influence how fact-checking
models determine the relevance and supporting or refuting character of evidence
documents.
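The timeline grounding described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: it anchors each document on a shared timeline using its publication year plus any four-digit years found in its text, then derives a scalar temporal-distance feature of the kind a classifier could consume alongside text encodings. All function names here are invented for the example.

```python
from datetime import date
import re

# Illustrative sketch only (not the paper's code): ground claim and
# evidence documents on a shared timeline via publication dates and
# simple time expressions (4-digit years) extracted from their text.

YEAR_RE = re.compile(r"\b(19|20)\d{2}\b")

def time_anchors(text, pub_date):
    """Return the years a document is grounded on: its publication
    year plus any 4-digit years mentioned in its text."""
    years = {pub_date.year}
    years.update(int(m.group(0)) for m in YEAR_RE.finditer(text))
    return sorted(years)

def temporal_distance(claim_anchors, evidence_anchors):
    """Smallest gap in years between any claim/evidence anchor pair;
    one simple temporal feature a classifier could be given."""
    return min(abs(c - e) for c in claim_anchors for e in evidence_anchors)

claim = time_anchors("Unemployment fell in 2019.", date(2020, 1, 15))
evidence = time_anchors("Labour statistics for 2019 show a decline.", date(2019, 12, 1))
print(temporal_distance(claim, evidence))  # 0: both anchor on 2019
```

The paper itself uses richer time expressions than bare years and injects the temporal signal before or after claim and evidence encoding; this sketch only shows the grounding idea.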
Related papers
- ChronoFact: Timeline-based Temporal Fact Verification [15.698391632115856]
We propose an end-to-end solution for temporal fact verification that considers the temporal information in claims to obtain relevant evidence sentences.
We learn time-sensitive representations that encapsulate not only the semantic relationships among the events, but also their chronological proximity.
Experiment results demonstrate that the proposed approach significantly enhances the accuracy of temporal claim verification.
arXiv Detail & Related papers (2024-10-19T03:44:19Z) - Robust Claim Verification Through Fact Detection [17.29665711917281]
Our novel approach, FactDetect, leverages Large Language Models (LLMs) to generate concise factual statements from evidence.
The generated facts are then combined with the claim and evidence.
Our method demonstrates competitive results, improving the supervised claim verification model by 15% on the F1 score.
arXiv Detail & Related papers (2024-07-25T20:03:43Z) - Heterogeneous Graph Reasoning for Fact Checking over Texts and Tables [22.18384189336634]
HeterFC is a word-level Heterogeneous-graph-based model for Fact Checking over unstructured and structured information.
We perform information propagation via a relational graph neural network, along with interactions between claims and evidence.
We introduce a multitask loss function to account for potential inaccuracies in evidence retrieval.
arXiv Detail & Related papers (2024-02-20T14:10:40Z) - Synthetic Disinformation Attacks on Automated Fact Verification Systems [53.011635547834025]
We explore the sensitivity of automated fact-checkers to synthetic adversarial evidence in two simulated settings.
We show that these systems suffer significant performance drops against these attacks.
We discuss the growing threat of modern NLG systems as generators of disinformation.
arXiv Detail & Related papers (2022-02-18T19:01:01Z) - Topic-Aware Evidence Reasoning and Stance-Aware Aggregation for Fact
Verification [19.130541561303293]
We propose a novel topic-aware evidence reasoning and stance-aware aggregation model for fact verification.
Tests conducted on two benchmark datasets demonstrate the superiority of the proposed model over several state-of-the-art approaches for fact verification.
arXiv Detail & Related papers (2021-06-02T14:33:12Z) - A Multi-Level Attention Model for Evidence-Based Fact Checking [58.95413968110558]
We present a simple model that can be trained on sequence structures.
Results on a large-scale dataset for Fact Extraction and VERification show that our model outperforms the graph-based approaches.
arXiv Detail & Related papers (2021-06-02T05:40:12Z) - Automatic Fake News Detection: Are Models Learning to Reason? [9.143551270841858]
We investigate the relationship and importance of both claim and evidence.
Surprisingly, we find on political fact checking datasets that most often the highest effectiveness is obtained by utilizing only the evidence.
This highlights an important problem in what constitutes evidence in existing approaches for automatic fake news detection.
arXiv Detail & Related papers (2021-05-17T09:34:03Z) - AmbiFC: Fact-Checking Ambiguous Claims with Evidence [57.7091560922174]
We present AmbiFC, a fact-checking dataset with 10k claims derived from real-world information needs.
We analyze disagreements arising from ambiguity when comparing claims against evidence in AmbiFC.
We develop models for predicting veracity handling this ambiguity via soft labels.
arXiv Detail & Related papers (2021-04-01T17:40:08Z) - Correct block-design experiments mitigate temporal correlation bias in
EEG classification [68.85562949901077]
We show that the main claim in [1] is drastically overstated and their other analyses are seriously flawed by wrong methodological choices.
We investigate the influence of EEG temporal correlation on classification accuracy by testing the same models in two additional experimental settings.
arXiv Detail & Related papers (2020-11-25T22:25:21Z) - Time-Aware Evidence Ranking for Fact-Checking [56.247512670779045]
We investigate the hypothesis that the timestamp of a Web page is crucial to how it should be ranked for a given claim.
Our study reveals that time-aware evidence ranking not only surpasses relevance assumptions based purely on semantic similarity or position in a search results list, but also improves veracity predictions of time-sensitive claims in particular.
arXiv Detail & Related papers (2020-09-10T13:39:49Z) - Generating Fact Checking Explanations [52.879658637466605]
A crucial piece of the puzzle that is still missing is how to automate the most elaborate part of the process: generating the explanations behind verdicts.
This paper provides the first study of how these explanations can be generated automatically based on available claim context.
Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system.
arXiv Detail & Related papers (2020-04-13T05:23:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.