Automatic Fake News Detection: Are Models Learning to Reason?
- URL: http://arxiv.org/abs/2105.07698v1
- Date: Mon, 17 May 2021 09:34:03 GMT
- Title: Automatic Fake News Detection: Are Models Learning to Reason?
- Authors: Casper Hansen and Christian Hansen and Lucas Chaves Lima
- Abstract summary: We investigate the relationship and importance of both claim and evidence.
Surprisingly, on political fact checking datasets we find that the highest effectiveness is most often obtained by utilizing only the evidence.
This highlights an important problem in what constitutes evidence in existing approaches for automatic fake news detection.
- Score: 9.143551270841858
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most fact checking models for automatic fake news detection are based on
reasoning: given a claim with associated evidence, the models aim to estimate
the claim veracity based on the supporting or refuting content within the
evidence. When these models perform well, it is generally assumed to be due to
the models having learned to reason over the evidence with regard to the
claim. In this paper, we investigate this assumption of reasoning, by exploring
the relationship and importance of both claim and evidence. Surprisingly, we
find on political fact checking datasets that most often the highest
effectiveness is obtained by utilizing only the evidence, as the impact of
including the claim is either negligible or harmful to the effectiveness. This
highlights an important problem in what constitutes evidence in existing
approaches for automatic fake news detection.
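The comparison at the heart of the paper is easy to reproduce in miniature. The sketch below is not the authors' setup (their models, features, and datasets differ); it trains a simple TF-IDF classifier once on claim-plus-evidence input and once on evidence alone, with invented toy examples standing in for a real corpus:

```python
# A minimal sketch of the claim-ablation comparison described above, using a
# TF-IDF + logistic regression stand-in; the paper's actual models, features,
# and datasets differ. The six examples below are invented toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

claims = ["Crime doubled last year.", "The bridge was repaired in 2019.",
          "Taxes fell for all households.", "The mayor cut school funding.",
          "Exports hit a record high.", "The plant closed in March."]
evidence = ["Police statistics show a small decline in crime.",
            "City records confirm the 2019 bridge repair.",
            "Budget data show tax cuts only for top earners.",
            "The education budget grew three percent.",
            "Trade figures confirm record export volumes.",
            "The plant's closure in March is widely documented."]
labels = [0, 1, 0, 0, 1, 1]  # 0 = false claim, 1 = true claim (toy labels)

def evaluate(texts):
    """Macro F1 of a TF-IDF + logistic regression classifier on `texts`."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        texts, labels, test_size=2, random_state=0, stratify=labels)
    vec = TfidfVectorizer()
    clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(X_tr), y_tr)
    return f1_score(y_te, clf.predict(vec.transform(X_te)), average="macro")

# Claim + evidence vs. evidence only: the paper's finding is that the second
# condition often matches or beats the first on political fact checking data.
print("claim+evidence:", evaluate([c + " " + e for c, e in zip(claims, evidence)]))
print("evidence only :", evaluate(evidence))
```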
Related papers
- Grounding Fallacies Misrepresenting Scientific Publications in Evidence [84.32990746227385]
We introduce MissciPlus, an extension of the fallacy detection dataset Missci.
MissciPlus builds on Missci by grounding the applied fallacies in real-world passages from misrepresented studies.
MissciPlus is the first logical fallacy dataset that pairs real-world misrepresented evidence with incorrect claims.
arXiv Detail & Related papers (2024-08-23T03:16:26Z)
- Missci: Reconstructing Fallacies in Misrepresented Science [84.32990746227385]
Health-related misinformation on social networks can lead to poor decision-making and real-world dangers.
Missci is a novel argumentation-theoretical model for fallacious reasoning.
We present Missci as a dataset to test the critical reasoning abilities of large language models.
arXiv Detail & Related papers (2024-06-05T12:11:10Z)
- Read it Twice: Towards Faithfully Interpretable Fact Verification by Revisiting Evidence [59.81749318292707]
We propose a fact verification model named ReRead to retrieve evidence and verify claims.
The proposed system achieves significant improvements over the best-reported models under different settings.
arXiv Detail & Related papers (2023-05-02T03:23:14Z)
- Implicit Temporal Reasoning for Evidence-Based Fact-Checking [14.015789447347466]
Our study demonstrates that time positively influences the claim verification process of evidence-based fact-checking.
Our findings show that the presence of temporal information, and the manner in which timelines are constructed, greatly influence how fact-checking models determine the relevance and the supporting or refuting character of evidence documents; a toy timeline-filtering sketch follows this entry.
arXiv Detail & Related papers (2023-02-24T10:48:03Z)
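As a hedged illustration of the point above, one simple way temporal information can enter the pipeline is to order evidence by publication date and exclude documents that postdate the claim; the paper's actual timeline-construction strategies are more elaborate, and the `Doc` type and all dates below are invented:

```python
# Toy sketch: order evidence by publication date and drop documents that
# postdate the claim. This is just one illustrative way to use temporal
# information; the paper studies richer timeline constructions. The Doc
# type and all dates below are invented.
from dataclasses import dataclass
from datetime import date

@dataclass
class Doc:
    text: str
    published: date

def timeline(evidence: list[Doc], claim_date: date) -> list[Doc]:
    """Keep only evidence available at claim time, ordered oldest first."""
    usable = [d for d in evidence if d.published <= claim_date]
    return sorted(usable, key=lambda d: d.published)

docs = [Doc("later correction", date(2021, 6, 1)),
        Doc("budget report", date(2020, 3, 1))]
print([d.text for d in timeline(docs, claim_date=date(2020, 12, 31))])
# -> ['budget report']: the correction postdates the claim and is excluded.
```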
- Missing Counter-Evidence Renders NLP Fact-Checking Unrealistic for Misinformation [67.69725605939315]
Misinformation emerges in times of uncertainty when credible information is limited.
This is challenging for NLP-based fact-checking as it relies on counter-evidence, which may not yet be available.
arXiv Detail & Related papers (2022-10-25T09:40:48Z)
- Fact-Saboteurs: A Taxonomy of Evidence Manipulation Attacks against Fact-Verification Systems [80.3811072650087]
We show that it is possible to subtly modify claim-salient snippets in the evidence and generate diverse and claim-aligned evidence.
The attacks are also robust against post-hoc modifications of the claim.
These attacks can have harmful implications for inspectable and human-in-the-loop usage scenarios.
arXiv Detail & Related papers (2022-09-07T13:39:24Z)
- Automatic Fake News Detection: Are current models "fact-checking" or "gut-checking"? [0.0]
Automatic fake news detection models are ostensibly based on logic, with the truth of a claim determined by supporting or refuting evidence.
It has been shown that the same results, or better, can be achieved without considering the claim at all.
This implies that other signals driving the verdict are contained within the examined evidence.
arXiv Detail & Related papers (2022-04-14T21:05:37Z)
- Mining Fine-grained Semantics via Graph Neural Networks for Evidence-based Fake News Detection [20.282527436527765]
We propose a unified Graph-based sEmantic sTructure mining framework, GET for short.
We model claims and evidence as graph-structured data and capture long-distance semantic dependencies.
After obtaining contextual semantic information, our model reduces information redundancy by performing graph structure learning; a generic graph-convolution sketch follows this entry.
arXiv Detail & Related papers (2022-01-18T11:28:36Z)
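For readers unfamiliar with the graph step, the sketch below shows a single generic mean-aggregation graph convolution over claim/evidence word nodes in plain NumPy. It is not GET's architecture (which adds graph structure learning and redundancy reduction); the adjacency pattern, dimensions, and weights are toy values:

```python
# Toy sketch of a single mean-aggregation graph convolution over a small
# claim/evidence word graph: each node averages its neighbours' features
# (self-loops included) and applies a learned projection with a ReLU.
# This is generic GNN machinery, not GET's full model; sizes and the
# adjacency pattern are invented.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, d_in, d_out = 4, 8, 8

H = rng.normal(size=(n_nodes, d_in))         # node features, e.g. word embeddings
A = np.array([[1, 1, 0, 0],                  # adjacency with self-loops on the
              [1, 1, 1, 0],                  # diagonal; off-diagonal edges link
              [0, 1, 1, 1],                  # words that co-occur across the
              [0, 0, 1, 1]], dtype=float)    # claim and the evidence
W = 0.1 * rng.normal(size=(d_in, d_out))     # learnable projection

deg = A.sum(axis=1, keepdims=True)           # node degrees for mean aggregation
H_next = np.maximum((A / deg) @ H @ W, 0.0)  # aggregate, project, ReLU
print(H_next.shape)                          # (4, 8): updated node states
```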
- Robust Information Retrieval for False Claims with Distracting Entities in Fact Extraction and Verification [2.624734563929267]
This paper shows that, compared with true claims, false claims more frequently contain irrelevant entities, which can distract the evidence retrieval model.
A BERT-based retrieval model made more mistakes in retrieving refuting evidence for false claims than supporting evidence for true claims.
arXiv Detail & Related papers (2021-12-10T17:11:50Z)
- AmbiFC: Fact-Checking Ambiguous Claims with Evidence [57.7091560922174]
We present AmbiFC, a fact-checking dataset with 10k claims derived from real-world information needs.
We analyze disagreements arising from ambiguity when comparing claims against evidence in AmbiFC.
We develop models for predicting veracity that handle this ambiguity via soft labels; a toy soft-label loss sketch follows this entry.
arXiv Detail & Related papers (2021-04-01T17:40:08Z)
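A minimal sketch of the soft-label idea, assuming annotator votes are kept as a target distribution and trained against with soft-target cross-entropy (not necessarily AmbiFC's exact objective; all numbers below are invented):

```python
# Minimal sketch of soft-label veracity training: instead of a single gold
# class, the target is the distribution of annotator judgements, and the
# loss is cross-entropy against that distribution. Values are hypothetical.
import numpy as np

def soft_cross_entropy(logits: np.ndarray, target: np.ndarray) -> float:
    """Cross-entropy between the predicted distribution and a soft target."""
    logits = logits - logits.max()                    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum()) # log softmax
    return float(-(target * log_probs).sum())

# 3 veracity classes: supported / refuted / neutral.
# 5 annotators split 3/1/1 on an ambiguous claim -> soft target [0.6, 0.2, 0.2].
target = np.array([0.6, 0.2, 0.2])
confident = np.array([4.0, 0.0, 0.0])  # model overly sure of "supported"
hedged = np.array([1.1, 0.0, 0.0])     # model close to the annotator split
print(soft_cross_entropy(confident, target))  # higher loss (~1.64)
print(soft_cross_entropy(hedged, target))     # lower loss (~0.95)
```

The design point is that the loss rewards predicted distributions that mirror annotator disagreement, rather than forcing a confident single-class verdict on genuinely ambiguous claims.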