Automatic Fake News Detection: Are current models "fact-checking" or
"gut-checking"?
- URL: http://arxiv.org/abs/2204.07229v1
- Date: Thu, 14 Apr 2022 21:05:37 GMT
- Title: Automatic Fake News Detection: Are current models "fact-checking" or
"gut-checking"?
- Authors: Ian Kelk, Benjamin Basseri, Wee Yi Lee, Richard Qiu, Chris Tanner
- Abstract summary: Automatic fake news detection models are ostensibly based on logic.
However, it has been shown that the same results, or better, can be achieved without considering the claim at all, using only the evidence.
This implies that other signals are contained within the examined evidence.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic fake news detection models are ostensibly based on logic, where the
truth of a claim made in a headline can be determined by supporting or refuting
evidence found in a resulting web query. These models are believed to be
reasoning in some way; however, it has been shown that these same results, or
better, can be achieved without considering the claim at all -- only the
evidence. This implies that other signals are contained within the examined
evidence, and could be based on manipulable factors such as emotion, sentiment,
or part-of-speech (POS) frequencies, which are vulnerable to adversarial
inputs. We neutralize some of these signals through multiple forms of both
neural and non-neural pre-processing and style transfer, and find that this
flattening of extraneous indicators can induce the models to actually require
both claims and evidence to perform well. We conclude with the construction of
a model using emotion vectors built from a lexicon and passed through an
"emotional attention" mechanism to appropriately weight certain emotions. We
provide quantifiable results that prove our hypothesis that manipulable
features are being used for fact-checking.
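The "emotional attention" component described in the abstract lends itself to a compact sketch. The following Python illustration is a hypothetical reading, not the authors' released code: emotion-count vectors are built from a word-to-emotion lexicon (an NRC-style resource with eight categories is assumed), and a small learned layer re-weights the emotion dimensions before they feed a downstream classifier.
```python
import torch
import torch.nn as nn

K = 8  # emotion categories assumed in the lexicon (anger, fear, joy, ...)

def emotion_vector(tokens, lexicon):
    """Count lexicon hits per emotion category and length-normalize."""
    v = torch.zeros(K)
    for tok in tokens:
        for k in lexicon.get(tok, []):  # lexicon: word -> list of emotion ids
            v[k] += 1.0
    return v / max(len(tokens), 1)

class EmotionalAttention(nn.Module):
    """Score each emotion dimension and re-weight the emotion vector."""
    def __init__(self, k: int = K):
        super().__init__()
        self.score = nn.Linear(k, k)

    def forward(self, e):  # e: (batch, K) emotion vectors
        weights = torch.softmax(self.score(e), dim=-1)
        return weights * e  # emphasize the informative emotions
```
The re-weighted emotion vector could then be concatenated with standard claim and evidence features; the exact fusion is left open here.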
Related papers
- Read it Twice: Towards Faithfully Interpretable Fact Verification by
Revisiting Evidence [59.81749318292707]
We propose a fact verification model named ReRead to retrieve evidence and verify claims.
The proposed system achieves significant improvements over the best-reported models under different settings.
arXiv Detail & Related papers (2023-05-02T03:23:14Z)
- Fact-Saboteurs: A Taxonomy of Evidence Manipulation Attacks against Fact-Verification Systems [80.3811072650087]
We show that it is possible to subtly modify claim-salient snippets in the evidence and generate diverse and claim-aligned evidence.
The attacks are also robust against post-hoc modifications of the claim.
These attacks can have harmful implications for inspectable and human-in-the-loop usage scenarios.
arXiv Detail & Related papers (2022-09-07T13:39:24Z)
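As a reading aid, here is a toy illustration of the kind of evidence-manipulation probe the Fact-Saboteurs entry describes; the single-word substitution and the `verifier` callable are placeholders, not the paper's attack taxonomy.
```python
def perturb_salient(evidence: str, salient: str, replacement: str) -> str:
    """Subtly edit the claim-salient snippet while leaving the rest intact."""
    return evidence.replace(salient, replacement)

evidence = "The audit confirms the program reduced costs by 12%."
attacked = perturb_salient(evidence, "reduced", "barely changed")
# before = verifier(claim, evidence)
# after  = verifier(claim, attacked)  # attack succeeds if the verdict flips
```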
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
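A minimal sketch of the counterfactual-consistency check described above, assuming a generic `nli_model` callable and a naive string-level predicate negation; both are illustrative stand-ins, not the paper's method.
```python
def counterfactual_consistent(nli_model, premise: str, hypothesis: str,
                              explained_predicate: str) -> bool:
    """If the explanation hinges on a predicate, negating that predicate in
    the hypothesis should change an 'entailment' prediction."""
    original = nli_model(premise, hypothesis)
    flipped_hyp = hypothesis.replace(explained_predicate,
                                     f"not {explained_predicate}")
    flipped = nli_model(premise, flipped_hyp)
    return not (original == "entailment" and flipped == "entailment")
```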
- Synthetic Disinformation Attacks on Automated Fact Verification Systems [53.011635547834025]
We explore the sensitivity of automated fact-checkers to synthetic adversarial evidence in two simulated settings.
We show that these systems suffer significant performance drops against these attacks.
We discuss the growing threat of modern NLG systems as generators of disinformation.
arXiv Detail & Related papers (2022-02-18T19:01:01Z)
- Mining Fine-grained Semantics via Graph Neural Networks for Evidence-based Fake News Detection [20.282527436527765]
We propose a unified Graph-based sEmantic sTructure mining framework, GET for short.
We model claims and evidence as graph-structured data and capture long-distance semantic dependencies.
After obtaining contextual semantic information, our model reduces information redundancy by performing graph structure learning.
arXiv Detail & Related papers (2022-01-18T11:28:36Z)
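For intuition, here is a minimal graph-convolution step over claim and evidence nodes in the spirit of GET; the placeholder adjacency and the single layer are illustrative assumptions, not the paper's architecture.
```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution over claim and evidence nodes."""
    def __init__(self, dim: int):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (nodes, dim) embeddings of claim and evidence units
        # adj: (nodes, nodes) normalized adjacency; long-distance dependencies
        #      come from stacking layers or adding cross-sentence edges
        return torch.relu(self.lin(adj @ x))

# Usage sketch: nodes = claim sentences + evidence sentences
x = torch.randn(6, 32)
adj = torch.eye(6)  # placeholder graph; GET learns/refines the structure
h = GCNLayer(32)(x, adj)
```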
- Is My Model Using The Right Evidence? Systematic Probes for Examining Evidence-Based Tabular Reasoning [26.168211982441875]
Neural models routinely report state-of-the-art performance across NLP tasks involving reasoning.
Our experiments demonstrate that a BERT-based model representative of today's state-of-the-art fails to properly reason on several of these probes.
arXiv Detail & Related papers (2021-08-02T01:14:19Z)
- Automatic Fake News Detection: Are Models Learning to Reason? [9.143551270841858]
We investigate the relationship and relative importance of claims and evidence.
Surprisingly, we find on political fact checking datasets that most often the highest effectiveness is obtained by utilizing only the evidence.
This highlights an important problem in what constitutes evidence in existing approaches for automatic fake news detection.
arXiv Detail & Related papers (2021-05-17T09:34:03Z)
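The evidence-only finding above suggests a simple ablation; a hedged sketch follows, with the toy data and the `[SEP]` joining convention as illustrative assumptions.
```python
dataset = [
    ("Claim A about a politician.", "Retrieved snippet discussing A.", 1),
    ("Claim B about a policy.",     "Retrieved snippet discussing B.", 0),
]

def make_input(claim: str, evidence: str, mode: str) -> str:
    if mode == "claim_only":
        return claim
    if mode == "evidence_only":
        return evidence
    return f"{claim} [SEP] {evidence}"

for mode in ("claim_only", "evidence_only", "claim_and_evidence"):
    texts = [make_input(c, e, mode) for c, e, _ in dataset]
    # train/evaluate the same classifier on each variant; if evidence_only
    # matches claim_and_evidence, the claim is effectively being ignored
```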
- AmbiFC: Fact-Checking Ambiguous Claims with Evidence [57.7091560922174]
We present AmbiFC, a fact-checking dataset with 10k claims derived from real-world information needs.
We analyze disagreements arising from ambiguity when comparing claims against evidence in AmbiFC.
We develop models that predict veracity while handling this ambiguity via soft labels.
arXiv Detail & Related papers (2021-04-01T17:40:08Z)
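A minimal sketch of soft-label veracity training as the AmbiFC summary describes; the three-class ordering and the KL-divergence loss are assumptions, not necessarily the paper's exact setup.
```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 3, requires_grad=True)  # (batch, {support, refute, neutral})
soft_labels = torch.tensor([[0.6, 0.3, 0.1],    # annotator proportions kept
                            [0.2, 0.7, 0.1],    # as a distribution instead of
                            [0.5, 0.5, 0.0],    # a single hard label
                            [0.1, 0.1, 0.8]])
loss = F.kl_div(F.log_softmax(logits, dim=-1), soft_labels, reduction="batchmean")
loss.backward()
```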
- A Controllable Model of Grounded Response Generation [122.7121624884747]
Current end-to-end neural conversation models inherently lack the flexibility to impose semantic control in the response generation process.
We propose a framework that we call controllable grounded response generation (CGRG).
We show that using this framework, a transformer based model with a novel inductive attention mechanism, trained on a conversation-like Reddit dataset, outperforms strong generation baselines.
arXiv Detail & Related papers (2020-05-01T21:22:08Z)
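To make the CGRG setup concrete, here is a rough sketch of input construction; the separator tokens and the commented-out generation call are assumptions, not the paper's exact format.
```python
def build_cgrg_input(context: str, control_phrases: list, grounding: str) -> str:
    """Concatenate dialogue context, user-chosen control phrases, and
    grounding snippets into a single sequence for a seq2seq model."""
    controls = " ; ".join(control_phrases)
    return f"{context} <ctrl> {controls} <ground> {grounding}"

prompt = build_cgrg_input(
    context="Have you been to the new observatory?",
    control_phrases=["opens in May", "free on weekends"],
    grounding="The observatory opens in May and is free on weekends.",
)
# response = model.generate(prompt)  # in the paper, an inductive attention
# mask limits which response tokens can attend to which control phrases
```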
This list is automatically generated from the titles and abstracts of the papers on this site.