Detecting Fallacies in Climate Misinformation: A Technocognitive Approach to Identifying Misleading Argumentation
- URL: http://arxiv.org/abs/2405.08254v1
- Date: Tue, 14 May 2024 01:01:44 GMT
- Title: Detecting Fallacies in Climate Misinformation: A Technocognitive Approach to Identifying Misleading Argumentation
- Authors: Francisco Zanartu, John Cook, Markus Wagner, Julian Garcia
- Abstract summary: We develop a dataset mapping different types of climate misinformation to reasoning fallacies.
This dataset is used to train a model to detect fallacies in climate misinformation.
- Score: 0.6496783221842394
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Misinformation about climate change is a complex societal issue requiring holistic, interdisciplinary solutions at the intersection between technology and psychology. One proposed solution is a "technocognitive" approach, involving the synthesis of psychological and computer science research. Psychological research has identified that interventions in response to misinformation require both fact-based (e.g., factual explanations) and technique-based (e.g., explanations of misleading techniques) content. However, little progress has been made on documenting and detecting fallacies in climate misinformation. In this study, we apply a previously developed critical thinking methodology for deconstructing climate misinformation, in order to develop a dataset mapping different types of climate misinformation to reasoning fallacies. This dataset is used to train a model to detect fallacies in climate misinformation. Our study shows F1 scores that are 2.5 to 3.5 better than previous works. The fallacies that are easiest to detect include fake experts and anecdotal arguments, while fallacies that require background knowledge, such as oversimplification, misrepresentation, and slothful induction, are relatively more difficult to detect. This research lays the groundwork for development of solutions where automatically detected climate misinformation can be countered with generative technique-based corrections.
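The pipeline described in the abstract (map misinformation texts to fallacy labels, then train a detector on those pairs) can be sketched with a tiny stdlib-only naive Bayes text classifier. The training examples and label names below are illustrative inventions in the spirit of the paper's dataset, not items from it, and the paper's actual model is a trained neural detector rather than this baseline.

```python
import math
import re
from collections import Counter, defaultdict

# Hypothetical (misinformation claim -> fallacy label) pairs,
# illustrative only; NOT taken from the paper's dataset.
TRAIN = [
    ("a petition of thousands of scientists says there is no consensus on warming", "fake experts"),
    ("this retired physicist signed a petition disputing the consensus", "fake experts"),
    ("my town had a record cold winter so the planet cannot be warming", "anecdote"),
    ("it snowed in texas last week which proves warming is a myth", "anecdote"),
    ("the climate has always changed naturally so humans are not the cause", "oversimplification"),
    ("carbon dioxide is plant food so more of it is simply good", "oversimplification"),
]

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

class FallacyClassifier:
    """Multinomial naive Bayes with Laplace smoothing (stdlib only)."""

    def fit(self, pairs):
        self.word_counts = defaultdict(Counter)  # label -> word -> count
        self.label_counts = Counter()            # label -> number of examples
        self.vocab = set()
        for text, label in pairs:
            tokens = tokenize(text)
            self.word_counts[label].update(tokens)
            self.label_counts[label] += 1
            self.vocab.update(tokens)
        return self

    def predict(self, text):
        total_docs = sum(self.label_counts.values())
        best_label, best_score = None, -math.inf
        for label, n_docs in self.label_counts.items():
            counts = self.word_counts[label]
            total_words = sum(counts.values())
            # log prior + sum of smoothed log likelihoods
            score = math.log(n_docs / total_docs)
            for token in tokenize(text):
                score += math.log((counts[token] + 1) / (total_words + len(self.vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

clf = FallacyClassifier().fit(TRAIN)
print(clf.predict("thousands of scientists signed this petition"))  # -> fake experts
```

The abstract's finding that surface-cue fallacies (fake experts, anecdotes) are easier to detect than knowledge-dependent ones (slothful induction, misrepresentation) is consistent with this kind of lexical model: petition/expert vocabulary is a strong surface signal, while spotting an oversimplification requires background knowledge no bag of words captures.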
Related papers
- Generative Debunking of Climate Misinformation [9.274656542624662]
This study documents the development of large language models that accept as input a climate myth and produce a debunking.
We combine open (Mixtral, Palm2) and proprietary (GPT-4) LLMs with prompting strategies of varying complexity.
Experiments reveal promising performance of GPT-4 and Mixtral if combined with structured prompts.
We release a dataset of high-quality truth-sandwich debunkings, source code and a demo of the debunking system.
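The "truth sandwich" structure behind those debunkings (fact, myth, fallacy, fact) can be sketched as a prompt template. The wording below is an illustrative assumption; the paper's actual structured prompts differ.

```python
def truth_sandwich_prompt(myth: str) -> str:
    """Build an illustrative truth-sandwich debunking prompt for an LLM.

    The four-part structure (fact / myth / fallacy / fact) follows the
    debunking literature; the exact instruction text is invented here.
    """
    return (
        "Debunk the following climate myth using a truth sandwich:\n"
        "1. FACT: lead with the relevant scientific fact.\n"
        "2. MYTH: briefly restate the myth, with a warning that it is false.\n"
        "3. FALLACY: explain the reasoning fallacy the myth commits.\n"
        "4. FACT: close by reinforcing the fact.\n\n"
        f"Myth: {myth}\n"
    )

prompt = truth_sandwich_prompt("Climate has changed before, so current warming is natural.")
print(prompt)
```

Sandwiching the myth between two statements of the fact is deliberate: it keeps the correction, not the myth, as the most prominent and most repeated element of the output.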
arXiv Detail & Related papers (2024-07-08T04:21:58Z)
- Missci: Reconstructing Fallacies in Misrepresented Science [84.32990746227385]
Health-related misinformation on social networks can lead to poor decision-making and real-world dangers.
Missci is a novel argumentation theoretical model for fallacious reasoning.
We present Missci as a dataset to test the critical reasoning abilities of large language models.
arXiv Detail & Related papers (2024-06-05T12:11:10Z)
- Unlearning Climate Misinformation in Large Language Models [17.95497650321137]
Misinformation regarding climate change is a key roadblock in addressing one of the most serious threats to humanity.
This paper investigates factual accuracy in large language models (LLMs) regarding climate information.
arXiv Detail & Related papers (2024-05-29T23:11:53Z)
- InfoLossQA: Characterizing and Recovering Information Loss in Text Simplification [60.10193972862099]
This work proposes a framework to characterize and recover simplification-induced information loss in the form of question-and-answer pairs.
QA pairs are designed to help readers deepen their knowledge of a text.
arXiv Detail & Related papers (2024-01-29T19:00:01Z)
- Modeling Information Change in Science Communication with Semantically Matched Paraphrases [50.67030449927206]
SPICED is the first paraphrase dataset of scientific findings annotated for degree of information change.
SPICED contains 6,000 scientific finding pairs extracted from news stories, social media discussions, and full texts of original papers.
Models trained on SPICED improve downstream performance on evidence retrieval for fact checking of real-world scientific claims.
arXiv Detail & Related papers (2022-10-24T07:44:38Z)
- Fact-Saboteurs: A Taxonomy of Evidence Manipulation Attacks against Fact-Verification Systems [80.3811072650087]
We show that it is possible to subtly modify claim-salient snippets in the evidence and generate diverse and claim-aligned evidence.
The attacks are also robust against post-hoc modifications of the claim.
These attacks can have harmful implications on the inspectable and human-in-the-loop usage scenarios.
arXiv Detail & Related papers (2022-09-07T13:39:24Z)
- DialFact: A Benchmark for Fact-Checking in Dialogue [56.63709206232572]
We construct DialFact, a benchmark dataset of 22,245 annotated conversational claims, paired with pieces of evidence from Wikipedia.
We find that existing fact-checking models trained on non-dialogue data like FEVER fail to perform well on our task.
We propose a simple yet data-efficient solution to effectively improve fact-checking performance in dialogue.
arXiv Detail & Related papers (2021-10-15T17:34:35Z)
- Attacking Open-domain Question Answering by Injecting Misinformation [116.25434773461465]
We study the risk of misinformation to Question Answering (QA) models by investigating the sensitivity of open-domain QA models to misinformation documents.
Experiments show that QA models are vulnerable to even small amounts of evidence contamination brought by misinformation.
We discuss the necessity of building a misinformation-aware QA system that integrates question-answering and misinformation detection.
arXiv Detail & Related papers (2021-10-15T01:55:18Z)
- Automatic Claim Review for Climate Science via Explanation Generation [33.44370581827454]
Scientists and experts have been trying to address inaccurate climate claims by providing manually written feedback on them.
We deploy a fusion-in-decoder approach from open-domain question answering, augmented with supporting passages retrieved from an external knowledge source.
We experiment with different knowledge sources, retrievers, and retriever depths, and demonstrate that even a small number of high-quality manually written explanations helps generate good explanations.
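The retrieve-then-generate setup in that entry can be illustrated with a toy TF-IDF passage retriever. The passages below are invented examples, and real systems use learned retrievers over large external knowledge sources; this sketch only shows the "retrieve supporting passages for a claim" step that feeds the generator.

```python
import math
import re
from collections import Counter

# Invented example passages; the paper retrieves from real
# external knowledge sources.
PASSAGES = [
    "Multiple lines of evidence show the climate is warming due to human activity.",
    "Sea level rise is driven by thermal expansion and melting ice sheets.",
    "Solar output has been flat or declining while global temperatures rose.",
]

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def top_k_passages(query, passages, k=2):
    """Rank passages by a simple TF-IDF overlap score and return the top k."""
    docs = [Counter(tokenize(p)) for p in passages]
    n = len(docs)
    # document frequency -> idf
    df = Counter()
    for doc in docs:
        df.update(doc.keys())
    idf = {w: math.log(n / c) + 1.0 for w, c in df.items()}

    def score(doc):
        q = Counter(tokenize(query))
        return sum(q[w] * doc[w] * idf.get(w, 0.0) ** 2 for w in q)

    ranked = sorted(range(n), key=lambda i: score(docs[i]), reverse=True)
    return [passages[i] for i in ranked[:k]]

results = top_k_passages("is recent warming caused by human activity", PASSAGES)
print(results[0])  # -> the human-activity passage
```

In the fusion-in-decoder setting, each retrieved passage is encoded separately together with the claim, and the decoder attends over all encodings at once to generate the explanation, so retrieval depth (how many passages to fetch) becomes a tunable knob, as the entry above notes.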
arXiv Detail & Related papers (2021-07-30T16:37:45Z)
- ClimaText: A Dataset for Climate Change Topic Detection [2.9767565026354186]
We introduce ClimaText, a dataset for sentence-based climate change topic detection.
We find that popular keyword-based models are not adequate for such a complex and evolving task.
Our analysis reveals a great potential for improvement in several directions.
arXiv Detail & Related papers (2020-12-01T13:42:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.