Missci: Reconstructing Fallacies in Misrepresented Science
- URL: http://arxiv.org/abs/2406.03181v1
- Date: Wed, 5 Jun 2024 12:11:10 GMT
- Title: Missci: Reconstructing Fallacies in Misrepresented Science
- Authors: Max Glockner, Yufang Hou, Preslav Nakov, Iryna Gurevych
- Abstract summary: Health-related misinformation on social networks can lead to poor decision-making and real-world dangers.
Missci is a novel argumentation theoretical model for fallacious reasoning.
We present Missci as a dataset to test the critical reasoning abilities of large language models.
- Score: 84.32990746227385
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Health-related misinformation on social networks can lead to poor decision-making and real-world dangers. Such misinformation often misrepresents scientific publications and cites them as "proof" to gain perceived credibility. To effectively counter such claims automatically, a system must explain how the claim was falsely derived from the cited publication. Current methods for automated fact-checking or fallacy detection neglect to assess the (mis)used evidence in relation to misinformation claims, which is required to detect the mismatch between them. To address this gap, we introduce Missci, a novel argumentation theoretical model for fallacious reasoning, together with a new dataset of real-world misinformation that misrepresents biomedical publications. Unlike previous fallacy detection datasets, Missci (i) focuses on implicit fallacies between the relevant content of the cited publication and the inaccurate claim, and (ii) requires models to verbalize the fallacious reasoning in addition to classifying it. We present Missci as a dataset to test the critical reasoning abilities of large language models (LLMs) that are required to reconstruct real-world fallacious arguments, in a zero-shot setting. We evaluate two representative LLMs and the impact of different levels of detail about the fallacy classes provided to the LLM via prompts. Our experiments and human evaluation show promising results for GPT-4, while also demonstrating the difficulty of this task.
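The zero-shot setup described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual prompts: the fallacy class names, definitions, and prompt wording below are illustrative assumptions, and the `detail` parameter only mimics the idea of giving the LLM more or less information about the fallacy classes.

```python
# Illustrative sketch of a zero-shot fallacy-reconstruction prompt.
# The class names/definitions are placeholders, not Missci's exact taxonomy.
FALLACY_CLASSES = {
    "Hasty Generalization": "Drawing a broad conclusion from a narrow finding.",
    "False Equivalence": "Treating two dissimilar things as interchangeable.",
    "Impossible Expectations": "Demanding an unrealistic standard of proof.",
}

def build_prompt(publication_context: str, claim: str, detail: str = "definitions") -> str:
    """Build a zero-shot prompt; `detail` controls how much class
    information the model sees ('names' vs. 'definitions')."""
    if detail == "names":
        classes = "\n".join(f"- {name}" for name in FALLACY_CLASSES)
    else:
        classes = "\n".join(f"- {name}: {desc}" for name, desc in FALLACY_CLASSES.items())
    return (
        "A claim misrepresents a scientific publication.\n"
        f"Publication context: {publication_context}\n"
        f"Claim: {claim}\n"
        "Verbalize the fallacious reasoning that links the context to the claim, "
        "then classify it as one of these fallacy classes:\n"
        f"{classes}"
    )

prompt = build_prompt(
    "The study observed the effect in vitro in cell cultures.",
    "The drug cures the disease in humans.",
    detail="definitions",
)
print(prompt)
```

The returned string would then be sent to an LLM; the two output requirements (verbalize the reasoning, then classify it) mirror the task design described above.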
Related papers
- Grounding Fallacies Misrepresenting Scientific Publications in Evidence [84.32990746227385]
We introduce MissciPlus, an extension of the fallacy detection dataset Missci.
MissciPlus builds on Missci by grounding the applied fallacies in real-world passages from misrepresented studies.
MissciPlus is the first logical fallacy dataset which pairs the real-world misrepresented evidence with incorrect claims.
arXiv Detail & Related papers (2024-08-23T03:16:26Z)
- Large Language Models are Few-Shot Training Example Generators: A Case Study in Fallacy Recognition [49.38757847011105]
Computational fallacy recognition faces challenges due to the diverse genres, domains, and types of fallacies found in datasets.
We aim to enhance existing models for fallacy recognition by incorporating additional context and by leveraging large language models to generate synthetic data.
Our evaluation results demonstrate consistent improvements across fallacy types, datasets, and generators.
arXiv Detail & Related papers (2023-11-16T04:17:47Z)
- A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning [73.77088902676306]
We take a closer look at the self-verification abilities of large language models (LLMs) in the context of logical reasoning.
Our main findings suggest that existing LLMs could struggle to identify fallacious reasoning steps accurately and may fall short of guaranteeing the validity of self-verification methods.
arXiv Detail & Related papers (2023-11-14T07:13:10Z)
- Reinforcement Learning-based Counter-Misinformation Response Generation: A Case Study of COVID-19 Vaccine Misinformation [19.245814221211415]
Non-expert ordinary users act as eyes-on-the-ground who proactively counter misinformation.
In this work, we create two novel datasets of misinformation and counter-misinformation response pairs.
We propose MisinfoCorrect, a reinforcement learning-based framework that learns to generate counter-misinformation responses.
arXiv Detail & Related papers (2023-03-11T15:55:01Z)
- Mining Fine-grained Semantics via Graph Neural Networks for Evidence-based Fake News Detection [20.282527436527765]
We propose a unified Graph-based sEmantic sTructure mining framework, GET for short.
We model claims and evidence as graph-structured data and capture the long-distance semantic dependencies.
After obtaining contextual semantic information, our model reduces information redundancy by performing graph structure learning.
arXiv Detail & Related papers (2022-01-18T11:28:36Z)
- AmbiFC: Fact-Checking Ambiguous Claims with Evidence [57.7091560922174]
We present AmbiFC, a fact-checking dataset with 10k claims derived from real-world information needs.
We analyze disagreements arising from ambiguity when comparing claims against evidence in AmbiFC.
We develop models that predict veracity and handle this ambiguity via soft labels.
arXiv Detail & Related papers (2021-04-01T17:40:08Z)
- Misinformation Has High Perplexity [55.47422012881148]
We propose to leverage the perplexity to debunk false claims in an unsupervised manner.
First, we extract reliable evidence from scientific and news sources according to sentence similarity to the claims.
Second, we prime a language model with the extracted evidence and finally evaluate the correctness of given claims based on the perplexity scores at debunking time.
arXiv Detail & Related papers (2020-06-08T15:13:44Z)
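The perplexity pipeline in the last entry above (extract evidence, prime a language model with it, then score the claim's perplexity) can be illustrated with a minimal sketch. The smoothed unigram model here is a toy stand-in for the pretrained neural LM such systems actually use, and the example sentences are invented:

```python
import math
from collections import Counter

def unigram_perplexity(evidence: str, claim: str) -> float:
    """Toy stand-in for the pipeline: 'prime' a unigram model on the
    extracted evidence, then score the claim's perplexity under it.
    Real systems use a pretrained neural LM; this is only illustrative."""
    counts = Counter(evidence.lower().split())
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 pseudo-type for unseen words
    tokens = claim.lower().split()
    log_prob = 0.0
    for tok in tokens:
        p = (counts[tok] + 1) / (total + vocab)  # Laplace-smoothed probability
        log_prob += math.log(p)
    return math.exp(-log_prob / len(tokens))  # perplexity = exp(-avg log-likelihood)

evidence = "the vaccine was shown to be safe and effective in trials"
supported = "the vaccine is safe and effective"
unsupported = "the vaccine causes severe unrelated illness"

# A claim consistent with the evidence scores lower perplexity
# than one the evidence does not support.
print(unigram_perplexity(evidence, supported) < unigram_perplexity(evidence, unsupported))
# → True
```

The unsupervised debunking signal is exactly this comparison: claims whose wording the evidence-primed model finds surprising (high perplexity) are flagged as likely false.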
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.