Fact-checking information from large language models can decrease headline discernment
- URL: http://arxiv.org/abs/2308.10800v4
- Date: Wed, 7 Aug 2024 19:49:53 GMT
- Title: Fact-checking information from large language models can decrease headline discernment
- Authors: Matthew R. DeVerna, Harry Yaojun Yan, Kai-Cheng Yang, Filippo Menczer
- Abstract summary: We investigate the impact of fact-checking information generated by a popular large language model on belief in, and sharing intent of, political news headlines.
We find that this information does not significantly improve participants' ability to discern headline accuracy or share accurate news.
Our findings highlight an important source of potential harm stemming from AI applications.
- Score: 6.814801748069122
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fact checking can be an effective strategy against misinformation, but its implementation at scale is impeded by the overwhelming volume of information online. Recent artificial intelligence (AI) language models have shown impressive ability in fact-checking tasks, but how humans interact with fact-checking information provided by these models is unclear. Here, we investigate the impact of fact-checking information generated by a popular large language model (LLM) on belief in, and sharing intent of, political news headlines in a preregistered randomized control experiment. Although the LLM accurately identifies most false headlines (90%), we find that this information does not significantly improve participants' ability to discern headline accuracy or share accurate news. In contrast, viewing human-generated fact checks enhances discernment in both cases. Subsequent analysis reveals that the AI fact-checker is harmful in specific cases: it decreases beliefs in true headlines that it mislabels as false and increases beliefs in false headlines that it is unsure about. On the positive side, AI fact-checking information increases the sharing intent for correctly labeled true headlines. When participants are given the option to view LLM fact checks and choose to do so, they are significantly more likely to share both true and false news but only more likely to believe false headlines. Our findings highlight an important source of potential harm stemming from AI applications and underscore the critical need for policies to prevent or mitigate such unintended consequences.
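For readers unfamiliar with the outcome measure, the sketch below illustrates one common way "discernment" is operationalized in this literature: the gap between ratings given to true headlines and ratings given to false headlines, computed separately for belief and sharing intent. This is a minimal illustrative sketch under that assumption, not the authors' analysis code; all column and variable names are hypothetical.
```python
# Minimal sketch (not the authors' code): discernment as the difference between
# mean ratings for true headlines and mean ratings for false headlines.
# All column and variable names are hypothetical.
import pandas as pd

def discernment(df: pd.DataFrame, rating_col: str) -> float:
    """Mean rating for true headlines minus mean rating for false headlines."""
    true_mean = df.loc[df["headline_is_true"], rating_col].mean()
    false_mean = df.loc[~df["headline_is_true"], rating_col].mean()
    return true_mean - false_mean

# Hypothetical long-format data: one row per participant x headline.
responses = pd.DataFrame({
    "condition": ["control", "control", "llm_factcheck", "llm_factcheck"],
    "headline_is_true": [True, False, True, False],
    "belief_rating": [0.8, 0.4, 0.7, 0.5],    # belief in headline accuracy
    "sharing_rating": [0.6, 0.3, 0.5, 0.4],   # intent to share
})

for condition, group in responses.groupby("condition"):
    print(condition,
          "belief discernment:", round(discernment(group, "belief_rating"), 2),
          "sharing discernment:", round(discernment(group, "sharing_rating"), 2))
```
Comparing these per-condition gaps is the kind of contrast the experiment reports: higher discernment means participants rate true headlines more favorably than false ones.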
Related papers
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z)
- Measuring Falseness in News Articles based on Concealment and Overstatement [5.383724566787227]
This research investigates the extent of misinformation in journalistic articles.
It measures misinformation using two metrics, concealment and overstatement, to explore how information comes to be interpreted as false.
arXiv Detail & Related papers (2024-07-31T20:45:56Z)
- The Perils & Promises of Fact-checking with Large Language Models [55.869584426820715]
Large Language Models (LLMs) are increasingly trusted to write academic papers, lawsuits, and news articles.
We evaluate the use of LLM agents in fact-checking by having them phrase queries, retrieve contextual data, and make decisions.
Our results show that LLMs perform markedly better when equipped with contextual information.
While LLMs show promise in fact-checking, caution is essential due to inconsistent accuracy.
arXiv Detail & Related papers (2023-10-20T14:49:47Z)
- News Verifiers Showdown: A Comparative Performance Evaluation of ChatGPT 3.5, ChatGPT 4.0, Bing AI, and Bard in News Fact-Checking [0.0]
OpenAI's ChatGPT 3.5 and 4.0, Google's Bard (LaMDA), and Microsoft's Bing AI were evaluated.
The results showed a moderate proficiency across all models, with an average score of 65.25 out of 100.
OpenAI's GPT-4.0 stood out with a score of 71, suggesting an edge in newer LLMs' abilities to differentiate fact from deception.
arXiv Detail & Related papers (2023-06-18T04:30:29Z)
- Missing Counter-Evidence Renders NLP Fact-Checking Unrealistic for Misinformation [67.69725605939315]
Misinformation emerges in times of uncertainty when credible information is limited.
This is challenging for NLP-based fact-checking as it relies on counter-evidence, which may not yet be available.
arXiv Detail & Related papers (2022-10-25T09:40:48Z)
- Deceptive AI Systems That Give Explanations Are Just as Convincing as Honest AI Systems in Human-Machine Decision Making [38.71592583606443]
The ability to discern between true and false information is essential to making sound decisions.
With the recent increase in AI-based disinformation campaigns, it has become critical to understand the influence of deceptive systems on human information processing.
arXiv Detail & Related papers (2022-09-23T20:09:03Z)
- FakeNewsLab: Experimental Study on Biases and Pitfalls Preventing us from Distinguishing True from False News [0.2741266294612776]
This work highlights a series of pitfalls that can influence human annotators when building false news datasets.
It also challenges the common rationale behind AI systems that suggest users read the full article before re-sharing.
arXiv Detail & Related papers (2021-10-22T12:02:16Z)
- Misinfo Belief Frames: A Case Study on Covid & Climate News [49.979419711713795]
We propose a formalism for understanding how readers perceive the reliability of news and the impact of misinformation.
We introduce the Misinfo Belief Frames (MBF) corpus, a dataset of 66k inferences over 23.5k headlines.
Our results using large-scale language modeling to predict misinformation frames show that machine-generated inferences can influence readers' trust in news headlines.
arXiv Detail & Related papers (2021-04-18T09:50:11Z)
- Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders are characterized by fake news sharing behavior, which inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.