Fact-checking information generated by a large language model can decrease news discernment
- URL: http://arxiv.org/abs/2308.10800v3
- Date: Tue, 26 Dec 2023 18:20:20 GMT
- Title: Fact-checking information generated by a large language model can decrease news discernment
- Authors: Matthew R. DeVerna, Harry Yaojun Yan, Kai-Cheng Yang, Filippo Menczer
- Abstract summary: We investigate the impact of fact-checking information generated by a popular large language model on belief in, and sharing intent of, political news.
We find that it does not significantly affect participants' ability to discern headline accuracy or share accurate news.
On the positive side, the AI fact-checking information increases sharing intents for correctly labeled true headlines.
- Score: 7.444681337745949
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fact checking can be an effective strategy against misinformation, but its
implementation at scale is impeded by the overwhelming volume of information
online. Recent artificial intelligence (AI) language models have shown
impressive ability in fact-checking tasks, but how humans interact with
fact-checking information provided by these models is unclear. Here, we
investigate the impact of fact-checking information generated by a popular
large language model (LLM) on belief in, and sharing intent of, political news
in a preregistered randomized controlled experiment. Although the LLM performs
reasonably well in debunking false headlines, we find that it does not
significantly affect participants' ability to discern headline accuracy or
share accurate news. Subsequent analysis reveals that the AI fact-checker is
harmful in specific cases: it decreases beliefs in true headlines that it
mislabels as false and increases beliefs in false headlines that it is unsure
about. On the positive side, the AI fact-checking information increases sharing
intents for correctly labeled true headlines. When participants are given the
option to view LLM fact checks and choose to do so, they are significantly more
likely to share both true and false news but only more likely to believe false
news. Our findings highlight an important source of potential harm stemming
from AI applications and underscore the critical need for policies to prevent
or mitigate such unintended consequences.
Related papers
- Correcting misinformation on social media with a large language model [14.69780455372507]
High-quality and timely correction of misinformation that identifies and explains its (in)accuracies has been shown to effectively reduce false beliefs.
We propose and evaluate 13 dimensions of misinformation correction quality, ranging from the accuracy of identifications and factuality of explanations to the relevance and credibility of references.
The results demonstrate the ability of the proposed system, MUSE, to promptly write high-quality responses to potential misinformation on social media.
arXiv Detail & Related papers (2024-03-17T10:59:09Z)
- Adapting Fake News Detection to the Era of Large Language Models [48.5847914481222]
We study the interplay between machine-(paraphrased) real news, machine-generated fake news, human-written fake news, and human-written real news.
Our experiments reveal an interesting pattern that detectors trained exclusively on human-written articles can indeed perform well at detecting machine-generated fake news, but not vice versa.
arXiv Detail & Related papers (2023-11-02T08:39:45Z)
- The Perils & Promises of Fact-checking with Large Language Models [55.869584426820715]
Large Language Models (LLMs) are increasingly trusted to write academic papers, lawsuits, and news articles.
We evaluate the use of LLM agents in fact-checking by having them phrase queries, retrieve contextual data, and make decisions.
Our results show that LLMs perform substantially better when equipped with contextual information.
While LLMs show promise in fact-checking, caution is essential due to inconsistent accuracy.
arXiv Detail & Related papers (2023-10-20T14:49:47Z)
- News Verifiers Showdown: A Comparative Performance Evaluation of ChatGPT 3.5, ChatGPT 4.0, Bing AI, and Bard in News Fact-Checking [0.0]
OpenAI's ChatGPT 3.5 and 4.0, Google's Bard (LaMDA), and Microsoft's Bing AI were evaluated.
The results showed a moderate proficiency across all models, with an average score of 65.25 out of 100.
OpenAI's GPT-4.0 stood out with a score of 71, suggesting an edge in newer LLMs' abilities to differentiate fact from deception.
arXiv Detail & Related papers (2023-06-18T04:30:29Z)
- Uncertainty-Aware Reward-based Deep Reinforcement Learning for Intent Analysis of Social Media Information [17.25399815431264]
Distinguishing the types of fake news spreaders based on their intent is critical.
We propose an intent classification framework that identifies the intent behind spreading fake news.
arXiv Detail & Related papers (2023-02-19T00:54:33Z)
- Missing Counter-Evidence Renders NLP Fact-Checking Unrealistic for Misinformation [67.69725605939315]
Misinformation emerges in times of uncertainty when credible information is limited.
This is challenging for NLP-based fact-checking as it relies on counter-evidence, which may not yet be available.
arXiv Detail & Related papers (2022-10-25T09:40:48Z)
- FakeNewsLab: Experimental Study on Biases and Pitfalls Preventing us from Distinguishing True from False News [0.2741266294612776]
This work highlights a series of pitfalls that can influence human annotators when building false news datasets.
It also challenges the common rationale behind AI systems that suggest users read the full article before re-sharing.
arXiv Detail & Related papers (2021-10-22T12:02:16Z)
- Misinfo Belief Frames: A Case Study on Covid & Climate News [49.979419711713795]
We propose a formalism for understanding how readers perceive the reliability of news and the impact of misinformation.
We introduce the Misinfo Belief Frames (MBF) corpus, a dataset of 66k inferences over 23.5k headlines.
Our results using large-scale language modeling to predict misinformation frames show that machine-generated inferences can influence readers' trust in news headlines.
arXiv Detail & Related papers (2021-04-18T09:50:11Z)
- Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounding arises because sharing behavior is inherently related to both user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.