Combating Misinformation in the Age of LLMs: Opportunities and
Challenges
- URL: http://arxiv.org/abs/2311.05656v1
- Date: Thu, 9 Nov 2023 00:05:27 GMT
- Title: Combating Misinformation in the Age of LLMs: Opportunities and
Challenges
- Authors: Canyu Chen, Kai Shu
- Abstract summary: The emergence of Large Language Models (LLMs) has great potential to reshape the landscape of combating misinformation.
On the one hand, LLMs bring promising opportunities for combating misinformation due to their profound world knowledge and strong reasoning abilities.
On the other hand, the critical challenge is that LLMs can be easily leveraged to generate deceptive misinformation at scale.
- Score: 21.712051537924136
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Misinformation such as fake news and rumors is a serious threat to
information ecosystems and public trust. The emergence of Large Language Models
(LLMs) has great potential to reshape the landscape of combating
misinformation. Generally, LLMs can be a double-edged sword in the fight. On
the one hand, LLMs bring promising opportunities for combating misinformation
due to their profound world knowledge and strong reasoning abilities. Thus, one
emergent question is: how to utilize LLMs to combat misinformation? On the
other hand, the critical challenge is that LLMs can be easily leveraged to
generate deceptive misinformation at scale. Then, another important question
is: how to combat LLM-generated misinformation? In this paper, we first
systematically review the history of combating misinformation before the advent
of LLMs. Then we survey current efforts and present an outlook for each of
these two fundamental questions. The goal of this survey paper is
to facilitate the progress of utilizing LLMs for fighting misinformation and
call for interdisciplinary efforts from different stakeholders for combating
LLM-generated misinformation.
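As a minimal, illustrative sketch of the first question (how LLMs might be utilized to combat misinformation), the snippet below prompts a general-purpose chat model to act as a zero-shot fact-checking classifier for a single claim. This is not a method proposed in the survey; it assumes the openai Python client, a placeholder model name, and illustrative prompt wording and labels.

```python
# Minimal sketch: using a general-purpose LLM as a zero-shot misinformation
# classifier. Illustrative only -- not a method from the survey above.
# Assumes the `openai` Python client and a placeholder chat model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "You are a careful fact-checker. Given the claim below, answer with one of "
    "SUPPORTED, REFUTED, or NOT ENOUGH INFO, followed by a one-sentence rationale "
    "citing the world knowledge you relied on.\n\nClaim: {claim}"
)

def check_claim(claim: str, model: str = "gpt-4o-mini") -> str:
    """Ask the LLM to classify a claim and explain its verdict."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # keep the output stable for auditing
        messages=[{"role": "user", "content": PROMPT.format(claim=claim)}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(check_claim("The Great Wall of China is visible from the Moon with the naked eye."))
```

Such a zero-shot checker relies entirely on the model's parametric world knowledge and can itself be wrong or overconfident, which is the gap that the retrieval-augmentation and robustness work among the related papers below addresses.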
Related papers
- From Deception to Detection: The Dual Roles of Large Language Models in Fake News [0.20482269513546458]
Fake news poses a significant threat to the integrity of information ecosystems and public trust.
The advent of Large Language Models (LLMs) holds considerable promise for transforming the battle against fake news.
This paper explores the capability of various LLMs in effectively combating fake news.
arXiv Detail & Related papers (2024-09-25T22:57:29Z)
- LLM Echo Chamber: personalized and automated disinformation [0.0]
Large Language Models can spread persuasive, humanlike misinformation at scale, which could influence public opinion.
This study examines these risks, focusing on LLMs' ability to propagate misinformation as if it were factual.
To investigate this, we built the LLM Echo Chamber, a controlled digital environment simulating social media chatrooms, where misinformation often spreads.
This setup, evaluated by GPT-4 for persuasiveness and harmfulness, sheds light on the ethical concerns surrounding LLMs and emphasizes the need for stronger safeguards against misinformation.
arXiv Detail & Related papers (2024-09-24T17:04:12Z)
- Can Editing LLMs Inject Harm? [122.83469484328465]
We propose to reformulate knowledge editing as a new type of safety threat for Large Language Models.
For the risk of misinformation injection, we first categorize it into commonsense misinformation injection and long-tail misinformation injection.
For the risk of bias injection, we find that not only can biased sentences be injected into LLMs with high effectiveness, but even a single injected biased sentence can increase the model's overall bias.
arXiv Detail & Related papers (2024-07-29T17:58:06Z)
- LEMMA: Towards LVLM-Enhanced Multimodal Misinformation Detection with External Knowledge Augmentation [58.524237916836164]
We propose LEMMA: LVLM-Enhanced Multimodal Misinformation Detection with External Knowledge Augmentation.
Our method improves accuracy over the strongest baseline LVLM by 7% and 13% on the Twitter and Fakeddit datasets, respectively.
arXiv Detail & Related papers (2024-02-19T08:32:27Z)
- When Do LLMs Need Retrieval Augmentation? Mitigating LLMs' Overconfidence Helps Retrieval Augmentation [66.01754585188739]
Large Language Models (LLMs) have been found to struggle to recognize when they lack certain knowledge.
Retrieval Augmentation (RA) has been extensively studied to mitigate LLMs' hallucinations.
We propose several methods to enhance LLMs' perception of knowledge boundaries and show that they are effective in reducing overconfidence.
arXiv Detail & Related papers (2024-02-18T04:57:19Z)
- Disinformation Capabilities of Large Language Models [0.564232659769944]
This paper presents a study of the disinformation capabilities of the current generation of large language models (LLMs).
We evaluated the capabilities of 10 LLMs using 20 disinformation narratives.
We conclude that LLMs are able to generate convincing news articles that agree with dangerous disinformation narratives.
arXiv Detail & Related papers (2023-11-15T10:25:30Z)
- RECALL: A Benchmark for LLMs Robustness against External Counterfactual Knowledge [69.79676144482792]
This study aims to evaluate the ability of LLMs to distinguish reliable information from external knowledge.
Our benchmark consists of two tasks, Question Answering and Text Generation, and for each task, we provide models with a context containing counterfactual information.
arXiv Detail & Related papers (2023-11-14T13:24:19Z)
- Avalon's Game of Thoughts: Battle Against Deception through Recursive Contemplation [80.126717170151]
This study utilizes the intricate Avalon game as a testbed to explore LLMs' potential in deceptive environments.
We introduce a novel framework, Recursive Contemplation (ReCon), to enhance LLMs' ability to identify and counteract deceptive information.
arXiv Detail & Related papers (2023-10-02T16:27:36Z)
- Can LLM-Generated Misinformation Be Detected? [18.378744138365537]
Large Language Models (LLMs) can be exploited to generate misinformation.
A fundamental research question is: will LLM-generated misinformation cause more harm than human-written misinformation?
arXiv Detail & Related papers (2023-09-25T00:45:07Z)
- On the Risk of Misinformation Pollution with Large Language Models [127.1107824751703]
We investigate the potential misuse of modern Large Language Models (LLMs) for generating credible-sounding misinformation.
Our study reveals that LLMs can act as effective misinformation generators, leading to a significant degradation in the performance of Open-Domain Question Answering (ODQA) systems.
arXiv Detail & Related papers (2023-05-23T04:10:26Z)