Can Large Language Models Detect Rumors on Social Media?
- URL: http://arxiv.org/abs/2402.03916v2
- Date: Thu, 8 Feb 2024 16:09:36 GMT
- Title: Can Large Language Models Detect Rumors on Social Media?
- Authors: Qiang Liu, Xiang Tao, Junfei Wu, Shu Wu, Liang Wang
- Abstract summary: We investigate the use of Large Language Models (LLMs) for rumor detection on social media.
We propose an LLM-empowered Rumor Detection (LeRuD) approach, in which we design prompts to teach LLMs to reason over important clues in news and comments.
LeRuD outperforms several state-of-the-art rumor detection models by 3.2% to 7.7%.
- Score: 21.678652268122296
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we investigate the use of Large Language Models (LLMs) for rumor
detection on social media. However, it is challenging for LLMs to reason over
the entire propagation information on social media, which contains the news
content and numerous comments, because LLMs may not concentrate on key clues in
the complex propagation information and may struggle to reason when facing
massive and redundant information. Accordingly, we propose an LLM-empowered
Rumor Detection (LeRuD) approach, in which we design prompts to teach LLMs to
reason over important clues in the news and comments, and divide the entire
propagation information into a Chain-of-Propagation to reduce the LLMs' burden.
We conduct extensive experiments on the Twitter and Weibo datasets, and LeRuD
outperforms several state-of-the-art rumor detection models by 3.2% to 7.7%.
Moreover, since it relies on LLMs, LeRuD requires no training data, and thus
shows more promising rumor detection ability in few-shot or zero-shot
scenarios.
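The Chain-of-Propagation idea in the abstract can be pictured as feeding the comments to the LLM in small batches and carrying a compact running analysis forward between calls. The sketch below is a minimal illustration of that idea, assuming an OpenAI-style chat API; the prompt wording, chunk size, model name, and decision rule are illustrative placeholders, not the paper's actual prompts.

```python
# Minimal sketch of a Chain-of-Propagation style prompting loop, assuming an
# OpenAI-compatible chat client. Prompts, chunk size, and the model name are
# placeholders for illustration, not the LeRuD paper's exact design.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def ask(prompt: str) -> str:
    """Send a single-turn prompt to the chat model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def detect_rumor(news: str, comments: list[str], chunk_size: int = 10) -> str:
    """Ask the LLM to reason over the news and its comments in small batches."""
    # Step 1: analyze the news content itself for clues of misinformation.
    analysis = ask(
        "You are a rumor-detection assistant. Analyze the following news for "
        f"clues of misinformation:\n{news}"
    )
    # Step 2: walk the propagation chain in chunks so the model is never shown
    # the full, redundant comment stream at once.
    for i in range(0, len(comments), chunk_size):
        batch = "\n".join(comments[i : i + chunk_size])
        analysis = ask(
            f"Given your previous analysis:\n{analysis}\n\n"
            "Update it using these new comments (look for denials, doubts, "
            f"or corroboration):\n{batch}"
        )
    # Step 3: final verdict based on the accumulated analysis.
    return ask(
        f"Based on this analysis, answer 'rumor' or 'non-rumor' only:\n{analysis}"
    )
```

The point of the chunked loop is that the model never sees the entire comment stream in one prompt; only a short running analysis is carried between calls, which is one way to read the abstract's claim about reducing the LLMs' burden.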
Related papers
- How to Protect Yourself from 5G Radiation? Investigating LLM Responses to Implicit Misinformation [24.355564722047244]
Large Language Models (LLMs) are widely deployed in diverse scenarios.
The extent to which they could tacitly spread misinformation emerges as a critical safety concern.
We curated ECHOMIST, the first benchmark for implicit misinformation.
arXiv Detail & Related papers (2025-03-12T17:59:18Z)
- An Empirical Analysis of LLMs for Countering Misinformation [4.832131829290864]
Large Language Models (LLMs) can amplify online misinformation, but also show promise in countering it.
We empirically study the capabilities of three LLMs -- ChatGPT, Gemini, and Claude -- in countering political misinformation.
Our findings suggest that models struggle to ground their responses in real news sources, and tend to prefer citing left-leaning sources.
arXiv Detail & Related papers (2025-02-28T07:12:03Z)
- Towards Robust Evaluation of Unlearning in LLMs via Data Transformations [17.927224387698903]
Large Language Models (LLMs) have proven to be a great success in a wide range of applications, from regular NLP-based use cases to AI agents.
In recent times, research in the area of Machine Unlearning (MUL) has become active.
The main idea is to force LLMs to forget (unlearn) certain information (e.g., PII) without suffering performance loss on regular tasks.
arXiv Detail & Related papers (2024-11-23T07:20:36Z)
- NewsInterview: a Dataset and a Playground to Evaluate LLMs' Ground Gap via Informational Interviews [65.35458530702442]
We focus on journalistic interviews, a domain rich in grounding communication and abundant in data.
We curate a dataset of 40,000 two-person informational interviews from NPR and CNN.
LLMs are significantly less likely than human interviewers to use acknowledgements and to pivot to higher-level questions.
arXiv Detail & Related papers (2024-11-21T01:37:38Z)
- From Deception to Detection: The Dual Roles of Large Language Models in Fake News [0.20482269513546458]
Fake news poses a significant threat to the integrity of information ecosystems and public trust.
The advent of Large Language Models (LLMs) holds considerable promise for transforming the battle against fake news.
This paper explores the capability of various LLMs in effectively combating fake news.
arXiv Detail & Related papers (2024-09-25T22:57:29Z)
- Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data [9.31120925026271]
We study inductive out-of-context reasoning (OOCR) in which LLMs infer latent information from evidence distributed across training documents.
In one experiment we finetune an LLM on a corpus consisting only of distances between an unknown city and other known cities.
While OOCR succeeds in a range of cases, we also show that it is unreliable, particularly for smaller LLMs learning complex structures.
arXiv Detail & Related papers (2024-06-20T17:55:04Z)
- LEMMA: Towards LVLM-Enhanced Multimodal Misinformation Detection with External Knowledge Augmentation [58.524237916836164]
We propose LEMMA: LVLM-Enhanced Multimodal Misinformation Detection with External Knowledge Augmentation.
Our method improves the accuracy over the top baseline LVLM by 7% and 13% on Twitter and Fakeddit datasets respectively.
arXiv Detail & Related papers (2024-02-19T08:32:27Z)
- Large Language Models: A Survey [69.72787936480394]
Large Language Models (LLMs) have drawn a lot of attention due to their strong performance on a wide range of natural language tasks.
LLMs' ability for general-purpose language understanding and generation is acquired by training billions of model parameters on massive amounts of text data.
arXiv Detail & Related papers (2024-02-09T05:37:09Z)
- Disinformation Capabilities of Large Language Models [0.564232659769944]
This paper presents a study of the disinformation capabilities of the current generation of large language models (LLMs).
We evaluated the capabilities of 10 LLMs using 20 disinformation narratives.
We conclude that LLMs are able to generate convincing news articles that agree with dangerous disinformation narratives.
arXiv Detail & Related papers (2023-11-15T10:25:30Z)
- Bad Actor, Good Advisor: Exploring the Role of Large Language Models in Fake News Detection [22.658378054986624]
Large language models (LLMs) have shown remarkable performance in various tasks.
LLMs provide desirable multi-perspective rationales but still underperform the basic SLM, fine-tuned BERT.
We propose that current LLMs may not substitute fine-tuned SLMs in fake news detection but can be a good advisor for SLMs.
arXiv Detail & Related papers (2023-09-21T16:47:30Z)
- Are Large Language Models Really Robust to Word-Level Perturbations? [68.60618778027694]
We propose a novel rational evaluation approach that leverages pre-trained reward models as diagnostic tools.
Longer conversations more fully reveal a language model's grasp of a question and its proficiency in understanding it.
Our results demonstrate that LLMs frequently exhibit vulnerability to word-level perturbations that are commonplace in daily language usage.
arXiv Detail & Related papers (2023-09-20T09:23:46Z)
- On the Risk of Misinformation Pollution with Large Language Models [127.1107824751703]
We investigate the potential misuse of modern Large Language Models (LLMs) for generating credible-sounding misinformation.
Our study reveals that LLMs can act as effective misinformation generators, leading to a significant degradation in the performance of Open-Domain Question Answering (ODQA) systems.
arXiv Detail & Related papers (2023-05-23T04:10:26Z)
- Can Large Language Models Transform Computational Social Science? [79.62471267510963]
Large Language Models (LLMs) are capable of performing many language processing tasks zero-shot (without training data).
This work provides a road map for using LLMs as Computational Social Science tools.
arXiv Detail & Related papers (2023-04-12T17:33:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.