Can Large Language Models Understand Content and Propagation for
Misinformation Detection: An Empirical Study
- URL: http://arxiv.org/abs/2311.12699v1
- Date: Tue, 21 Nov 2023 16:03:51 GMT
- Title: Can Large Language Models Understand Content and Propagation for
Misinformation Detection: An Empirical Study
- Authors: Mengyang Chen, Lingwei Wei, Han Cao, Wei Zhou, Songlin Hu
- Abstract summary: Large Language Models (LLMs) have garnered significant attention for their powerful abilities in natural language understanding and reasoning.
We present a comprehensive empirical study to explore the performance of LLMs on misinformation detection tasks.
- Score: 26.023148371263012
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Models (LLMs) have garnered significant attention for their
powerful abilities in natural language understanding and reasoning. In this
paper, we present a comprehensive empirical study of the performance of LLMs on
misinformation detection tasks. This study is the first investigation of how well
multiple LLMs understand both content and propagation across social media
platforms. Our empirical studies on five misinformation detection datasets show
that LLMs with diverse prompts achieve performance comparable to existing models
in text-based misinformation detection, but exhibit notably constrained
capabilities in comprehending propagation structure in propagation-based
misinformation detection. We further design four instruction-tuning strategies to
enhance LLMs for both content- and propagation-based misinformation detection.
These strategies encourage LLMs to actively learn effective features from
multiple or hard instances and to discard irrelevant propagation structure,
thereby achieving better detection performance. Extensive experiments further
demonstrate that, under the proposed strategies, LLMs understand content and
propagation structure more effectively and achieve promising detection
performance. These findings highlight the potential of LLMs to detect
misinformation.
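
The abstract does not spell out the prompting setup, but a minimal sketch of the general idea, serializing a claim and its propagation (reply/repost) tree into a zero-shot prompt for an LLM classifier, might look like the following. The function name `query_llm`, the root node id, the prompt wording, and the tree format are illustrative assumptions, not the paper's actual templates.

```python
# Hypothetical sketch: zero-shot misinformation detection prompt that combines
# the claim text with a serialized propagation (reply/repost) tree.
# `query_llm` is a placeholder for whatever chat-completion client is used.
from typing import Callable, Dict, List, Tuple


def serialize_propagation(edges: List[Tuple[str, str]],
                          comments: Dict[str, str],
                          root: str = "root") -> str:
    """Flatten a propagation tree into indented text, one post per line."""
    children: Dict[str, List[str]] = {}
    for parent, child in edges:
        children.setdefault(parent, []).append(child)

    lines: List[str] = []

    def walk(node: str, depth: int) -> None:
        lines.append("  " * depth + f"- {comments.get(node, '(no text)')}")
        for c in children.get(node, []):
            walk(c, depth + 1)

    walk(root, 0)
    return "\n".join(lines)


def build_prompt(claim: str, propagation: str) -> str:
    # Illustrative zero-shot template, not the paper's prompt.
    return (
        "You are a fact-checking assistant.\n"
        f"Claim: {claim}\n"
        "Propagation thread (indentation shows reply depth):\n"
        f"{propagation}\n"
        "Question: Is the claim misinformation? Answer 'real' or 'fake' "
        "and give a one-sentence rationale."
    )


def detect(claim: str,
           edges: List[Tuple[str, str]],
           comments: Dict[str, str],
           query_llm: Callable[[str], str]) -> str:
    prompt = build_prompt(claim, serialize_propagation(edges, comments))
    return query_llm(prompt)
```

In practice, `query_llm` would be a thin wrapper around a chat-completion endpoint, and the returned string would be parsed into a binary label before computing detection metrics.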
Related papers
- Beyond Binary: Towards Fine-Grained LLM-Generated Text Detection via Role Recognition and Involvement Measurement [51.601916604301685]
Large language models (LLMs) generate content that can undermine trust in online discourse.
Current methods often focus on binary classification, failing to address the complexities of real-world scenarios like human-LLM collaboration.
To move beyond binary classification and address these challenges, we propose a new paradigm for detecting LLM-generated content.
arXiv Detail & Related papers (2024-10-18T08:14:10Z) - Learning on Graphs with Large Language Models (LLMs): A Deep Dive into Model Robustness [39.57155321515097]
Large Language Models (LLMs) have demonstrated remarkable performance across various natural language processing tasks.
It remains unclear whether LLMs exhibit robustness in learning on graphs.
arXiv Detail & Related papers (2024-07-16T09:05:31Z) - MMIDR: Teaching Large Language Model to Interpret Multimodal Misinformation via Knowledge Distillation [15.343028838291078]
We propose MMIDR, a framework designed to teach LLMs to provide fluent, high-quality textual explanations for their decision-making process on multimodal misinformation.
To convert multimodal misinformation into an appropriate instruction-following format, we present a data augmentation perspective and pipeline.
Furthermore, we design an efficient knowledge distillation approach to distill the capability of proprietary LLMs in explaining multimodal misinformation into open-source LLMs.
arXiv Detail & Related papers (2024-03-21T06:47:28Z) - LEMMA: Towards LVLM-Enhanced Multimodal Misinformation Detection with External Knowledge Augmentation [58.524237916836164]
We propose LEMMA: LVLM-Enhanced Multimodal Misinformation Detection with External Knowledge Augmentation.
Our method improves accuracy over the top baseline LVLM by 7% and 13% on the Twitter and Fakeddit datasets, respectively.
arXiv Detail & Related papers (2024-02-19T08:32:27Z) - Rethinking Interpretability in the Era of Large Language Models [76.1947554386879]
Large language models (LLMs) have demonstrated remarkable capabilities across a wide array of tasks.
The capability to explain in natural language allows LLMs to expand the scale and complexity of patterns that can be given to a human.
These new capabilities raise new challenges, such as hallucinated explanations and immense computational costs.
arXiv Detail & Related papers (2024-01-30T17:38:54Z) - Are Large Language Models Good Fact Checkers: A Preliminary Study [26.023148371263012]
Large Language Models (LLMs) have drawn significant attention due to their outstanding reasoning capabilities and extensive knowledge repository.
This study aims to comprehensively evaluate various LLMs in tackling specific fact-checking subtasks.
arXiv Detail & Related papers (2023-11-29T05:04:52Z) - Exploring the Cognitive Knowledge Structure of Large Language Models: An
Educational Diagnostic Assessment Approach [50.125704610228254]
Large Language Models (LLMs) have not only exhibited exceptional performance across various tasks, but also demonstrated sparks of intelligence.
Recent studies have focused on assessing their capabilities on human exams and revealed their impressive competence in different domains.
We conduct an evaluation using MoocRadar, a meticulously annotated human test dataset based on Bloom's taxonomy.
arXiv Detail & Related papers (2023-10-12T09:55:45Z) - Survey on Factuality in Large Language Models: Knowledge, Retrieval and
Domain-Specificity [61.54815512469125]
This survey addresses the crucial issue of factuality in Large Language Models (LLMs).
As LLMs find applications across diverse domains, the reliability and accuracy of their outputs become vital.
arXiv Detail & Related papers (2023-10-11T14:18:03Z) - On the Risk of Misinformation Pollution with Large Language Models [127.1107824751703]
We investigate the potential misuse of modern Large Language Models (LLMs) for generating credible-sounding misinformation.
Our study reveals that LLMs can act as effective misinformation generators, leading to a significant degradation in the performance of Open-Domain Question Answering (ODQA) systems.
arXiv Detail & Related papers (2023-05-23T04:10:26Z)