LLM-GAN: Construct Generative Adversarial Network Through Large Language Models For Explainable Fake News Detection
- URL: http://arxiv.org/abs/2409.01787v1
- Date: Tue, 3 Sep 2024 11:06:45 GMT
- Title: LLM-GAN: Construct Generative Adversarial Network Through Large Language Models For Explainable Fake News Detection
- Authors: Yifeng Wang, Zhouhong Gu, Siwei Zhang, Suhang Zheng, Tao Wang, Tianyu Li, Hongwei Feng, Yanghua Xiao
- Abstract summary: Large Language Models (LLMs) are known for their powerful natural language understanding and explanation generation abilities.
We propose LLM-GAN, a novel framework that utilizes prompting mechanisms to enable an LLM to act as both Generator and Detector.
Our results demonstrate LLM-GAN's effectiveness in both prediction performance and explanation quality.
- Score: 34.984605500444324
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainable fake news detection predicts the authenticity of news items with annotated explanations. Today, Large Language Models (LLMs) are known for their powerful natural language understanding and explanation generation abilities. However, applying LLMs to explainable fake news detection poses two main challenges. First, fake news appears plausible and can easily mislead LLMs, leaving them unable to understand the complex news-faking process. Second, using LLMs for this task yields both correct and incorrect explanations, which necessitates abundant human labor in the loop. In this paper, we propose LLM-GAN, a novel framework that uses prompting mechanisms to enable an LLM to act as both Generator and Detector for realistic fake news generation and detection. Our results demonstrate LLM-GAN's effectiveness in both prediction performance and explanation quality. We further showcase the integration of LLM-GAN into a cloud-native AI platform to provide a better fake news detection service in the cloud.
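As summarized above, the framework relies purely on prompting to have an LLM play both the Generator (producing hard-to-detect fake news) and the Detector (judging and explaining). Below is a minimal sketch of such an adversarial prompting loop; the prompt wording, the `call_llm` helper, and the fixed number of rounds are illustrative assumptions rather than the paper's actual implementation.

```python
# Minimal sketch of an LLM-GAN-style adversarial prompting loop.
# The prompt texts, call_llm(), and the stopping rule are illustrative
# assumptions, not the paper's actual prompts or training procedure.
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str        # "real" or "fake"
    explanation: str  # natural-language rationale

def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion API (commercial or local)."""
    raise NotImplementedError

def generate_fake(real_news: str, detector_feedback: str = "") -> str:
    # Generator role: rewrite a real article into a plausible fake,
    # optionally conditioning on the Detector's last explanation.
    prompt = (
        "You are a news Generator. Rewrite the article below into fake news "
        "that is hard to detect.\n"
        f"Article: {real_news}\n"
        f"Detector feedback to evade: {detector_feedback}"
    )
    return call_llm(prompt)

def detect(news: str) -> Verdict:
    # Detector role: classify the article and explain the decision.
    prompt = (
        "You are a news Detector. Decide whether the article is real or fake "
        "and explain why.\nArticle: " + news
    )
    raw = call_llm(prompt)
    label = "fake" if "fake" in raw.lower() else "real"
    return Verdict(label=label, explanation=raw)

def adversarial_round(real_news: str, n_rounds: int = 3) -> Verdict:
    """Generator and Detector sharpen each other over a few prompting rounds."""
    feedback = ""
    verdict = Verdict("real", "")
    for _ in range(n_rounds):
        fake = generate_fake(real_news, feedback)
        verdict = detect(fake)
        feedback = verdict.explanation  # Generator adapts to the Detector
    return verdict
```

In this sketch the Detector's explanation doubles as feedback for the Generator's next attempt, which is the GAN-style interplay the abstract alludes to.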
Related papers
- From Deception to Detection: The Dual Roles of Large Language Models in Fake News [0.20482269513546458]
Fake news poses a significant threat to the integrity of information ecosystems and public trust.
The advent of Large Language Models (LLMs) holds considerable promise for transforming the battle against fake news.
This paper explores the capability of various LLMs in effectively combating fake news.
arXiv Detail & Related papers (2024-09-25T22:57:29Z)
- MegaFake: A Theory-Driven Dataset of Fake News Generated by Large Language Models [18.708519905776562]
We analyze the creation of fake news from a social psychology perspective.
We develop a comprehensive LLM-based theoretical framework, LLM-Fake Theory.
We conduct comprehensive analyses to evaluate our MegaFake dataset.
arXiv Detail & Related papers (2024-08-19T13:27:07Z)
- Seeing Through AI's Lens: Enhancing Human Skepticism Towards LLM-Generated Fake News [0.38233569758620056]
This paper aims to elucidate simple markers that help individuals distinguish between articles penned by humans and those created by LLMs.
We then devise a metric named Entropy-Shift Authorship Signature (ESAS) based on the information theory and entropy principles.
The proposed ESAS ranks terms or entities, like POS tagging, within news articles based on their relevance in discerning article authorship.
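The summary does not give the exact ESAS formula, so the sketch below substitutes a plain mutual-information score between a term's presence and the human/LLM authorship label. It only illustrates the general idea of an entropy-based term ranking; the helper name and toy corpus are hypothetical.

```python
# Hedged sketch: rank terms by how informative they are about authorship
# (human vs. LLM). Mutual information is used here as a stand-in for the
# ESAS score; the paper's exact definition may differ.
import math
from collections import Counter

def term_authorship_scores(docs: list[tuple[str, str]]) -> dict[str, float]:
    """docs: list of (text, label) pairs with label in {"human", "llm"}."""
    n = len(docs)
    label_counts = Counter(label for _, label in docs)
    vocab = {t for text, _ in docs for t in text.lower().split()}
    scores: dict[str, float] = {}
    for term in vocab:
        joint = Counter()
        for text, label in docs:
            present = term in text.lower().split()
            joint[(present, label)] += 1
        term_count = sum(c for (p, _), c in joint.items() if p)
        mi = 0.0
        for (present, label), c in joint.items():
            p_xy = c / n
            p_x = (term_count if present else n - term_count) / n
            p_y = label_counts[label] / n
            mi += p_xy * math.log2(p_xy / (p_x * p_y))
        scores[term] = mi
    return scores

if __name__ == "__main__":
    toy = [("officials confirmed the report", "human"),
           ("sources allegedly confirmed shocking claims", "llm")]
    ranked = sorted(term_authorship_scores(toy).items(), key=lambda kv: -kv[1])
    print(ranked[:5])
```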
arXiv Detail & Related papers (2024-06-20T06:02:04Z)
- FKA-Owl: Advancing Multimodal Fake News Detection through Knowledge-Augmented LVLMs [48.32113486904612]
We propose FKA-Owl, a framework that leverages forgery-specific knowledge to augment Large Vision-Language Models (LVLMs).
Experiments on the public benchmark demonstrate that FKA-Owl achieves superior cross-domain performance compared to previous methods.
arXiv Detail & Related papers (2024-03-04T12:35:09Z)
- DELL: Generating Reactions and Explanations for LLM-Based Misinformation Detection [50.805599761583444]
Large language models are limited by challenges in factuality and hallucination, which prevents them from being employed off-the-shelf to judge the veracity of news articles.
We propose DELL, which identifies three key stages in misinformation detection where LLMs can be incorporated as part of the pipeline.
arXiv Detail & Related papers (2024-02-16T03:24:56Z)
- Supervised Knowledge Makes Large Language Models Better In-context Learners [94.89301696512776]
Large Language Models (LLMs) exhibit emerging in-context learning abilities through prompt engineering.
The challenge of improving the generalizability and factuality of LLMs in natural language understanding and question answering remains under-explored.
We propose a framework that enhances the reliability of LLMs as it: 1) generalizes to out-of-distribution data, 2) elucidates how LLMs benefit from discriminative models, and 3) minimizes hallucinations in generative tasks.
arXiv Detail & Related papers (2023-12-26T07:24:46Z)
- Let Models Speak Ciphers: Multiagent Debate through Embeddings [84.20336971784495]
We introduce CIPHER (Communicative Inter-Model Protocol Through Embedding Representation) to address this issue.
By deviating from natural language, CIPHER offers an advantage of encoding a broader spectrum of information without any modification to the model weights.
This showcases the superiority and robustness of embeddings as an alternative "language" for communication among LLMs.
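One common way to let models communicate through embeddings rather than decoded tokens is to pass a probability-weighted mixture of token embeddings instead of a single sampled token; the toy NumPy snippet below illustrates that idea under this assumption and is not CIPHER's actual implementation.

```python
# Toy illustration of exchanging an embedding "message" instead of a sampled
# token. The probability-weighted mixture shown here is one way to realize
# embedding-level communication; CIPHER's own details may differ.
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim = 5, 4
E = rng.normal(size=(vocab_size, dim))   # token embedding table

def softmax(logits: np.ndarray) -> np.ndarray:
    z = np.exp(logits - logits.max())
    return z / z.sum()

logits = rng.normal(size=vocab_size)     # speaker model's next-token logits
p = softmax(logits)

hard_message = E[np.argmax(p)]           # ordinary decoding: one token's embedding
soft_message = p @ E                     # embedding message: keeps the full distribution

print("hard:", hard_message)
print("soft:", soft_message)
```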
arXiv Detail & Related papers (2023-10-10T03:06:38Z)
- Bad Actor, Good Advisor: Exploring the Role of Large Language Models in Fake News Detection [22.658378054986624]
Large language models (LLMs) have shown remarkable performance in various tasks.
LLMs provide desirable multi-perspective rationales but still underperform the basic SLM, fine-tuned BERT.
We propose that current LLMs may not substitute fine-tuned SLMs in fake news detection but can be a good advisor for SLMs.
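A hedged sketch of the resulting "LLM as advisor" arrangement: the LLM contributes a rationale, and a small fine-tuned model makes the final call. The helper names and the simple concatenation scheme are assumptions for illustration only.

```python
# Sketch of an advisor pipeline: LLM rationale feeds a fine-tuned SLM.
# Both helpers are placeholders; names and wiring are illustrative assumptions.
def llm_rationale(article: str) -> str:
    """Placeholder: ask an LLM for a multi-perspective rationale."""
    raise NotImplementedError

def slm_predict(text: str) -> str:
    """Placeholder: a fine-tuned SLM (e.g., BERT) returning 'real' or 'fake'."""
    raise NotImplementedError

def detect_with_advisor(article: str) -> str:
    rationale = llm_rationale(article)
    # The SLM sees both the article and the LLM's rationale before deciding.
    return slm_predict(article + "\n[Rationale] " + rationale)
```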
arXiv Detail & Related papers (2023-09-21T16:47:30Z)
- Fake News Detectors are Biased against Texts Generated by Large Language Models [39.36284616311687]
The spread of fake news has emerged as a critical challenge, undermining trust and posing threats to society.
We present a novel paradigm to evaluate fake news detectors in scenarios involving both human-written and LLM-generated misinformation.
arXiv Detail & Related papers (2023-09-15T18:04:40Z)
- Red Teaming Language Model Detectors with Language Models [114.36392560711022]
Large language models (LLMs) present significant safety and ethical risks if exploited by malicious users.
Recent works have proposed algorithms to detect LLM-generated text and protect LLMs.
We study two types of attack strategies: 1) replacing certain words in an LLM's output with their synonyms given the context; 2) automatically searching for an instructional prompt to alter the writing style of the generation.
arXiv Detail & Related papers (2023-05-31T10:08:37Z)
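The first attack type described above, synonym substitution, can be sketched as follows; the tiny hand-written synonym table stands in for a context-aware synonym source, and the prompt-search attack is not shown.

```python
# Hedged sketch of a synonym-substitution attack: swap selected words in
# generated text to perturb a detector's decision. The synonym table and
# the random swap policy are illustrative assumptions.
import random

SYNONYMS = {  # toy stand-in for a context-aware synonym source
    "important": ["crucial", "significant"],
    "said": ["stated", "remarked"],
    "quickly": ["rapidly", "swiftly"],
}

def synonym_attack(text: str, swap_prob: float = 0.5, seed: int = 0) -> str:
    """Replace known words with a random synonym with probability swap_prob."""
    rng = random.Random(seed)
    out = []
    for word in text.split():
        core = word.rstrip(".,")      # keep trailing punctuation intact
        tail = word[len(core):]
        key = core.lower()
        if key in SYNONYMS and rng.random() < swap_prob:
            out.append(rng.choice(SYNONYMS[key]) + tail)
        else:
            out.append(word)
    return " ".join(out)

if __name__ == "__main__":
    sample = "The spokesperson said the findings were important."
    print(synonym_attack(sample, swap_prob=1.0))
```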