Let Silence Speak: Enhancing Fake News Detection with Generated Comments from Large Language Models
- URL: http://arxiv.org/abs/2405.16631v1
- Date: Sun, 26 May 2024 17:09:23 GMT
- Title: Let Silence Speak: Enhancing Fake News Detection with Generated Comments from Large Language Models
- Authors: Qiong Nan, Qiang Sheng, Juan Cao, Beizhe Hu, Danding Wang, Jintao Li,
- Abstract summary: Comments can reflect users' opinions, stances, and emotions, and help models deepen their understanding of fake news.
Due to exposure bias and users' different willingness to comment, it is not easy to obtain diverse comments in reality.
We propose GenFEND, a generated feedback-enhanced detection framework, which generates comments by prompting LLMs with diverse user profiles.
- Score: 17.612043837566134
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Fake news detection plays a crucial role in protecting social media users and maintaining a healthy news ecosystem. Among existing works, comment-based fake news detection methods are empirically shown to be promising because comments could reflect users' opinions, stances, and emotions and deepen models' understanding of fake news. Unfortunately, due to exposure bias and users' different willingness to comment, it is not easy to obtain diverse comments in reality, especially for early detection scenarios. Without obtaining the comments from the "silent" users, the perceived opinions may be incomplete, subsequently affecting news veracity judgment. In this paper, we explore the possibility of finding an alternative source of comments to guarantee the availability of diverse comments, especially those from silent users. Specifically, we propose to adopt large language models (LLMs) as a user simulator and comment generator, and design GenFEND, a generated feedback-enhanced detection framework, which generates comments by prompting LLMs with diverse user profiles and aggregating generated comments from multiple subpopulation groups. Experiments demonstrate the effectiveness of GenFEND and further analysis shows that the generated comments cover more diverse users and could even be more effective than actual comments.
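The generate-and-aggregate step described in the abstract can be sketched roughly as follows. This is not the authors' implementation: the profile fields, the grouping attribute, and the stubbed `llm_generate_comment` function are all illustrative assumptions standing in for a real LLM prompt.

```python
from collections import defaultdict

def llm_generate_comment(news: str, profile: dict) -> str:
    # Hypothetical stand-in for an LLM API call; in practice this would
    # prompt a real model with the news piece and a user-profile description.
    return f"[{profile['gender']}/{profile['age']}] comment on: {news[:30]}"

def generate_feedback(news: str, profiles: list) -> dict:
    """Prompt the (stubbed) LLM once per simulated user profile and
    aggregate the generated comments by subpopulation group."""
    groups = defaultdict(list)
    for profile in profiles:
        comment = llm_generate_comment(news, profile)
        # Group comments by a demographic attribute (here: age band),
        # mirroring the subpopulation aggregation the abstract describes.
        groups[profile["age"]].append(comment)
    return dict(groups)

profiles = [
    {"gender": "F", "age": "18-29"},
    {"gender": "M", "age": "18-29"},
    {"gender": "F", "age": "50+"},
]
feedback = generate_feedback("Breaking: miracle cure found...", profiles)
for age_group, comments in sorted(feedback.items()):
    print(age_group, len(comments))
```

The grouped comments would then be encoded and fed to the detector alongside the news content; that fusion step is omitted here.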
Related papers
- fakenewsbr: A Fake News Detection Platform for Brazilian Portuguese [0.6775616141339018]
This paper presents a comprehensive study on detecting fake news in Brazilian Portuguese.
We propose a machine learning-based approach that leverages natural language processing techniques, including TF-IDF and Word2Vec.
We develop a user-friendly web platform, fakenewsbr.com, to facilitate the verification of news articles' veracity.
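The TF-IDF features mentioned above can be illustrated with a minimal, dependency-free sketch. The toy corpus is invented for illustration; the platform's actual pipeline (and its Word2Vec features) is not shown here.

```python
import math
from collections import Counter

def tfidf_vectors(docs: list) -> list:
    """Compute TF-IDF weights for a list of tokenized documents."""
    n = len(docs)
    # Document frequency: in how many documents each term appears.
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({
            term: (count / len(doc)) * math.log(n / df[term])
            for term, count in tf.items()
        })
    return vectors

docs = [
    "shocking secret cure doctors hide".split(),
    "government confirms shocking alien cover".split(),
    "study finds moderate exercise improves health".split(),
]
vecs = tfidf_vectors(docs)
# Terms shared across documents (e.g. "shocking") receive lower weight
# than terms unique to one document (e.g. "secret").
```

These vectors would then serve as input features to a standard classifier, which is how TF-IDF is typically used in such detection pipelines.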
arXiv Detail & Related papers (2023-09-20T04:10:03Z)
- Verifying the Robustness of Automatic Credibility Assessment [79.08422736721764]
Text classification methods have been widely investigated as a way to detect content of low credibility.
In some cases insignificant changes in input text can mislead the models.
We introduce BODEGA: a benchmark for testing both victim models and attack methods on misinformation detection tasks.
arXiv Detail & Related papers (2023-03-14T16:11:47Z)
- Mitigating Human and Computer Opinion Fraud via Contrastive Learning [0.0]
We introduce a novel approach to detecting fake text reviews in collaborative filtering recommender systems.
Existing algorithms concentrate on detecting fake reviews generated by language models and ignore texts written by dishonest users.
We propose a contrastive learning-based architecture that utilizes user demographic characteristics, along with the text reviews, as additional evidence against fakes.
arXiv Detail & Related papers (2023-01-08T12:02:28Z)
- Personalized Prediction of Offensive News Comments by Considering the Characteristics of Commenters [0.0]
This study aims to predict offensive comments in order to improve the reader's experience when browsing comments.
By considering the diversity of the readers' values, the proposed method predicts offensive news comments for each reader based on the feedback from a small number of news comments that the reader rated as "offensive" in the past.
The experimental results of the proposed method show that prediction can be personalized even when the amount of readers' feedback data used in the prediction is limited.
arXiv Detail & Related papers (2022-12-26T16:19:03Z)
- Multiverse: Multilingual Evidence for Fake News Detection [71.51905606492376]
Multiverse is a new feature based on multilingual evidence that can be used for fake news detection.
The hypothesis that cross-lingual evidence can serve as a feature for fake news detection is confirmed.
arXiv Detail & Related papers (2022-11-25T18:24:17Z)
- Who Shares Fake News? Uncovering Insights from Social Media Users' Post Histories [0.0]
We propose that social-media users' own post histories are an underused resource for studying fake-news sharing.
We identify cues that distinguish fake-news sharers, predict those most likely to share fake news, and identify promising constructs to build interventions.
arXiv Detail & Related papers (2022-03-20T14:26:20Z)
- User Preference-aware Fake News Detection [61.86175081368782]
Existing fake news detection algorithms focus on mining news content for deceptive signals.
We propose a new framework, UPFD, which simultaneously captures various signals from user preferences by joint content and graph modeling.
arXiv Detail & Related papers (2021-04-25T21:19:24Z)
- Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z)
- Viable Threat on News Reading: Generating Biased News Using Natural Language Models [49.90665530780664]
We show that publicly available language models can reliably generate biased news content based on an original input news article.
We also show that a large number of high-quality biased news articles can be generated using controllable text generation.
arXiv Detail & Related papers (2020-10-05T16:55:39Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.