SRLF: A Stance-aware Reinforcement Learning Framework for Content-based
Rumor Detection on Social Media
- URL: http://arxiv.org/abs/2105.04098v1
- Date: Mon, 10 May 2021 03:58:34 GMT
- Title: SRLF: A Stance-aware Reinforcement Learning Framework for Content-based
Rumor Detection on Social Media
- Authors: Chunyuan Yuan, Wanhui Qian, Qianwen Ma, Wei Zhou, Songlin Hu
- Abstract summary: Early content-based methods focused on finding clues from text and user profiles for rumor detection.
Recent studies combine the stances of users' comments with news content to capture the difference between true and false rumors.
We propose a novel Stance-aware Reinforcement Learning Framework (SRLF) to select high-quality labeled stance data for model training and rumor detection.
- Score: 15.985224010346593
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid development of social media has changed people's lifestyles
and simultaneously provides an ideal place for publishing and disseminating rumors,
which severely exacerbates social panic and triggers a crisis of social trust.
Early content-based methods focused on finding clues in the text and user
profiles for rumor detection. Recent studies combine the stances of users'
comments with news content to capture the difference between true and false
rumors. Although users' stances are effective for rumor detection, the manual
labeling process is time-consuming and labor-intensive, which limits their use
in facilitating rumor detection.
In this paper, we first fine-tune a pre-trained BERT model on a small labeled
dataset and use it to annotate weak stance labels for users' comment data,
overcoming the problem mentioned above. We then propose a novel
Stance-aware Reinforcement Learning Framework (SRLF) to select high-quality
labeled stance data for model training and rumor detection. The stance
selection and rumor detection tasks are optimized simultaneously so that the
two tasks reinforce each other. We conduct experiments on two commonly used
real-world datasets. The experimental results demonstrate that our framework
significantly outperforms state-of-the-art models, confirming the effectiveness
of the proposed framework.
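To make the described pipeline concrete, below is a minimal, hypothetical PyTorch sketch of the idea in the abstract: comment features are assumed to come from a BERT model fine-tuned on a small stance-labeled set (which also supplies the weak stance labels), a policy network decides which weakly labeled comments to keep, and the selector and rumor classifier are trained jointly, with the selector rewarded by the classifier's performance. All dimensions, names, and the reward definition are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RumorClassifier(nn.Module):
    """Classifies a news item from its own features plus selected comment features."""
    def __init__(self, feat_dim=768, num_classes=2):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_classes)

    def forward(self, news_feat, comment_feats, keep_mask):
        # keep_mask: (batch, n_comments), 1 = comment selected, 0 = discarded
        pooled = (comment_feats * keep_mask.unsqueeze(-1)).sum(1)
        pooled = pooled / keep_mask.sum(1, keepdim=True).clamp(min=1.0)
        return self.fc(news_feat + pooled)

class StanceSelector(nn.Module):
    """Policy network: per-comment probability of keeping its weak stance label."""
    def __init__(self, feat_dim=768):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)

    def forward(self, comment_feats):
        return torch.sigmoid(self.score(comment_feats)).squeeze(-1)

# Toy batch standing in for real features: 4 news items, 10 comments each,
# 768-d vectors (e.g., encodings from the fine-tuned BERT that produced the
# weak stance labels). Real data loading is omitted.
news_feat = torch.randn(4, 768)
comment_feats = torch.randn(4, 10, 768)
labels = torch.randint(0, 2, (4,))

clf, selector = RumorClassifier(), StanceSelector()
optim = torch.optim.Adam(list(clf.parameters()) + list(selector.parameters()), lr=1e-3)

for step in range(50):
    keep_prob = selector(comment_feats)        # (4, 10) keep probabilities
    actions = torch.bernoulli(keep_prob)       # sampled selection (non-differentiable)
    logits = clf(news_feat, comment_feats, actions)
    cls_loss = F.cross_entropy(logits, labels)

    # REINFORCE-style update: reward the selector when the chosen subset of
    # comments helps the rumor classifier fit the labels better.
    with torch.no_grad():
        reward = -F.cross_entropy(logits, labels, reduction="none")  # (4,)
        baseline = reward.mean()
    log_prob = (actions * keep_prob.clamp(min=1e-8).log()
                + (1.0 - actions) * (1.0 - keep_prob).clamp(min=1e-8).log()).sum(1)
    rl_loss = -((reward - baseline) * log_prob).mean()

    optim.zero_grad()
    (cls_loss + rl_loss).backward()
    optim.step()
```

This sketch only illustrates how a sampled stance-data selection and a policy-gradient update can be coupled to a downstream rumor classifier; the paper's actual reward and joint objective are defined in the full text.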
Related papers
- Rumor Detection with a novel graph neural network approach [12.42658463552019]
We propose a new detection model that jointly learns the representations of user correlation and information propagation to detect rumors on social media.
Specifically, we leverage graph neural networks to learn the representations of user correlation from a bipartite graph.
We show that subverting the user correlation pattern comes at a high cost for attackers, demonstrating the importance of considering user correlation for rumor detection.
arXiv Detail & Related papers (2024-03-24T15:59:47Z)
- Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting [74.68371461260946]
SocialSense is a framework that induces a belief-centered graph on top of an existing social network, along with graph-based propagation to capture social dynamics.
Our method surpasses existing state-of-the-art in experimental evaluations for both zero-shot and supervised settings.
arXiv Detail & Related papers (2023-10-20T06:17:02Z)
- Prompt-and-Align: Prompt-Based Social Alignment for Few-Shot Fake News Detection [50.07850264495737]
"Prompt-and-Align" (P&A) is a novel prompt-based paradigm for few-shot fake news detection.
We show that P&A sets a new state of the art for few-shot fake news detection by significant margins.
arXiv Detail & Related papers (2023-09-28T13:19:43Z)
- A Unified Contrastive Transfer Framework with Propagation Structure for Boosting Low-Resource Rumor Detection [11.201348902221257]
Existing rumor detection algorithms show promising performance on yesterday's news.
Due to a lack of substantial training data and prior expert knowledge, they are poor at spotting rumors concerning unforeseen events.
We propose a unified contrastive transfer framework to detect rumors by adapting the features learned from well-resourced rumor data to low-resource events with only few-shot annotations.
arXiv Detail & Related papers (2023-04-04T03:13:03Z)
- Verifying the Robustness of Automatic Credibility Assessment [50.55687778699995]
We show that meaning-preserving changes in input text can mislead the models.
We also introduce BODEGA: a benchmark for testing both victim models and attack methods on misinformation detection tasks.
Our experimental results show that modern large language models are often more vulnerable to attacks than previous, smaller solutions.
arXiv Detail & Related papers (2023-03-14T16:11:47Z)
- Probing Spurious Correlations in Popular Event-Based Rumor Detection Benchmarks [28.550143417847373]
Open-source benchmark datasets suffer from spurious correlations, which are ignored by existing studies.
We propose event-separated rumor detection as a solution to eliminate spurious cues.
Our method outperforms existing baselines in terms of effectiveness, efficiency and generalizability.
arXiv Detail & Related papers (2022-09-19T07:11:36Z)
- Rumor Detection with Self-supervised Learning on Texts and Social Graph [101.94546286960642]
We propose contrastive self-supervised learning on heterogeneous information sources, so as to reveal their relations and characterize rumors better.
We term this framework Self-supervised Rumor Detection (SRD).
Extensive experiments on three real-world datasets validate the effectiveness of SRD for automatic rumor detection on social media.
arXiv Detail & Related papers (2022-04-19T12:10:03Z)
- Faking Fake News for Real Fake News Detection: Propaganda-loaded Training Data Generation [105.20743048379387]
We propose a novel framework for generating training examples informed by the known styles and strategies of human-authored propaganda.
Specifically, we perform self-critical sequence training guided by natural language inference to ensure the validity of the generated articles.
Our experimental results show that fake news detectors trained on PropaNews are better at detecting human-written disinformation by 3.62-7.69% in F1 score on two public datasets.
arXiv Detail & Related papers (2022-03-10T14:24:19Z)
- A Closer Look at Debiased Temporal Sentence Grounding in Videos: Dataset, Metric, and Approach [53.727460222955266]
Temporal Sentence Grounding in Videos (TSGV) aims to ground a natural language sentence in an untrimmed video.
Recent studies have found that current benchmark datasets may have obvious moment annotation biases.
We introduce a new evaluation metric "dR@n,IoU@m" that discounts the basic recall scores to alleviate the inflated evaluation caused by biased datasets.
arXiv Detail & Related papers (2022-03-10T08:58:18Z)
- Ensemble Deep Learning on Time-Series Representation of Tweets for Rumor Detection in Social Media [2.6514980627603006]
We propose an ensemble model, which performs majority-voting on a collection of predictions by deep neural networks using time-series vector representation of Twitter data for timely detection of rumors.
Experimental results show that the classification performance has been improved by 7.9% in terms of micro F1 score compared to the baselines.
arXiv Detail & Related papers (2020-04-26T23:13:31Z)
- RP-DNN: A Tweet level propagation context based deep neural networks for early rumor detection in Social Media [3.253418861583211]
Early rumor detection (ERD) on social media platforms is very challenging when only limited, incomplete and noisy information is available.
We present a novel hybrid neural network architecture, which combines a character-based bidirectional language model and stacked Long Short-Term Memory (LSTM) networks.
Our models achieve state-of-the-art (SoA) performance for detecting unseen rumors on large augmented data covering more than 12 events and 2,967 rumors.
arXiv Detail & Related papers (2020-02-28T12:44:34Z)