Predicting Viral Rumors and Vulnerable Users for Infodemic Surveillance
- URL: http://arxiv.org/abs/2401.09724v1
- Date: Thu, 18 Jan 2024 04:57:12 GMT
- Title: Predicting Viral Rumors and Vulnerable Users for Infodemic Surveillance
- Authors: Xuan Zhang, Wei Gao
- Abstract summary: We propose a novel approach to predict viral rumors and vulnerable users using a unified graph neural network model.
We pre-train network-based user embeddings and leverage a cross-attention mechanism between users and posts.
We also construct two datasets with ground-truth annotations on information virality and user vulnerability in rumor and non-rumor events.
- Score: 9.099277246096861
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the age of the infodemic, it is crucial to have tools for effectively
monitoring the spread of rampant rumors that can quickly go viral, as well as
identifying vulnerable users who may be more susceptible to spreading such
misinformation. This proactive approach allows for timely preventive measures
to be taken, mitigating the negative impact of false information on society. We
propose a novel approach to predict viral rumors and vulnerable users using a
unified graph neural network model. We pre-train network-based user embeddings
and leverage a cross-attention mechanism between users and posts, together with
a community-enhanced vulnerability propagation (CVP) method to improve user and
propagation graph representations. Furthermore, we employ two multi-task
training strategies to mitigate negative transfer effects among tasks in
different settings, enhancing the overall performance of our approach. We also
construct two datasets with ground-truth annotations on information virality
and user vulnerability in rumor and non-rumor events, which are automatically
derived from existing rumor detection datasets. Extensive evaluation results of
our joint learning model confirm its superiority over strong baselines in all
three tasks: rumor detection, virality prediction, and user vulnerability
scoring. For instance, compared to the best baselines based on the Weibo
dataset, our model makes 3.8% and 3.0% improvements on Accuracy and MacF1 for
rumor detection, and reduces mean squared error (MSE) by 23.9% and 16.5% for
virality prediction and user vulnerability scoring, respectively. Our findings
suggest that our approach effectively captures the correlation between rumor
virality and user vulnerability, leveraging this information to improve
prediction performance and provide a valuable tool for infodemic surveillance.
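The abstract does not give implementation details for the user-post cross-attention; purely as an illustration, a single-head cross-attention step in which posts attend over pre-trained user embeddings might look like the following PyTorch sketch (the CrossAttention module, dimensions, and residual fusion are assumptions, not the authors' code):

```python
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    """Minimal single-head cross-attention: posts attend over users.

    A sketch only; the paper's actual layer sizes, head count, and how
    CVP-refined user embeddings enter the layer are assumptions here.
    """
    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)  # queries from post embeddings
        self.k = nn.Linear(dim, dim)  # keys from user embeddings
        self.v = nn.Linear(dim, dim)  # values from user embeddings
        self.scale = dim ** -0.5

    def forward(self, posts: torch.Tensor, users: torch.Tensor) -> torch.Tensor:
        # posts: (num_posts, dim), users: (num_users, dim)
        attn = (self.q(posts) @ self.k(users).T) * self.scale
        weights = attn.softmax(dim=-1)          # each post's attention over users
        return posts + weights @ self.v(users)  # residual fusion of user context

posts = torch.randn(5, 64)   # toy post embeddings
users = torch.randn(12, 64)  # toy pre-trained user embeddings
fused = CrossAttention(64)(posts, users)
print(fused.shape)  # torch.Size([5, 64])
```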
Related papers
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure the effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z)
- Enhancing Pre-Trained Language Models for Vulnerability Detection via Semantic-Preserving Data Augmentation [4.374800396968465]
We propose a data augmentation technique aimed at enhancing the performance of pre-trained language models for vulnerability detection.
By incorporating our augmented dataset when fine-tuning a series of representative code pre-trained models, we achieve up to a 10.1% increase in accuracy and a 23.6% increase in F1.
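The paper's exact transformations are not listed in this summary; as one hypothetical example of a semantic-preserving code augmentation, consistent identifier renaming alters the surface form while leaving program behavior unchanged. A minimal Python sketch using the standard ast module:

```python
import ast

class RenameVariables(ast.NodeTransformer):
    """Rename variable identifiers consistently; semantics are unchanged."""
    def __init__(self, mapping: dict):
        self.mapping = mapping

    def visit_Name(self, node: ast.Name) -> ast.Name:
        node.id = self.mapping.get(node.id, node.id)
        return node

src = "total = 0\nfor item in data:\n    total += item\n"
tree = RenameVariables({"total": "acc", "item": "x"}).visit(ast.parse(src))
print(ast.unparse(tree))  # same semantics, different surface form
```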
arXiv Detail & Related papers (2024-09-30T21:44:05Z)
- Rumor Detection with a novel graph neural network approach [12.42658463552019]
We propose a new detection model that jointly learns the representations of user correlation and information propagation to detect rumors on social media.
Specifically, we leverage graph neural networks to learn the representations of user correlation from a bipartite graph.
We show that it requires a high cost for attackers to subvert the user correlation pattern, demonstrating the importance of considering user correlation for rumor detection.
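As an illustration of learning representations from a user-post bipartite graph (not the authors' architecture; the sizes, normalization, and layer choices are assumptions), one round of bipartite message passing in plain PyTorch could look like:

```python
import torch
import torch.nn as nn

# Toy bipartite adjacency: B[u, p] = 1 if user u interacted with post p.
num_users, num_posts, dim = 6, 4, 16
B = (torch.rand(num_users, num_posts) > 0.5).float()

W_u, W_p = nn.Linear(dim, dim), nn.Linear(dim, dim)
users = torch.randn(num_users, dim)
posts = torch.randn(num_posts, dim)

# One round of bipartite message passing with degree normalization.
deg_u = B.sum(1, keepdim=True).clamp(min=1)
deg_p = B.sum(0, keepdim=True).clamp(min=1).T
users_next = torch.relu(W_u(B @ posts / deg_u))    # users aggregate their posts
posts_next = torch.relu(W_p(B.T @ users / deg_p))  # posts aggregate their users
print(users_next.shape, posts_next.shape)
```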
arXiv Detail & Related papers (2024-03-24T15:59:47Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
We propose MESAS, the first defense robust against strong adaptive adversaries that is effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
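The summary does not enumerate MESAS's metrics; purely as a generic sketch of the multi-metric idea, a server might flag client updates that are statistical outliers under several simple measures at once (the metrics and threshold below are illustrative assumptions):

```python
import numpy as np

def flag_outliers(updates: np.ndarray, z: float = 2.0) -> np.ndarray:
    """Flag client updates that are outliers under multiple metrics.

    updates: (num_clients, num_params) flattened model deltas.
    Illustrative only: real defenses such as MESAS use a richer,
    jointly tested metric set.
    """
    metrics = np.stack([
        np.linalg.norm(updates, axis=1),  # update magnitude
        np.abs(updates).max(axis=1),      # peak coordinate
        updates.var(axis=1),              # spread
    ], axis=1)
    zscores = np.abs((metrics - metrics.mean(0)) / (metrics.std(0) + 1e-8))
    return (zscores > z).any(axis=1)      # suspicious on any metric

updates = np.random.randn(10, 1000)
updates[3] *= 8                               # simulate a poisoned client
print(np.nonzero(flag_outliers(updates))[0])  # likely flags client 3
```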
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- An Unbiased Transformer Source Code Learning with Semantic Vulnerability Graph [3.3598755777055374]
Current vulnerability screening techniques are ineffective at identifying novel vulnerabilities or providing developers with code vulnerability classifications.
To address these issues, we propose a joint multitask, unbiased vulnerability classifier comprising a transformer ("RoBERTa") and a graph convolution neural network (GCN).
We present a training process utilizing a semantic vulnerability graph (SVG) representation from source code, created by integrating edges from sequential flow, control flow, and data flow, as well as a novel flow dubbed Poacher Flow (PF).
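A toy sketch of the transformer-plus-GCN combination (random features stand in for RoBERTa token embeddings, and the adjacency, sizes, and pooling are assumptions rather than the paper's design):

```python
import torch
import torch.nn as nn

num_nodes, dim, num_classes = 8, 32, 2
A = (torch.rand(num_nodes, num_nodes) > 0.7).float()
A = ((A + A.T + torch.eye(num_nodes)) > 0).float()  # symmetric, self-loops
D_inv_sqrt = torch.diag(A.sum(1).pow(-0.5))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt                 # normalized adjacency

X = torch.randn(num_nodes, dim)  # stand-in for RoBERTa node features
gcn = nn.Linear(dim, dim)
head = nn.Linear(dim, num_classes)

H = torch.relu(A_hat @ gcn(X))   # one graph convolution over the SVG
logits = head(H.mean(0))         # graph-level vulnerability classification
print(logits)
```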
arXiv Detail & Related papers (2023-04-17T20:54:14Z)
- Network-Level Adversaries in Federated Learning [21.222645649379672]
We study the impact of network-level adversaries on training federated learning models.
We show that attackers dropping the network traffic from carefully selected clients can significantly decrease model accuracy on a target population.
We develop a server-side defense which mitigates the impact of our attacks by identifying and up-sampling clients likely to positively contribute towards target accuracy.
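A minimal sketch of the up-sampling idea, with hypothetical contribution scores standing in for whatever the server-side defense actually estimates:

```python
import numpy as np

def sample_clients(scores: np.ndarray, k: int, seed: int = 0) -> np.ndarray:
    """Up-sample clients with higher estimated contribution to target accuracy."""
    rng = np.random.default_rng(seed)
    probs = np.clip(scores, 1e-6, None)
    probs = probs / probs.sum()
    return rng.choice(len(scores), size=k, replace=False, p=probs)

scores = np.array([0.9, 0.1, 0.8, 0.05, 0.7, 0.6])  # toy contribution estimates
print(sample_clients(scores, k=3))                  # favors clients 0, 2, 4
```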
arXiv Detail & Related papers (2022-08-27T02:42:04Z)
- Robust Trajectory Prediction against Adversarial Attacks [84.10405251683713]
Trajectory prediction using deep neural networks (DNNs) is an essential component of autonomous driving systems.
These methods are vulnerable to adversarial attacks, leading to serious consequences such as collisions.
In this work, we identify two key ingredients to defend trajectory prediction models against adversarial attacks.
arXiv Detail & Related papers (2022-07-29T22:35:05Z)
- Modelling Adversarial Noise for Adversarial Defense [96.56200586800219]
Adversarial defenses typically focus on exploiting adversarial examples to remove adversarial noise or to train an adversarially robust target model.
Motivated by the observation that the relationship between adversarial data and natural data can help infer clean data from adversarial data and obtain the final correct prediction, we model adversarial noise to learn the transition relationship in the label space, using adversarial labels to improve adversarial accuracy.
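One way to picture the label-space transition (a guess at the mechanism, not the paper's training procedure) is a learned row-stochastic matrix mapping the posterior over adversarial labels to a posterior over natural labels:

```python
import torch
import torch.nn as nn

num_classes = 4
# Learnable transition parameters, initialized near the identity.
T = nn.Parameter(torch.eye(num_classes) + 0.01 * torch.randn(num_classes, num_classes))

def natural_posterior(adv_logits: torch.Tensor) -> torch.Tensor:
    # p(natural label) = p(adversarial label) @ row-stochastic transition matrix
    p_adv = adv_logits.softmax(dim=-1)
    T_norm = T.softmax(dim=-1)  # each row sums to 1: a valid transition matrix
    return p_adv @ T_norm

adv_logits = torch.randn(2, num_classes)
print(natural_posterior(adv_logits))  # corrected class probabilities
```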
arXiv Detail & Related papers (2021-09-21T01:13:26Z)
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z)
- Grasping Detection Network with Uncertainty Estimation for Confidence-Driven Semi-Supervised Domain Adaptation [17.16216430459064]
This paper presents an approach enabling easy domain adaptation through a novel grasping detection network with confidence-driven semi-supervised learning.
The proposed grasping detection network provides a prediction uncertainty estimation mechanism by leveraging a Feature Pyramid Network (FPN), and the mean-teacher semi-supervised learning utilizes this uncertainty information to emphasize the consistency loss only for unlabelled data with high confidence.
Our results show that the proposed network achieves a high success rate on the Cornell grasping dataset, and that for domain adaptation with very limited data, the confidence-driven mean teacher outperforms the original mean teacher and direct training by more than 10% in evaluation.
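The confidence-driven weighting can be sketched as masking the mean-teacher consistency loss to samples where the teacher is confident (the threshold, loss form, and shapes below are assumptions):

```python
import torch
import torch.nn.functional as F

def confident_consistency(student_logits, teacher_logits, threshold=0.9):
    """Mean-teacher consistency loss applied only where the teacher is confident."""
    teacher_probs = teacher_logits.softmax(dim=-1)
    confidence, _ = teacher_probs.max(dim=-1)
    mask = (confidence > threshold).float()  # keep high-confidence samples only
    per_sample = F.mse_loss(student_logits.softmax(-1),
                            teacher_probs, reduction="none").sum(-1)
    return (per_sample * mask).sum() / mask.sum().clamp(min=1)

student = torch.randn(8, 5)
teacher = torch.randn(8, 5) * 3  # sharper teacher predictions
print(confident_consistency(student, teacher))
```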
arXiv Detail & Related papers (2020-08-20T07:42:45Z)
- Adversarial Self-Supervised Contrastive Learning [62.17538130778111]
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions.
We propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples.
We present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data.
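A simplified stand-in for such an instance-level attack is an FGSM-style step that pushes one augmented view's embedding away from the other view's, confusing the sample's identity (the encoder, epsilon, and single-step attack below are assumptions, not the paper's full method):

```python
import torch
import torch.nn.functional as F

# Toy encoder standing in for the network being attacked.
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))

def instance_attack(x_view1, x_view2, eps=8 / 255):
    """Perturb view1 so its embedding moves away from view2's embedding."""
    x_adv = x_view1.clone().requires_grad_(True)
    z1 = F.normalize(encoder(x_adv), dim=-1)
    z2 = F.normalize(encoder(x_view2), dim=-1)
    agreement = (z1 * z2).sum(-1).mean()  # cosine similarity between views
    agreement.backward()
    # Step against agreement: maximal instance-identity confusion.
    return (x_adv - eps * x_adv.grad.sign()).clamp(0, 1).detach()

x1, x2 = torch.rand(4, 3, 32, 32), torch.rand(4, 3, 32, 32)
x_adv = instance_attack(x1, x2)
print((x_adv - x1).abs().max())  # perturbation bounded by eps
```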
arXiv Detail & Related papers (2020-06-13T08:24:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.