Causal Understanding of Fake News Dissemination on Social Media
- URL: http://arxiv.org/abs/2010.10580v2
- Date: Wed, 14 Jul 2021 23:14:07 GMT
- Title: Causal Understanding of Fake News Dissemination on Social Media
- Authors: Lu Cheng, Ruocheng Guo, Kai Shu, Huan Liu
- Abstract summary: We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
- Score: 50.4854427067898
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent years have witnessed remarkable progress towards computational fake
news detection. To mitigate its negative impact, we argue that it is critical
to understand what user attributes potentially cause users to share fake news.
The key to this causal-inference problem is to identify confounders --
variables that cause spurious associations between treatments (e.g., user
attributes) and outcome (e.g., user susceptibility). In fake news
dissemination, confounders can be characterized by fake news sharing behavior
that inherently relates to user attributes and online activities. Learning such
user behavior is typically subject to selection bias in users who are
susceptible to share news on social media. Drawing on causal inference
theories, we first propose a principled approach to alleviating selection bias
in fake news dissemination. We then consider the learned unbiased fake news
sharing behavior as the surrogate confounder that can fully capture the causal
links between user attributes and user susceptibility. We theoretically and
empirically characterize the effectiveness of the proposed approach and find
that it could be useful in protecting society from the perils of fake news.
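The abstract's core idea — that learning sharing behavior is distorted by selection bias in which users' behavior is observed — can be illustrated with inverse propensity scoring, a standard causal-inference correction. The sketch below is illustrative only: the synthetic data, the known propensity, and the reweighting scheme are assumptions for demonstration, not the authors' actual method.

```python
# Minimal sketch of inverse propensity scoring (IPS) against selection
# bias. All data and the propensity model are hypothetical; the paper's
# actual approach learns unbiased sharing behavior differently.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: user attribute x, sharing label y, and an
# observation indicator o (1 if the user's sharing behavior is observed).
n = 1000
x = rng.normal(size=n)
propensity = 1.0 / (1.0 + np.exp(-x))      # more active users observed more often
o = rng.binomial(1, propensity)
y = (x + rng.normal(scale=0.5, size=n) > 0).astype(float)

# Naive estimate of the sharing rate uses only observed users -> biased
# upward, because observation and sharing share the attribute x.
naive = y[o == 1].mean()

# IPS reweights each observed user by 1/propensity, recovering an
# (approximately) unbiased estimate over the full population.
ips = (o * y / propensity).sum() / n

true_rate = y.mean()
print(f"true={true_rate:.3f} naive={naive:.3f} ips={ips:.3f}")
```

The reweighted estimate lands near the population rate, while the naive estimate inherits the bias of who gets observed — the same confounding structure the paper argues must be removed before user attributes can be linked causally to susceptibility.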
Related papers
- Unveiling the Hidden Agenda: Biases in News Reporting and Consumption [59.55900146668931]
We build a six-year dataset on the Italian vaccine debate and adopt a Bayesian latent space model to identify narrative and selection biases.
We found a nonlinear relationship between biases and engagement, with higher engagement for extreme positions.
Analysis of news consumption on Twitter reveals common audiences among news outlets with similar ideological positions.
arXiv Detail & Related papers (2023-01-14T18:58:42Z)
- Nothing Stands Alone: Relational Fake News Detection with Hypergraph Neural Networks [49.29141811578359]
We propose to leverage a hypergraph to represent group-wise interaction among news, while focusing on important news relations with its dual-level attention mechanism.
Our approach yields remarkable performance and maintains the high performance even with a small subset of labeled news data.
arXiv Detail & Related papers (2022-12-24T00:19:32Z)
- Mining User-aware Multi-Relations for Fake News Detection in Large Scale Online Social Networks [25.369320307526362]
Credible users are more likely to share trustworthy news, while untrusted users have a higher probability of spreading untrustworthy news.
We construct a dual-layer graph to extract multiple relations of news and users in social networks to derive rich information for detecting fake news.
We propose a fake news detection model named Us-DeFake, which learns the propagation features of news in the news layer and the interaction features of users in the user layer.
arXiv Detail & Related papers (2022-12-21T05:30:35Z) - Who Shares Fake News? Uncovering Insights from Social Media Users' Post Histories [0.0]
We propose that social-media users' own post histories are an underused resource for studying fake-news sharing.
We identify cues that distinguish fake-news sharers, predict those most likely to share fake news, and identify promising constructs to build interventions.
arXiv Detail & Related papers (2022-03-20T14:26:20Z)
- FakeNewsLab: Experimental Study on Biases and Pitfalls Preventing us from Distinguishing True from False News [0.2741266294612776]
This work highlights a series of pitfalls that can influence human annotators when building false news datasets.
It also challenges the common AI rationale that suggests users should read the full article before re-sharing.
arXiv Detail & Related papers (2021-10-22T12:02:16Z)
- Profiling Fake News Spreaders on Social Media through Psychological and Motivational Factors [26.942545715296983]
We study the characteristics and motivational factors of fake news spreaders on social media.
We then perform a series of experiments to determine if fake news spreaders can be found to exhibit different characteristics than other users.
arXiv Detail & Related papers (2021-08-24T20:27:38Z) - User Preference-aware Fake News Detection [61.86175081368782]
Existing fake news detection algorithms focus on mining news content for deceptive signals.
We propose a new framework, UPFD, which simultaneously captures various signals from user preferences by joint content and graph modeling.
arXiv Detail & Related papers (2021-04-25T21:19:24Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
- Leveraging Multi-Source Weak Social Supervision for Early Detection of Fake News [67.53424807783414]
Social media has greatly enabled people to participate in online activities at an unprecedented rate.
This unrestricted access also exacerbates the spread of misinformation and fake news online, which can cause confusion and chaos unless detected early.
We jointly leverage the limited amount of clean data along with weak signals from social engagements to train deep neural networks in a meta-learning framework to estimate the quality of different weak instances.
Experiments on real-world datasets demonstrate that the proposed framework outperforms state-of-the-art baselines for early detection of fake news without using any user engagements at prediction time.
arXiv Detail & Related papers (2020-04-03T18:26:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.